Search results for: Pablo Javier Ortega-Rodriguez
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 200

50 Mauriac Syndrome: A Rare Complication With an Easy Solution

Authors: Pablo Cid Galache, Laura Zamorano Bonilla

Abstract:

Mauriac syndrome (MS) is a rare complication of type 1 diabetes mellitus (DM1) related to low insulin concentrations; it is therefore a complication found mainly in developing countries. The main clinical features are hepatomegaly, edema, growth and puberty delay, and elevated transaminases and serum lipids. The incidence of MS is decreasing due to new types of insulin and intensive glycemic control, so it is now a rare diagnosis in Europe, being described mainly in developing countries or in settings with socioeconomic limitations that prevent adequate management of diabetes. Edema secondary to fluid retention is a rare complication of insulin treatment, especially in young patients. Its severity is variable and is mainly related to the start of proper treatment and the improvement in glycemic control after diagnosis or after periods of poor metabolic control. In most cases, edema resolves spontaneously without requiring treatment. The Pediatric Endocrinology Unit of Hospital Motril diagnosed a 14-year-old girl who had presented very poor metabolic control during the last 3 years as a consequence of the socioeconomic conditions in her country of origin, with up to 4 admissions for ketoacidosis during the last 12 months. After the family moved to Spain, the patient began follow-up at our hospital, initially presenting glycated hemoglobin figures of 11%. One week after the start of treatment, she was admitted to the emergency room due to the appearance of generalized edema and pain in the limbs. The main laboratory abnormalities included: blood glucose 225 mg/dl; HbA1c 10.8%; triglycerides 543 mg/dl; total cholesterol 339 mg/dl (LDL 225); GOT 124 U/l; GPT 89 U/l. Abdominal ultrasound showed mild hepatomegaly with no signs of ascites. The patient improved progressively, with resolution of the edema and the analytical abnormalities over the next two weeks. During admission, the family received diabetes education, achieving adequate glycemic control at discharge. Nowadays the patient has good glycemic control, with glycated hemoglobin levels around 7%.

Keywords: Mauriac, diabetes, complication, developing countries

Procedia PDF Downloads 32
49 Rheumatoid Arthritis, Periodontitis and the Subgingival Microbiome: A Circular Relationship

Authors: Isabel Lopez-Oliva, Akshay Paropkari, Shweta Saraswat, Stefan Serban, Paola de Pablo, Karim Raza, Andrew Filer, Iain Chapple, Thomas Dietrich, Melissa Grant, Purnima Kumar

Abstract:

Objective: We aimed to explicate the role of the subgingival microbiome in the causal link between rheumatoid arthritis (RA) and periodontitis (PD). Methods: Subjects with/without RA and with/without PD were randomized for treatment with scaling and root planing (SRP) or oral hygiene instructions. Subgingival biofilm, gingival crevicular fluid, and serum were collected at baseline and at 3 and 6 months post-operatively. Correlations were generated between 72 million 16S rDNA sequences, immuno-inflammatory mediators, circulating antibodies to oral microbial antigens, serum inflammatory molecules, and clinical metrics of RA. The dynamics of inter-microbial and host-microbial interactions were modeled using differential network analysis. Results: RA superseded periodontitis as a determinant of microbial composition, and DAS28 score superseded the severity of periodontitis as a driver of microbial assemblages (p=0.001, ANOSIM). RA subjects evidenced higher serum anti-PPAD (p=0.0013), anti-Pg-enolase (p=0.0031), anti-RPP3, anti-Pg-OMP, and anti-Pi-OMP (p=0.001) antibodies than non-RA controls (with and without periodontitis). Following SRP, bacterial networks anchored by IL-1b, IL-4, IL-6, IL-10, IL-13, MIP-1b, and PDGF-b underwent ≥5-fold higher rewiring, and serum antibodies to microbial antigens decreased significantly. Conclusions: Our data suggest a circular relationship between RA and PD, beginning with an RA-influenced dysbiosis within the healthy subgingival microbiome that leads to exaggerated local inflammation in periodontitis, circulating antibodies to periodontal pathogens, and a positive correlation between the severity of periodontitis and RA activity. Periodontal therapy restores host-microbial homeostasis, reduces local inflammation, and decreases circulating microbial antigens. Our data highlight the importance of integrating periodontal care into the management of RA patients.

Keywords: rheumatoid arthritis, periodontal, subgingival, DNA sequence analysis, oral microbiome

Procedia PDF Downloads 81
48 Experimental Study of Reflective Roof as a Passive Cooling Method in Homes Under the Paradigm of Appropriate Technology

Authors: Javier Ascanio Villabona, Brayan Eduardo Tarazona Romero, Camilo Leonardo Sandoval Rodriguez, Arly Dario Rincon, Omar Lengerke Perez

Abstract:

Efficient energy consumption for cooling in the housing sector is a concern in the construction and rehabilitation of houses in tropical areas, where thermal comfort is aggravated by heat gain through the roof surface. Within the group of passive cooling techniques, solar-control practices that improve comfort conditions include thermal insulation and geometric changes to the roof. Methods based on reflection and radiation, in turn, decrease heat gain by facilitating the removal of excess heat from inside a building so as to maintain a comfortable environment. Since the potential of these techniques varies across climatic zones, their application in different zones should be examined. This research is an experimental study of a prototype roof radiator as a passive cooling method for homes. Following an experimental methodology, measurements were taken on a prototype built under the paradigm of appropriate technology, with the aim of establishing the initial behavior of the internal temperature as a function of the external climate. As a starting point, a selection matrix was used to identify the typologies of passive cooling systems, model the system, and establish its constructive characteristics for subsequent implementation. This was followed by measurement of the climatic variables (outside the prototype) and microclimatic variables (inside the prototype) to build a database for analysis. As a final result, a decrease in temperature inside the chamber with respect to the outside temperature was evidenced, as well as a linear relationship between this decrease and the variations of the climatic variables.

Keywords: appropriate technology, enveloping, energy efficiency, passive cooling

Procedia PDF Downloads 76
47 Subdued Electrodermal Response to Empathic Induction Task in Intimate Partner Violence (IPV) Perpetrators

Authors: Javier Comes Fayos, Isabel Rodríguez Moreno, Sara Bressanutti, Marisol Lila, Angel Romero Martínez, Luis Moya Albiol

Abstract:

Empathy is a cognitive-affective capacity whose deterioration is associated with aggressive behaviour. Deficient affective processing is one of the predominant risk factors in men convicted of intimate partner violence (IPV perpetrators), since it severely hampers their capacity to empathize. The objective of this study is to compare the electrodermal activity (EDA) response, as an indicator of emotionality, to an empathic induction task between IPV perpetrators and men without a history of violence. The sample comprised 51 men who attended the CONTEXTO program, with sentences for gender violence under two years, and 47 men with no history of violence. Empathic induction was achieved through the visualization of 4 negative emotion-eliciting videos taken from an emotional induction battery of videos validated for the Spanish population; participants were asked to actively empathize with the video characters (pointed out beforehand). The psychophysiological recording of EDA was accomplished with the Vrije Universiteit Ambulatory Monitoring System (VU-AMS). A repeated-measures analysis was carried out with 10 intra-subject measurements ("time") and "group" (IPV perpetrators and non-violent controls) as the inter-subject factor. First, there were no significant differences between groups in baseline EDA levels. However, a significant "time" by "group" interaction was found, with IPV perpetrators exhibiting a lower EDA response than controls after the empathic induction task. These findings provide evidence of a subdued EDA response to empathic induction in IPV perpetrators with respect to men without a history of violence; the lower psychophysiological activation would be indicative of difficulties in the emotional processing and response necessary for empathic function. Consequently, the importance of addressing possible empathic difficulties in psycho-educational programs for IPV perpetrators is reinforced, with special emphasis on the affective dimension that could hinder empathic function.

Keywords: electrodermal activity, emotional induction, empathy, intimate partner violence

Procedia PDF Downloads 171
46 The Comparative Electroencephalogram Study: Children with Autistic Spectrum Disorder and Healthy Children Evaluate Classical Music in Different Ways

Authors: Galina Portnova, Kseniya Gladun

Abstract:

Our EEG experiment included 27 children with ASD (average age 6.13 years, average CARS score 32.41) and 25 healthy children (average age 6.35 years). Six types of musical stimulation were presented, including compositions by Gluck, Javier-Naida, Kenny G, Chopin, and other classical pieces. Children with autism showed an orientation reaction to the music and gave behavioral responses to the different types of music; some of them could assess the stimulation using rating scales. The participants were instructed to remain calm. Brain electrical activity was recorded using a 19-channel EEG recording device, 'Encephalan' (Russia, Taganrog). EEG epochs lasting 150 s were analyzed using the EEGLab plugin for MatLab (MathWorks Inc.). For EEG analysis we used the Fast Fourier Transform (FFT) and analyzed peak alpha frequency (PAF), correlation dimension D2, and stability of rhythms. To express the dynamics of desynchronization of the different rhythms, we calculated the envelope of the EEG signal, using the whole frequency range and a set of narrowband filters, by means of the Hilbert transform. Our data showed that healthy children exhibited similar EEG spectral changes during musical stimulation and gave similar descriptions of the feelings induced by the musical fragments. The exception was the 'Chopin. Prelude' fragment (no. 6), which induced different subjective feelings, behavioral reactions, and EEG spectral changes in children with ASD versus healthy children. The correlation dimension D2 was significantly lower in children with ASD than in healthy children during musical stimulation. The Hilbert envelope frequency was reduced in both groups during musical compositions 1, 3, 5, and 6 compared to the background; during fragments 2 and 4 (terrible), a lower Hilbert envelope frequency was observed only in children with ASD and correlated with the severity of the disease. Peak alpha frequency during this musical composition was lower than background in healthy children and, conversely, higher in children with ASD.
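The Hilbert-envelope computation described above can be sketched as follows. This is an illustrative example on synthetic data: the sampling rate, band edges, and signal are assumptions, not the study's recordings.

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

rng = np.random.default_rng(0)
fs = 250.0                      # Hz, assumed sampling rate
t = np.arange(0, 10, 1 / fs)

# Synthetic "EEG": an amplitude-modulated 10 Hz alpha rhythm plus noise
eeg = (1 + 0.5 * np.sin(2 * np.pi * 0.2 * t)) * np.sin(2 * np.pi * 10 * t)
eeg += 0.3 * rng.standard_normal(t.size)

# Narrowband filter around the alpha band (8-12 Hz)
b, a = butter(4, [8 / (fs / 2), 12 / (fs / 2)], btype="band")
alpha = filtfilt(b, a, eeg)

# Analytic signal -> instantaneous amplitude (the Hilbert envelope)
envelope = np.abs(hilbert(alpha))
print(f"mean envelope amplitude: {envelope.mean():.2f}")
```

Repeating this per narrowband filter yields the envelope dynamics of each rhythm, as the abstract describes.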

Keywords: electroencephalogram (EEG), emotional perception, ASD, musical perception, Childhood Autism Rating Scale (CARS)

Procedia PDF Downloads 259
45 Protein-Enrichment of Oilseed Meals by Triboelectrostatic Separation

Authors: Javier Perez-Vaquero, Katryn Junker, Volker Lammers, Petra Foerst

Abstract:

There is increasing urgency in accelerating the transition to sustainable food systems by including environmentally friendly technologies. Our work focuses on protein enrichment and fractionation of agricultural side streams by dry triboelectrostatic separation technology. Materials are fed into the system in particulate form, dispersed in a highly turbulent gas stream, whereby the high collision rate of particles against surfaces and other particles greatly enhances the electrostatic charge build-up over the particle surface. A subsequent step takes the charged particles to a delimited zone of the system where a highly uniform, intense electric field is applied. Because the charge polarity acquired by a particle is influenced by its chemical composition, morphology, and structure, the protein-rich and fiber-rich particles of the starting material acquire opposite charge polarities and thus follow different paths as they move through the region where the electric field is present. The output is two material fractions that differ in protein content: a fiber-rich, low-protein fraction and a high-protein, low-fiber fraction. Prior to testing, materials underwent a milling process, and some samples were stored under controlled humidity conditions; in this way, the influence of both particle size and humidity content was established. We used two oilseed meals: lupine and rapeseed. In addition to the lab-scale separator used for the experiments, the triboelectric separation process could be successfully scaled up to a mid-scale belt separator, increasing the mass feed from g/sec to kg/hour. Triboelectrostatic separation technology opens up great potential for the exploitation of so-far underutilized alternative protein sources. Agricultural side streams from cereal and oil production, which are generated in high volumes by these industries, can be further valorized by this process.

Keywords: bench-scale processing, dry separation, protein-enrichment, triboelectrostatic separation

Procedia PDF Downloads 167
44 Effect of Cooking Process on the Antioxidant Activity of Different Variants of Tomato-Based Sofrito

Authors: Ana Beltran Sanahuja, A. Valdés García, Saray Lopez De Pablo Gallego, Maria Soledad Prats Moya

Abstract:

Tomato consumption has greatly increased worldwide in the last few years, mostly due to growing demand for products like sofrito, and regular consumption of tomato-based products has been consistently associated with a reduction in the incidence of chronic degenerative diseases. Sofrito is a homemade tomato sauce typical of the Mediterranean area whose main ingredients are tomato, onion, garlic, and olive oil. There are also variations of sofrito that add other spices, which contribute not only color, flavor, smell, and aroma but also medicinal properties due to their antioxidant power. This protective effect has mainly been attributed to the predominant bioactive compounds present in sofrito, such as lycopene and other carotenoids, as well as more than 40 different polyphenols. Regarding the cooking process, it is known that it can modify the properties and the availability of nutrients in sofrito; however, there is not enough information on this issue. For this reason, the aim of the present work is to evaluate the effect of cooking on the antioxidant capacity of different variants of tomato-based sofrito combined with other spices, through analysis of total phenolic content (TPC) and evaluation of antioxidant capacity by the free-radical 2,2-diphenyl-1-picrylhydrazyl (DPPH) method. Based on the results obtained, it can be confirmed that the basic sofrito of tomato, onion, garlic, and olive oil and the sofrito with 1 g of added rosemary have the highest phenolic content, presenting greater antioxidant power than an industrial sofrito and than the variants with added thyme or higher amounts of garlic. Moreover, it was observed that tomato-based sofrito can be cooked for up to 60 minutes, since the cooking process increases the bioavailability of the carotenoids by breaking the cell walls, which weakens the binding forces of the carotenoids and increases the levels of antioxidants present, as confirmed by both the TPC and DPPH methods. It can be concluded that the cooking process of different variants of tomato-based sofrito, including spices, can improve antioxidant capacity. The synergistic effects of different antioxidants may have a greater protective effect, also increasing the digestibility of proteins. In addition, antioxidants help deactivate the free radicals involved in conditions such as atherosclerosis, aging, immune suppression, cancer, and diabetes.
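For readers unfamiliar with the DPPH assay, radical-scavenging activity is commonly expressed as percent inhibition computed from control and sample absorbances; the formula is standard, but the readings below are invented for illustration and not taken from the paper.

```python
def dpph_inhibition(abs_control: float, abs_sample: float) -> float:
    """Percent inhibition of the DPPH radical.

    Standard expression: 100 * (A_control - A_sample) / A_control,
    with absorbances typically read around 515-517 nm.
    """
    return 100.0 * (abs_control - abs_sample) / abs_control

# Hypothetical absorbance readings, for illustration only
print(round(dpph_inhibition(0.80, 0.32), 1))  # -> 60.0
```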

Keywords: antioxidants, cooking process, phenols, sofrito

Procedia PDF Downloads 117
43 Assessing the Impact of Low Carbon Technology Integration on Electricity Distribution Networks: Advancing towards Local Area Energy Planning

Authors: Javier Sandoval Bustamante, Pardis Sheikhzadeh, Vijayanarasimha Hindupur Pakka

Abstract:

In the pursuit of achieving net-zero carbon emissions, the integration of low carbon technologies into electricity distribution networks is paramount. This paper delves into the critical assessment of how the integration of low carbon technologies, such as heat pumps, electric vehicle chargers, and photovoltaic systems, impacts the infrastructure and operation of electricity distribution networks. The study employs rigorous methodologies, including power flow analysis and headroom analysis, to evaluate the feasibility and implications of integrating these technologies into existing distribution systems. Furthermore, the research utilizes Local Area Energy Planning (LAEP) methodologies to guide local authorities and distribution network operators in formulating effective plans to meet regional and national decarbonization objectives. Geospatial analysis techniques, coupled with building physics and electric energy systems modeling, are employed to develop geographic datasets aimed at informing the deployment of low carbon technologies at the local level. Drawing upon insights from the Local Energy Net Zero Accelerator (LENZA) project, a comprehensive case study illustrates the practical application of these methodologies in assessing the rollout potential of LCTs. The findings not only shed light on the technical feasibility of integrating low carbon technologies but also provide valuable insights into the broader transition towards a sustainable and electrified energy future. This paper contributes to the advancement of knowledge in power electrical engineering by providing empirical evidence and methodologies to support the integration of low carbon technologies into electricity distribution networks. The insights gained are instrumental for policymakers, utility companies, and stakeholders involved in navigating the complex challenges of energy transition and achieving long-term sustainability goals.
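As a minimal sketch of the headroom idea mentioned above (not the LENZA methodology itself), spare network capacity can be estimated as rated capacity minus forecast peak demand after LCT uptake. All names and figures below are invented for illustration.

```python
def headroom_kw(rated_kw: float, peak_kw: float, lct_loads_kw: list[float]) -> float:
    """Spare substation capacity after adding low-carbon-technology loads.

    Illustrative definition: rated capacity minus (existing peak demand plus
    the coincident demand of the new LCT loads). A negative result suggests
    reinforcement is needed.
    """
    return rated_kw - (peak_kw + sum(lct_loads_kw))

# Hypothetical 500 kW substation with a 320 kW peak, plus 40 heat pumps
# at 3 kW each and 25 EV chargers at 7 kW each (all figures invented):
heat_pumps = [3.0] * 40
ev_chargers = [7.0] * 25
print(headroom_kw(500.0, 320.0, heat_pumps + ev_chargers))  # -> -115.0
```

In practice, headroom studies would also apply diversity factors and time-of-day coincidence rather than simple summation.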

Keywords: energy planning, energy systems, digital twins, power flow analysis, headroom analysis

Procedia PDF Downloads 21
42 Modeling Biomass and Biodiversity across Environmental and Management Gradients in Temperate Grasslands with Deep Learning and Sentinel-1 and -2

Authors: Javier Muro, Anja Linstadter, Florian Manner, Lisa Schwarz, Stephan Wollauer, Paul Magdon, Gohar Ghazaryan, Olena Dubovyk

Abstract:

Monitoring the trade-off between biomass production and biodiversity in grasslands is critical to evaluate the effects of management practices across environmental gradients. New generations of remote sensing sensors and machine learning approaches can model grasslands' characteristics with varying accuracies. However, studies often fail to cover a sufficiently broad range of environmental conditions, and evidence suggests that prediction models might be case-specific. In this study, biomass production and biodiversity indices (species richness and Fisher's α) are modeled in 150 grassland plots at three sites across Germany. These sites represent a North-South gradient and are characterized by distinct soil types, topographic properties, climatic conditions, and management intensities. The predictors used are derived from Sentinel-1 and -2 and a set of topoedaphic variables. The transferability of the models is tested by training and validating at different sites. The performance of feed-forward deep neural networks (DNN) is compared to a random forest algorithm. While biomass predictions across gradients and sites were acceptable (r² ≈ 0.5), predictions of biodiversity indices were poor (r² ≈ 0.14). DNN showed higher generalization capacity than random forest when predicting biomass across gradients and sites (relative root mean squared error of 0.5 for DNN vs. 0.85 for random forest). DNN also achieved high performance when using the Sentinel-2 surface reflectance data rather than different combinations of spectral indices, Sentinel-1 data, or topoedaphic variables, simplifying dimensionality. This study demonstrates the necessity of training biomass and biodiversity models on a broad range of environmental conditions and of ensuring spatial independence in order to obtain realistic, transferable models in which plot-level information can be upscaled to the landscape scale.
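The model comparison described above can be sketched with off-the-shelf tools. The synthetic data, feature count, and hyperparameters below are assumptions for illustration, not the study's configuration.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import r2_score

rng = np.random.default_rng(0)
X = rng.normal(size=(150, 12))   # stand-in for Sentinel bands + topoedaphic vars
y = X[:, 0] * 2 + X[:, 1] + rng.normal(scale=0.5, size=150)  # stand-in "biomass"

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Random forest baseline vs. a small feed-forward network
rf = RandomForestRegressor(n_estimators=200, random_state=0).fit(X_tr, y_tr)
dnn = MLPRegressor(hidden_layer_sizes=(32, 16), max_iter=2000,
                   random_state=0).fit(X_tr, y_tr)

print("RF  r2:", round(r2_score(y_te, rf.predict(X_te)), 2))
print("DNN r2:", round(r2_score(y_te, dnn.predict(X_te)), 2))
```

The study's transferability test corresponds to fitting on plots from one site and scoring on another, rather than a random split as above.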

Keywords: ecosystem services, grassland management, machine learning, remote sensing

Procedia PDF Downloads 191
41 Use of Shipping Containers as Office Buildings in Brazil: Thermal and Energy Performance for Different Constructive Options and Climate Zones

Authors: Lucas Caldas, Pablo Paulse, Karla Hora

Abstract:

Shipping containers are present in different Brazilian cities, used first for transportation purposes but becoming waste materials and an environmental burden at the end of their life cycle. In the last decade, some Brazilian buildings made partly or totally from shipping containers have started to appear, most of them for commercial and office uses. Although reusing a container for a building seems a sustainable solution, it is very important to measure the thermal and energy aspects of such use. In this context, this study evaluates the thermal and energy performance of an office building made entirely from a 12-meter-long High Cube 40' shipping container in different Brazilian bioclimatic zones. Four constructive solutions commonly used in Brazil were chosen: (1) container without any covering; (2) with internally insulated drywall; (3) with external fiber cement boards; (4) with both drywall and fiber cement boards. DesignBuilder with EnergyPlus was used for a computational simulation over 8760 hours, considering EnergyPlus Weather File (EPW) data for six Brazilian capital cities: Curitiba, Sao Paulo, Brasilia, Campo Grande, Teresina, and Rio de Janeiro. A split air conditioning unit was adopted for the conditioned area, with the cooling setpoint fixed at 25°C and a coefficient of performance (CoP) of 3.3. Three exterior-layer solar absorptances were verified: 0.3, 0.6, and 0.9. The building in Teresina presented the highest energy consumption and the one in Curitiba the lowest, with a wide range of differences between results. The constructive option combining external fiber cement boards and drywall presented the best results, although the differences were not significant compared to the solution using just drywall. The choice of absorptance had a great impact on energy consumption, mainly for containers without any covering and in the hottest cities: Teresina, Rio de Janeiro, and Campo Grande. The main contribution of this study is the discussion of constructive aspects as design guidelines for more energy-efficient container buildings, considering local climate differences, helping to disseminate this cleaner constructive practice in the Brazilian building sector.
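As a back-of-the-envelope illustration of how the stated CoP of 3.3 converts a thermal cooling load into electricity demand (the load figure below is invented, not a result from the paper):

```python
def cooling_electricity_kwh(cooling_load_kwh: float, cop: float = 3.3) -> float:
    """Electricity a split unit draws to remove a given thermal cooling load."""
    return cooling_load_kwh / cop

# Hypothetical annual cooling load of 3300 kWh for one container office:
print(round(cooling_electricity_kwh(3300.0), 1))  # -> 1000.0 kWh of electricity
```

Higher roof absorptance raises the cooling load term, which is why the uncovered container in hot cities is most sensitive to that choice.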

Keywords: bioclimatic zones, Brazil, shipping containers, thermal and energy performance

Procedia PDF Downloads 145
40 Measuring the Effect of a Music Therapy Intervention in a Neonatal Intensive Care Unit in Spain

Authors: Pablo González Álvarez, Anna Vinaixa Vergés, Paula Sol Ventura, Paula Fernández, Mercè Redorta, Gemma Ginovart Galiana, Maria Méndez Hernández

Abstract:

Context: The use of music therapy is gaining popularity worldwide, and it has shown positive effects in neonatology. Hospital Germans Trias i Pujol has recently established a music therapy unit and initiated a project in their neonatal intensive care unit (NICU). Research Aim: The aim of this study is to measure the effect of a music therapy intervention in the NICU of Hospital Germans Trias i Pujol in Spain. Methodology: The study will be an observational analytical case-control study. All newborns admitted to the neonatology unit, both term and preterm, and their parents will be offered a session of music therapy. Data will be collected from families who receive at least two music therapy sessions. Maternal and paternal anxiety levels will be measured through a pre- and post-intervention test. Findings: The study aims to demonstrate the benefits and acceptance of music therapy by patients, parents, and healthcare workers in the neonatal unit. The findings are expected to show a reduction in maternal and paternal anxiety levels following the music therapy sessions. Theoretical Importance: This study contributes to the growing body of literature on the effectiveness of music therapy in neonatal care. It will provide evidence of the acceptance and potential benefits of music therapy in reducing anxiety levels in both parents and babies in the NICU setting. Data Collection: Data will be collected from families who receive at least two music therapy sessions. This will include pre- and post-intervention test results to measure anxiety levels. Analysis Procedures: The collected data will be analyzed using appropriate statistical methods to determine the impact of music therapy on reducing anxiety levels in parents. Questions Addressed: - What is the effect of music therapy on maternal anxiety levels? - What is the effect of music therapy on paternal anxiety levels? 
- What are the acceptability and perceived benefits of music therapy among patients and healthcare workers in the NICU? Conclusion: The study aims to provide evidence supporting the value of music therapy in the neonatal intensive care unit. It seeks to demonstrate the positive effect of music therapy on reducing anxiety levels among parents.
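A pre/post anxiety comparison of this kind could, for instance, be analyzed with a paired test. The scores below are invented for illustration; the abstract does not specify this particular statistical method.

```python
import numpy as np
from scipy.stats import ttest_rel

# Invented anxiety-questionnaire scores for 10 parents before and after
# two music therapy sessions (illustration only, not study data)
pre = np.array([52, 47, 60, 55, 49, 58, 61, 45, 50, 57])
post = np.array([44, 45, 51, 50, 46, 49, 55, 43, 48, 50])

# Paired test: each parent serves as their own control
t_stat, p_value = ttest_rel(pre, post)
print(f"mean drop = {np.mean(pre - post):.1f} points, p = {p_value:.4f}")
```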

Keywords: neonatology, music therapy, neonatal intensive care unit, babies, parents

Procedia PDF Downloads 28
39 Characteristics of Acute Bacterial Prostatitis in Elderly Patients Attended in the Emergency Department

Authors: Carles Ferré, Ferran Llopis, Javier Jacob, Jordi Giol, Xavier Palom, Ignasi Bardés

Abstract:

Objective: To analyze the characteristics of acute bacterial prostatitis (ABP) in elderly patients attended in the emergency department (ED). Methods: Observational cohort study with prospective follow-up including patients with ABP presenting to the ED from January to December 2012. Data were collected on demographic variables, comorbidities, clinical and microbiological findings, treatment, outcome, and reconsultation at 30 days of follow-up. Findings were compared between patients ≥ 75 years (study group) and < 75 years (control group). Results: During the study period, 241 episodes of ABP were included for analysis. Mean age was 62.9 ± 16 years, and 64 patients (26.5%) were ≥ 75 years old. A history of prostate adenoma was reported in 54 cases (22.4%), diabetes mellitus in 47 patients (19.5%), and prior manipulation of the lower urinary tract in 40 (17%). Mean symptom duration was 3.38 ± 4.04 days; voiding symptoms were present in 176 cases (73%) and fever in 154 (64%). Of 216 urine cultures, 128 (59%) were positive, as were 24 (17.6%) of 136 blood cultures. Escherichia coli was the main pathogen, found in 58.6% of urine cultures and 64% of blood cultures (with strains resistant to fluoroquinolones in 27.7%, cotrimoxazole in 22.9%, and amoxicillin/clavulanate in 27.7% of cases). Seventy patients (29%) were admitted to the hospital, and 3 died. At 30-day follow-up, 29 patients (12%) returned to the ED. In the bivariate analysis, previous manipulation of the urinary tract, history of cancer, previous antibiotic treatment, E. coli strains resistant to amoxicillin-clavulanate and ciprofloxacin or producing extended-spectrum beta-lactamase (ESBL), renal impairment, and admission to the hospital were significantly more frequent (p < 0.05) among patients ≥ 75 years compared to those younger than 75 years. Conclusions: Ciprofloxacin and amoxicillin-clavulanate appear not to be good options for the empiric treatment of ABP in patients ≥ 75 years given the drug-resistance pattern in our series, and the proportion of ESBL-producing strains of E. coli should be taken into account. While awaiting bacterial identification and antibiogram results from urine and/or blood cultures, inpatient treatment should be considered in older patients with ABP.

Keywords: acute bacterial prostatitis, antibiotic resistance, elderly patients, emergency

Procedia PDF Downloads 348
38 DEMs: A Multivariate Comparison Approach

Authors: Juan Francisco Reinoso Gordo, Francisco Javier Ariza-López, José Rodríguez Avi, Domingo Barrera Rosillo

Abstract:

The evaluation of the quality of a data product is based on comparing the product with a reference of greater accuracy. In the case of DEM data products, quality assessment usually focuses on positional accuracy, and few studies consider other terrain characteristics, such as slope and aspect. The proposal made here consists of evaluating the similarity of two DEMs (a product and a reference) through joint analysis of the distribution functions of the variables of interest, for example elevations, slopes, and aspects. This is a multivariate approach that focuses on distribution functions rather than on single parameters such as mean values or dispersions (e.g., root mean squared error or variance), and is considered more holistic. The use of the Kolmogorov-Smirnov test is proposed due to its non-parametric nature, since the distributions of the variables of interest cannot always be adequately modeled by parametric models (e.g., the normal distribution). In addition, its application to the multivariate case is carried out jointly by means of a single test on the convolution of the distribution functions of the variables considered, which avoids corrections such as Bonferroni when several statistical hypothesis tests are carried out together. In this work, two DEM products generated by the National Geographic Institute of Spain have been considered: DEM02, with a resolution of 2x2 meters, and DEM05, with a resolution of 5x5 meters. DEM02 is taken as the reference and DEM05 as the product to be evaluated. In addition, the slope and aspect derived models have been calculated by GIS operations on the two DEM datasets. Through sample simulation processes, the adequate behavior of the Kolmogorov-Smirnov statistical test has been verified when the null hypothesis is true, which allows calibrating the value of the statistic for the desired significance level (e.g., 5%). Once calibrated, the same process can be applied to compare the similarity of different DEM datasets (e.g., DEM05 versus DEM02). In summary, an innovative alternative for the comparison of DEM datasets, based on a multivariate non-parametric perspective, has been proposed by means of a single Kolmogorov-Smirnov test. This new approach could be extended to other DEM features of interest (e.g., curvature) and to more than three variables.
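The two-sample Kolmogorov-Smirnov comparison at the core of the proposal can be sketched as follows. The synthetic elevation samples are assumptions; in real use the values would be drawn from the DEM rasters, and the multivariate version would operate on the convolution of the variables' distributions.

```python
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(42)
ref_elev = rng.normal(loc=500.0, scale=80.0, size=5000)   # reference DEM sample
prod_elev = ref_elev + rng.normal(scale=2.0, size=5000)   # product with small error

# One two-sample KS test compares the full elevation distributions;
# a small D statistic means the product's distribution tracks the reference
stat, p_value = ks_2samp(ref_elev, prod_elev)
print(f"D = {stat:.4f}")
```

Testing elevation, slope, and aspect jointly through a single statistic is what lets the method skip Bonferroni-style corrections.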

Keywords: data quality, DEM, Kolmogorov-Smirnov test, multivariate DEM comparison

Procedia PDF Downloads 92
37 Left Atrial Appendage Occlusion vs Oral Anticoagulants in Atrial Fibrillation and Coronary Stenting: The DESAFIO Registry

Authors: José Ramón López-Mínguez, Estrella Suárez-Corchuelo, Sergio López-Tejero, Luis Nombela-Franco, Xavier Freixa-Rofastes, Guillermo Bastos-Fernández, Xavier Millán-Álvarez, Raúl Moreno-Gómez, José Antonio Fernández-Díaz, Ignacio Amat-Santos, Tomás Benito-González, Fernando Alfonso-Manterola, Pablo Salinas-Sanguino, Pedro Cepas-Guillén, Dabit Arzamendi, Ignacio Cruz-González, Juan Manuel Nogales-Asensio

Abstract:

Background and objectives: The treatment of patients with non-valvular atrial fibrillation (NVAF) who need coronary stenting is challenging. The objective of the study was to determine whether left atrial appendage occlusion (LAAO) could be a feasible option and benefit these patients. To this end, we studied the impact of LAAO plus antiplatelet drugs vs oral anticoagulants (OAC) (including direct OAC) plus antiplatelet drugs on these patients’ long-term outcomes. Methods: The results of 207 consecutive patients with NVAF who underwent coronary stenting were analyzed. A total of 146 patients were treated with OAC (75 with acenocoumarol, 71 with direct OAC), while 61 underwent LAAO. The median follow-up was 35 months. Patients also received antiplatelet therapy as prescribed by their cardiologist. The study received the proper ethical oversight. Results: Age (mean 75.7 years) and past medical history of stroke were similar in both groups. However, the LAAO group had more unfavorable baseline characteristics: more history of coronary artery disease and significant bleeding (BARC ≥ 2), and higher CHA2DS2-VASc and HAS-BLED scores. The rates of major adverse events (death, stroke/transient ischemic events, major bleeding) and major cardiovascular events (cardiac death, stroke/transient ischemic attack, and myocardial infarction) were significantly higher in the OAC group compared to the LAAO group: 19.75% vs 9.06% (HR, 2.18; P = .008) and 6.37% vs 1.91% (HR, 3.34; P = .037), respectively. Conclusions: In patients with NVAF undergoing coronary stenting, LAAO plus antiplatelet therapy produced better long-term outcomes than OAC plus antiplatelet therapy, despite the unfavorable baseline characteristics of the LAAO group.

Keywords: stents, atrial fibrillation, anticoagulants, left atrial appendage occlusion

Procedia PDF Downloads 36
36 Utilizing Artificial Intelligence to Predict Postoperative Atrial Fibrillation in Non-Cardiac Transplant

Authors: Alexander Heckman, Rohan Goswami, Zachi Attia, Paul Friedman, Peter Noseworthy, Demilade Adedinsewo, Pablo Moreno-Franco, Rickey Carter, Tathagat Narula

Abstract:

Background: Postoperative atrial fibrillation (POAF) is associated with adverse health consequences, higher costs, and longer hospital stays. Utilizing existing predictive models that rely on clinical variables and circulating biomarkers, multiple societies have published recommendations on the treatment and prevention of POAF. Although reasonably practical, there is room for improvement and automation to help individualize treatment strategies and reduce associated complications. Methods and Results: In this retrospective cohort study of solid organ transplant recipients, we evaluated the diagnostic utility of a previously developed AI-based ECG prediction for silent AF on the development of POAF within 30 days of transplant. A total of 2261 non-cardiac transplant patients without a preexisting diagnosis of AF were found to have a 5.8% (133/2261) incidence of POAF. While there were no apparent sex differences in POAF incidence (5.8% males vs. 6.0% females, p=.80), there were differences by race and ethnicity (p<0.001 and 0.035, respectively). The incidence in white transplant patients was 7.2% (117/1628), whereas the incidence in black patients was 1.4% (6/430). Lung transplant recipients had the highest incidence of postoperative AF (17.4%, 37/213), followed by liver (5.6%, 56/1002) and kidney (3.6%, 32/895) recipients. The AUROC in the sample was 0.62 (95% CI: 0.58-0.67). The relatively low discrimination may result from undiagnosed AF in the sample. In particular, 1,177 patients had at least one AI-ECG screen for AF pre-transplant above 0.10, a value slightly higher than the published threshold of 0.08. The incidence of POAF in the 1104 patients without an elevated prediction pre-transplant was lower (3.7% vs. 8.0%; p<0.001). While this supported the hypothesis that potentially undiagnosed AF may have contributed to the diagnosis of POAF, the utility of the existing AI-ECG screening algorithm remained modest.
When the prediction for POAF was made using the first postoperative ECG in the sample without an elevated screen pre-transplant (n=1084, on account of n=20 missing postoperative ECGs), the AUROC was 0.66 (95% CI: 0.57-0.75). While this discrimination is relatively low, at a threshold of 0.08 the AI-ECG algorithm had a 98% (95% CI: 97-99%) negative predictive value at a sensitivity of 66% (95% CI: 49-80%). Conclusions: This study's principal finding is that the incidence of POAF is rare and that a considerable fraction of POAF cases may be latent and undiagnosed. The high negative predictive value of AI-ECG screening suggests utility for prioritizing the monitoring and evaluation of transplant patients with a positive AI-ECG screen. Further development and refinement of a post-transplant-specific algorithm may be warranted to further enhance the diagnostic yield of ECG-based screening.
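The negative predictive value and sensitivity quoted above follow directly from the confusion matrix at the screening threshold. The sketch below shows the calculation; the probabilities and labels are made up for illustration, and only the 0.08 threshold comes from the abstract.

```python
def screen_metrics(probs, labels, threshold=0.08):
    """Return (sensitivity, npv) for a probability screen vs binary labels (1 = POAF)."""
    tp = sum(1 for p, y in zip(probs, labels) if p >= threshold and y == 1)
    fn = sum(1 for p, y in zip(probs, labels) if p < threshold and y == 1)
    tn = sum(1 for p, y in zip(probs, labels) if p < threshold and y == 0)
    fp = sum(1 for p, y in zip(probs, labels) if p >= threshold and y == 0)
    sensitivity = tp / (tp + fn) if (tp + fn) else float("nan")
    npv = tn / (tn + fn) if (tn + fn) else float("nan")  # negatives that are truly AF-free
    return sensitivity, npv

# Hypothetical AI-ECG probabilities and observed POAF outcomes.
probs = [0.02, 0.15, 0.07, 0.30, 0.01, 0.09, 0.04, 0.20]
labels = [0, 1, 1, 1, 0, 0, 0, 1]
sens, npv = screen_metrics(probs, labels)
```

A high NPV at a low threshold is exactly the property that makes a screen useful for ruling out patients from intensified monitoring.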

Keywords: artificial intelligence, atrial fibrillation, cardiology, transplant, medicine, ECG, machine learning

Procedia PDF Downloads 105
35 Impact of the Dog-Technic for D1-D4 and Longitudinal Stroke Technique for Diaphragm on Peak Expiratory Flow (PEF) in Asthmatic Patients

Authors: Victoria Eugenia Garnacho-Garnacho, Elena Sonsoles Rodriguez-Lopez, Raquel Delgado-Delgado, Alvaro Otero-Campos, Jesus Guodemar-Perez, Angelo Michelle Vagali, Juan Pablo Hervas-Perez

Abstract:

Asthma is a heterogeneous disease that has traditionally been managed with drug treatment. The osteopathic treatment we propose aims to determine whether a dorsal manipulation (Dog Technic D1-D4) combined with a diaphragm technique (Longitudinal Stroke) produces changes in forced expiratory flow on spirometry; in particular, whether Peak Flow volumes increase post-intervention and after effort, and whether applying the two techniques together is more effective than applying only the Longitudinal Stroke. We also assessed whether this type of treatment has repercussions on dyspnea, a very common symptom in asthma, and, finally, whether facilitated vertebral pain decreased after a manipulation. Methods—Participants were recruited among students and professors of the University, aged 18-65; patients (n = 18) were randomly assigned to one of two groups: group 1 (Longitudinal Stroke and Dog Technic dorsal manipulation) and group 2 (diaphragmatic technique, Longitudinal Stroke only). The statistical analysis compared the main indicator of airway obstruction, PEF (peak expiratory flow), in various situations using the Datospir Peak-10 peak flow meter. Measurements were carried out in four phases: at rest, after the stress test, after the treatment, and after treatment plus the stress test. After each stress test, the level of dyspnea of each patient was evaluated with the Borg scale, regardless of group. In group 1, spinous process pain was additionally measured with an algometer before and after the manipulation. All measurements were taken at one minute. Results—12 group 1 (Dog Technic and Longitudinal Stroke) patients responded positively to treatment; PEF increased by 5.1% post-treatment and by 6.1% post-treatment plus effort. The results of the Borg scale, with which we measured the level of dyspnea, were positive: 54.95% of patients noted an improvement in breathing.
In addition, comparison of the means of both groups confirmed that group 1, in which the two techniques were applied, was 34.05% more effective than group 2, in which only one was applied. After the manipulation, pain fell in 38% of the cases. Conclusions—The impact of the Dog-Technic technique for D1-D4 and the Longitudinal Stroke technique for the diaphragm on peak expiratory flow (PEF) volumes in asthmatic patients was positive: PEF changed post-intervention and post-treatment plus effort, and the group receiving both techniques outperformed the group in which only one technique was applied. Furthermore, this type of treatment decreased facilitated vertebral pain and was effective in improving dyspnea and the general well-being of the patient.

Keywords: ANS, asthma, manipulation, manual therapy, osteopathy

Procedia PDF Downloads 267
34 The Automatisation of Dictionary-Based Annotation in a Parallel Corpus of Old English

Authors: Ana Elvira Ojanguren Lopez, Javier Martin Arista

Abstract:

The aims of this paper are to present the automatisation procedure adopted in the implementation of a parallel corpus of Old English, as well as to assess the progress of automatisation with respect to tagging, annotation, and lemmatisation. The corpus consists of an aligned parallel text with word-for-word Old English-English comparison that provides the Old English segment with inflectional form tagging (gloss, lemma, category, and inflection) and lemma annotation (spelling, meaning, inflectional class, paradigm, word-formation and secondary sources). This parallel corpus is intended to fill a gap in the field of Old English, in which no parallel and/or lemmatised corpora are available, while the average amount of corpus annotation is low. With this background, this presentation has two main parts. The first part, which focuses on tagging and annotation, selects the layouts and fields of lexical databases that are relevant for these tasks. Most information used for the annotation of the corpus can be retrieved from the lexical and morphological database Nerthus and the database of secondary sources Freya. These are the sources of linguistic and metalinguistic information that will be used for the annotation of the lemmas of the corpus, including morphological and semantic aspects as well as the references to the secondary sources that deal with the lemmas in question. Although substantially adapted and re-interpreted, the lemmatised part of these databases draws on the standard dictionaries of Old English, including The Student's Dictionary of Anglo-Saxon, An Anglo-Saxon Dictionary, and A Concise Anglo-Saxon Dictionary. The second part of this paper deals with lemmatisation. It presents the lemmatiser Norna, which has been implemented on FileMaker software. It is based on a concordance and an index to the Dictionary of Old English Corpus, which comprises around three thousand texts and three million words.
In its present state, the lemmatiser Norna can assign a lemma to around 80% of textual forms on an automatic basis, by searching the index and the concordance for prefixes, stems and inflectional endings. The conclusions of this presentation insist on the limits of the automatisation of dictionary-based annotation in a parallel corpus. While the tagging and annotation are largely automatic even at the present stage, the automatisation of alignment is pending for future research. Lemmatisation and morphological tagging are expected to be fully automatic in the near future, once the database of secondary sources Freya and the lemmatiser Norna have been completed.
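The prefix/stem/ending lookup Norna performs can be sketched roughly as follows. Norna itself runs on FileMaker against the Dictionary of Old English Corpus index; the index, prefixes, and endings below are invented placeholders for illustration, not real Old English data.

```python
# Hypothetical index: lemma -> attested stem spellings.
INDEX = {
    "lufian": ["lufi", "lufo"],
    "cyning": ["cyning", "kyning"],
}
ENDINGS = ["ast", "ath", "es", "as", "an", "e", ""]  # invented inflectional endings
PREFIXES = ["ge", ""]                                # invented prefixes

def assign_lemma(form):
    """Strip a prefix and an ending, then look the remaining stem up in the index."""
    for prefix in PREFIXES:
        if not form.startswith(prefix):
            continue
        bare = form[len(prefix):]
        for ending in ENDINGS:
            if ending and not bare.endswith(ending):
                continue
            stem = bare[: len(bare) - len(ending)] if ending else bare
            for lemma, stems in INDEX.items():
                if stem in stems:
                    return lemma
    return None  # unmatched forms are left for manual lemmatisation
```

The residual `None` cases correspond to the roughly 20% of textual forms that, in the current state, cannot be lemmatised automatically.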

Keywords: corpus linguistics, historical linguistics, Old English, parallel corpus

Procedia PDF Downloads 180
33 Effects of an Envious Experience on Schadenfreude and Economic Decision Making

Authors: Pablo Reyes, Vanessa Riveros Fiallo, Cesar Acevedo, Camila Castellanos, Catalina Moncaleano, Maria F. Parra, Laura Colmenares

Abstract:

Social emotions are physiological, cognitive, and behavioral phenomena that intervene in the mechanisms by which individuals adapt to their context. They are mediated by interpersonal relationships and language. Such emotions are subdivided into moral and comparison emotions. The present research emphasizes two comparative emotions: envy and Schadenfreude. Envy arises when a person lacks a quality, possession, or achievement that is superior in someone else. Schadenfreude (SC) expresses the pleasure someone experiences at the misfortune of another. The relationship between both emotions has been questioned before: there are reports showing that envy increases and modulates the SC response, while other documents suggest that envy causes the SC response. However, the topic has so far been approached methodologically through self-reports and hypothetical scenarios. Given this problem, the social neuroscience framework provides an alternative, demonstrating that social emotions have neurophysiological correlates that can be measured. This is relevant when studying socially reprehensible emotions such as envy or SC: when tested, individuals tend to report low ratings due to social desirability. In this study, we drew up a research protocol and report progress on its piloting. The aim is to evaluate the effect that feeling envy and Schadenfreude has on decision-making and on cooperative behavior in an economic game. To that end, we propose an experimental model that provokes envy by having participants play games against an unknown opponent. The game consists of answering general knowledge questions. The difficulty level of the questions and the stranger's facial response were manipulated in order to generate an ecological comparison framework able to elicit both envy and SC.
During the game, an electromyography recording will be made of two facial muscles that have been associated with the expression of envy and SC. One of the innovations of the current proposal is measuring the effect that these emotions have on a specific behavior. To that end, we evaluated the effect of each condition on the dictator economic game. The main intention is to evaluate whether a social emotion can modulate actions that the literature has associated with social norms. The results of a pilot model (without electromyography recording or self-report) show an association between envy and SC, such that the greater the sense of envy individuals report, the greater the chance of experiencing SC. The results of the economic game show a slight tendency towards profit-maximizing decisions. We expect that when real money is used, this behavior will be strengthened and will also correlate with the electromyography responses.

Keywords: envy, schadenfreude, electromyography, economic games

Procedia PDF Downloads 349
32 Automatic Aggregation and Embedding of Microservices for Optimized Deployments

Authors: Pablo Chico De Guzman, Cesar Sanchez

Abstract:

Microservices are a software development methodology in which applications are built by composing a set of independently deployable, small, modular services. Each service runs as a unique process and is instantiated and deployed on one or more machines (we assume that different microservices are deployed onto different machines). Microservices are becoming the de facto standard for developing distributed cloud applications due to their reduced release cycles. In principle, the responsibility of a microservice can be as simple as implementing a single function, which can lead to the following issues: - Resource fragmentation due to the virtual machine boundary. - Poor communication performance between microservices. Two composition techniques can be used to optimize resource fragmentation and communication performance: aggregation and embedding of microservices. Aggregation allows the deployment of a set of microservices on the same machine using a proxy server. Aggregation helps to reduce resource fragmentation and is particularly useful when the aggregated services have a similar scalability behavior. Embedding deals with communication performance by deploying on the same virtual machine those microservices that require a communication channel (localhost bandwidth is reported to be about 40 times faster than cloud vendors' local networks, and it offers better reliability). Embedding can also reduce dependencies on load balancer services, since the communication takes place on a single virtual machine. For example, assume that microservice A has two instances, a1 and a2, and it communicates with microservice B, which also has two instances, b1 and b2. One embedding can deploy a1 and b1 on machine m1, and a2 and b2 on a different machine m2. This deployment configuration allows each pair (a1-b1), (a2-b2) to communicate using the localhost interface without the need of a load balancer between microservices A and B.
Aggregation and embedding techniques are complex, since different microservices might have incompatible runtime dependencies that prevent them from being installed on the same machine. There is also a security concern, since the attack surface between microservices can be larger. Luckily, container technology allows several processes to run on the same machine in an isolated manner, solving the incompatibility of runtime dependencies and the above security concern, and thus greatly simplifying aggregation/embedding implementations: a microservice container is simply deployed on the same machine as the aggregated/embedded microservice container. Therefore, a wide variety of deployment configurations can be described by combining aggregation and embedding to create an efficient and robust microservice architecture. This paper presents a formal method that receives a declarative definition of a microservice architecture and proposes different optimized deployment configurations by aggregating/embedding microservices. The first prototype is based on i2kit, a deployment tool also submitted to ICWS 2018. The proposed prototype optimizes the following parameters: network/system performance, resource usage, resource costs, and failure tolerance.
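The embedding example from the text (pairing a1 with b1 on m1, and a2 with b2 on m2) can be sketched as a simple placement function. The function and naming scheme are illustrative only and not part of i2kit or the formal method described in the paper.

```python
def embed(service_a, service_b, replicas):
    """Pair instances of two communicating services one-to-one, so each pair
    shares a machine and can talk over the localhost interface."""
    placement = {}
    for i in range(1, replicas + 1):
        machine = f"m{i}"
        placement[machine] = [f"{service_a}{i}", f"{service_b}{i}"]
    return placement

placement = embed("a", "b", 2)
# {'m1': ['a1', 'b1'], 'm2': ['a2', 'b2']} -- each pair communicates over
# localhost, with no load balancer needed between services A and B.
```

In a container-based implementation, each machine entry would correspond to co-located containers rather than processes installed side by side, which is what sidesteps the runtime-dependency conflicts discussed above.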

Keywords: aggregation, deployment, embedding, resource allocation

Procedia PDF Downloads 179
31 Healthy Architecture Applied to Inclusive Design for People with Cognitive Disabilities

Authors: Santiago Quesada-García, María Lozano-Gómez, Pablo Valero-Flores

Abstract:

The recent digital revolution, together with modern technologies, is changing the environment and the way people interact with inhabited space. However, the elderly are a very broad and varied group in society that presents serious difficulties in understanding these modern technologies. Outpatients with cognitive disabilities, such as those suffering from Alzheimer's disease (AD), are distinguished within this cluster. This population group is in constant growth, and they have specific requirements for their inhabited space. Architecture, as one of the health humanities, designs environments that promote well-being and improve the quality of life for all. Buildings, as well as the tools and technologies integrated into them, must be accessible, inclusive, and foster health. In this new digital paradigm, artificial intelligence (AI) appears as an innovative resource to help this population group improve their autonomy and quality of life. Some experiences and solutions, such as those that interact with users through chatbots and voicebots, show the potential of AI in its practical application. In the design of healthy spaces, the integration of AI in architecture will allow the living environment to become a kind of 'exo-brain' that can make up for certain cognitive deficiencies in this population. The objective of this paper is to address, from the discipline of neuroarchitecture, how modern technologies can be integrated into everyday environments and be an accessible resource for people with cognitive disabilities. For this, the methodology has a mixed structure. On the one hand, from an empirical point of view, the research carries out a review of the existing literature on the applications of AI to built space, following critical review foundations.
As unconventional architectural research, an experimental analysis is proposed that draws on people with AD as a data source to study how the environment in which they live influences their regular activities. The results presented in this communication are part of the progress achieved in the competitive R&D&I project ALZARQ (PID2020-115790RB-I00). These outcomes are aimed at the specific needs of people with cognitive disabilities, especially those with AD; given the comfort and wellness the solutions entail, they can also be extrapolated to the whole of society. As a provisional conclusion, it can be stated that, in the immediate future, AI will be an essential element in the design and construction of healthy new environments. The discipline of architecture has the compositional resources to, through this emerging technology, build an 'exo-brain' capable of becoming a personal assistant for the inhabitants, with whom to interact proactively and contribute to their general well-being. The main objective of this work is to show how this is possible.

Keywords: Alzheimer’s disease, artificial intelligence, healthy architecture, neuroarchitecture, architectural design

Procedia PDF Downloads 40
30 Envy and Schadenfreude Domains in a Model of Neurodegeneration

Authors: Hernando Santamaría-García, Sandra Báez, Pablo Reyes, José Santamaría-García, Diana Matallana, Adolfo García, Agustín Ibañez

Abstract:

The study of moral emotions (i.e., Schadenfreude and envy) is critical to understand the ecological complexity of everyday interactions between cognitive, affective, and social cognition processes. Most previous studies in this area have used correlational imaging techniques and framed Schadenfreude and envy as monolithic domains. Here, we profit from a relevant neurodegeneration model to disentangle the brain regions engaged in three dimensions of Schadenfreude and envy: deservingness, morality, and legality. We tested 20 patients with behavioral variant frontotemporal dementia (bvFTD), 24 patients with Alzheimer’s disease (AD), as a contrastive neurodegeneration model, and 20 healthy controls on a novel task highlighting each of these dimensions in scenarios eliciting Schadenfreude and envy. Compared with the AD and control groups, bvFTD patients obtained significantly higher scores on all dimensions for both emotions. Interestingly, the legal dimension for both envy and Schadenfreude elicited higher emotional scores than the deservingness and moral dimensions. Furthermore, correlational analyses in bvFTD showed that higher envy and Schadenfreude scores were associated with greater deficits in social cognition, inhibitory control, and behavior. Brain anatomy findings (restricted to bvFTD and controls) confirmed differences in how these groups process each dimension. Schadenfreude was associated with the ventral striatum in all subjects. Also, in bvFTD patients, increased Schadenfreude across dimensions was negatively correlated with regions supporting social-value rewards, mentalizing, and social cognition (frontal pole, temporal pole, angular gyrus and precuneus). In all subjects, all dimensions of envy positively correlated with the volume of the anterior cingulate cortex, a region involved in processing unfair social comparisons. 
By contrast, in bvFTD patients, the intensified experience of envy across all dimensions was negatively correlated with a set of areas subserving social cognition, including the prefrontal cortex, the parahippocampus, and the amygdala. Together, the present results provide the first lesion-based evidence for the multidimensional nature of the emotional experiences of envy and Schadenfreude. Moreover, this is the first demonstration of a selective exacerbation of envy and Schadenfreude in bvFTD patients, probably triggered by atrophy to social cognition networks. Our results offer new insights into the mechanisms subserving complex emotions and moral cognition in neurodegeneration, paving the way for groundbreaking research on their interaction with other cognitive, social, and emotional processes.

Keywords: social cognition, moral emotions, neuroimaging, frontotemporal dementia

Procedia PDF Downloads 252
29 Recommendations to Improve Classification of Grade Crossings in Urban Areas of Mexico

Authors: Javier Alfonso Bonilla-Chávez, Angélica Lozano

Abstract:

In North America, more than 2,000 people die annually in accidents related to railroad tracks. In 2020, collisions at grade crossings were the main cause of deaths related to railway accidents in Mexico. Railway networks have constant interaction with motor transport users, cyclists, and pedestrians, mainly at grade crossings, where vulnerability and the risk of accidents are greatest. Usually, accidents at grade crossings are directly related to risky behavior and non-compliance with regulations by motorists, cyclists, and pedestrians, especially in developing countries. Around the world, countries classify these crossings in different ways. In Mexico, types A, B and C have been established according to their dangerousness (high, medium, or low), recommending for each a different type of audible and visual signaling and gates, as well as horizontal and vertical signage. This classification is based on a weighting but, regrettably, it is not explained how the weight values were obtained. A review of the variables and the current approach to grade crossing classification is required, since it is inadequate for some crossings. In contrast, North America (USA and Canada) and European countries use a broader classification, so that attention to each crossing is addressed more precisely and equipment costs are adjusted. Lack of a proper classification could lead to cost overruns in equipment and deficient operation. To exemplify the lack of a good classification, six crossings are studied: three located in rural Mexico and three in Mexico City. These cases show the need to improve the current regulations, improve the existing infrastructure, and implement technological systems, including informative signs bearing the nomenclature of the crossing involved and a direct telephone line for reporting emergencies. This implementation is unaffordable for most municipal governments.
Also, an inventory of the most dangerous grade crossings in urban and rural areas must be obtained. Then, an approach for improving the classification of grade crossings is suggested. This approach must be based on design criteria; the characteristics of adjacent roads or intersections that can influence traffic flow through the crossing; accidents involving motorized and non-motorized vehicles; land use and land management; the type of area; and the services and economic activities in the zone where the grade crossing is located. An expanded classification of grade crossings in Mexico could reduce accidents and improve the efficiency of the railroad.
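A weighting-based classification of the kind the Mexican regulation uses can be sketched as a weighted hazard index. Since the official weight values are not published (as the text notes), every weight, feature name, and threshold below is a hypothetical placeholder, not the regulatory formula.

```python
# Hypothetical weights over normalized risk features (each scaled to [0, 1]).
WEIGHTS = {"trains_per_day": 0.4, "vehicles_per_day": 0.3,
           "pedestrian_flow": 0.2, "accident_history": 0.1}

def classify_crossing(features, high=0.66, medium=0.33):
    """Weighted sum of normalized features -> dangerousness type A, B or C."""
    score = sum(WEIGHTS[k] * features[k] for k in WEIGHTS)
    if score >= high:
        return "A"  # high dangerousness: gates plus audible and visual signaling
    if score >= medium:
        return "B"  # medium dangerousness
    return "C"      # low dangerousness

example = {"trains_per_day": 0.9, "vehicles_per_day": 0.8,
           "pedestrian_flow": 0.5, "accident_history": 1.0}
label = classify_crossing(example)  # score 0.80 -> "A"
```

Making the weights explicit in this way is precisely what would allow the criticism in the text (unexplained weight values) to be addressed and the classification to be audited per crossing.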

Keywords: accidents, grade crossing, railroad, traffic safety

Procedia PDF Downloads 88
28 A Theragnostic Approach for Alzheimer’s Disease Focused on Phosphorylated Tau

Authors: Tomás Sobrino, Lara García-Varela, Marta Aramburu-Núñez, Mónica Castro, Noemí Gómez-Lado, Mariña Rodríguez-Arrizabalaga, Antía Custodia, Juan Manuel Pías-Peleteiro, José Manuel Aldrey, Daniel Romaus-Sanjurjo, Ángeles Almeida, Pablo Aguiar, Alberto Ouro

Abstract:

Introduction: Alzheimer’s disease (AD) and other tauopathies are primary causes of dementia, causing progressive cognitive deterioration that entails serious repercussions for the patients' performance of daily tasks. Currently, there is no effective approach for the early diagnosis and treatment of AD and tauopathies. This study suggests a theragnostic approach based on the importance of phosphorylated tau protein (p-Tau) in the early pathophysiological processes of AD. We have developed a novel theragnostic monoclonal antibody (mAb) to provide both diagnostic and therapeutic effects. Methods/Results: We have developed a p-Tau mAb, which was conjugated with deferoxamine for radiolabeling with Zirconium-89 (89Zr) for PET imaging, as well as with fluorescent dyes for immunofluorescence assays. The p-Tau mAb was evaluated in vitro for toxicity by MTT assay, LDH activity, propidium iodide/Annexin V assay, caspase-3, and mitochondrial membrane potential (MMP) assays in both a mouse endothelial cell line (bEnd.3) and cortical primary neuron cultures. Importantly, no toxic effects were detected (up to p-Tau mAb concentrations greater than 100 µg/mL). In vivo experiments in tauopathy model mice (PS19) show that the 89Zr-pTau-mAb and 89Zr-Fragments-pTau-mAb are stable in circulation for up to 10 days without toxic effects. However, less than 0.2% reached the brain, so further strategies have to be designed for crossing the blood-brain barrier (BBB). Moreover, an intraparenchymal treatment strategy was carried out. The PS19 mice were operated on to implant osmotic pumps (Alzet 1004) at two different times, at 4 and 7 months, to provide controlled release, for one month each, of the B6 antibody or the IgG1 control antibody. We demonstrated that B6-treated mice maintained their motor and memory abilities significantly better than those under IgG1 treatment. In addition, we observed a significant reduction in p-Tau deposits in the brain.
Conclusions/Discussion: A theragnostic p-Tau mAb was developed. Moreover, we demonstrated that our p-Tau mAb recognizes very early pathological forms of p-Tau by non-invasive techniques, such as PET. In addition, the p-Tau mAb has no toxic effects, either in vitro or in vivo. Although the p-Tau mAb is stable in circulation, only 0.2% reaches the brain. However, direct intraventricular treatment significantly reduces cognitive impairment in Alzheimer's animal models, as well as the accumulation of toxic p-Tau species.

Keywords: Alzheimer's disease, theragnosis, tau, PET, immunotherapy, tauopathies

Procedia PDF Downloads 45
27 CertifHy: Developing a European Framework for the Generation of Guarantees of Origin for Green Hydrogen

Authors: Frederic Barth, Wouter Vanhoudt, Marc Londo, Jaap C. Jansen, Karine Veum, Javier Castro, Klaus Nürnberger, Matthias Altmann

Abstract:

Hydrogen is expected to play a key role in the transition towards a low-carbon economy, especially within the transport sector, the energy sector and the (petro)chemical industry sector. However, the production and use of hydrogen only make sense if the production and transportation are carried out with minimal impact on natural resources, and if greenhouse gas emissions are reduced in comparison to conventional hydrogen or conventional fuels. The CertifHy project, supported by a wide range of key European industry leaders (gas companies, chemical industry, energy utilities, green hydrogen technology developers and automobile manufacturers, as well as other leading industrial players) therefore aims to: 1. Define a widely acceptable definition of green hydrogen. 2. Determine how a robust Guarantee of Origin (GoO) scheme for green hydrogen should be designed and implemented throughout the EU. It is divided into the following work packages (WPs). 1. Generic market outlook for green hydrogen: Evidence of existing industrial markets and the potential development of new energy related markets for green hydrogen in the EU, overview of the segments and their future trends, drivers and market outlook (WP1). 2. Definition of “green” hydrogen: step-by-step consultation approach leading to a consensus on the definition of green hydrogen within the EU (WP2). 3. Review of existing platforms and interactions between existing GoO and green hydrogen: Lessons learnt and mapping of interactions (WP3). 4. Definition of a framework of guarantees of origin for “green” hydrogen: Technical specifications, rules and obligations for the GoO, impact analysis (WP4). 5. Roadmap for the implementation of an EU-wide GoO scheme for green hydrogen: the project implementation plan will be presented to the FCH JU and the European Commission as the key outcome of the project and shared with stakeholders before finalisation (WP5 and 6). 
Definition of Green Hydrogen: CertifHy Green hydrogen is hydrogen from renewable sources that is also CertifHy Low-GHG-emissions hydrogen. Hydrogen from renewable sources is hydrogen belonging to the share of production equal to the share of renewable energy sources (as defined in the EU RES directive) in energy consumption for hydrogen production, excluding ancillary functions. CertifHy Low-GHG hydrogen is hydrogen with emissions lower than the defined CertifHy Low-GHG-emissions threshold, i.e. 36.4 gCO2eq/MJ, produced in a plant where the average emissions intensity of the non-CertifHy Low-GHG hydrogen production (based on an LCA approach), since sign-up or in the past 12 months, does not exceed the emissions intensity of the benchmark process (SMR of natural gas), i.e. 91.0 gCO2eq/MJ.
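The two-threshold rule above can be sketched in code. This is an illustrative reading of the scheme, not the CertifHy specification itself: the function name and inputs are assumptions, and the renewable-share handling is simplified to a boolean check rather than the share-of-production accounting the definition describes.

```python
# Hypothetical sketch of the CertifHy classification logic described above.
# Thresholds are taken from the abstract; names and inputs are assumptions.

LOW_GHG_THRESHOLD = 36.4   # gCO2eq/MJ, CertifHy Low-GHG-emissions threshold
BENCHMARK_SMR = 91.0       # gCO2eq/MJ, benchmark process (SMR of natural gas)

def classify_hydrogen(batch_emissions, plant_avg_non_low_ghg, renewable_share):
    """Classify a hydrogen batch under the CertifHy scheme (simplified)."""
    # Low-GHG: batch below threshold, and the plant's non-Low-GHG production
    # does not exceed the SMR benchmark intensity.
    low_ghg = (batch_emissions < LOW_GHG_THRESHOLD
               and plant_avg_non_low_ghg <= BENCHMARK_SMR)
    if low_ghg and renewable_share > 0:
        return "CertifHy Green hydrogen"
    if low_ghg:
        return "CertifHy Low-GHG hydrogen"
    return "conventional hydrogen"
```

In the actual scheme, the renewable share determines what fraction of a plant's Low-GHG output can be certified as green, rather than acting as a simple yes/no flag as here.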

Keywords: green hydrogen, cross-cutting, guarantee of origin, certificate, DG energy, bankability

26 Preliminary Design, Production and Characterization of a Coral and Alginate Composite for Bone Engineering

Authors: Sthephanie A. Colmenares, Fabio A. Rojas, Pablo A. Arbeláez, Johann F. Osma, Diana Narvaez

Abstract:

The loss of functional tissue is a ubiquitous and expensive health care problem, with very limited treatment options for these patients. The gold standard for large bone damage is cadaveric bone as an allograft with stainless steel support; however, this solution only applies to bones with simple morphologies (long bones), has a limited material supply and presents long-term problems regarding mechanical strength, integration, differentiation and induction of native bone tissue. Therefore, the fabrication of a scaffold with biological, physical and chemical properties similar to human bone, together with a fabrication method that allows morphology manipulation, is the focus of this investigation. Towards this goal, an alginate and coral matrix was created using two production techniques; the coral was chosen because of its chemical composition and the alginate due to its compatibility and mechanical properties. To construct the coral-alginate scaffold, the following methodology was employed: cleaning of the coral, its pulverization, scaffold fabrication and, finally, mechanical and biological characterization. The experimental design had mill method and proportion of alginate and coral as the two factors, with two and three levels respectively, using 5 replicates. The coral was cleaned with sodium hypochlorite and hydrogen peroxide in an ultrasonic bath. Then, it was milled with both a horizontal and a ball mill in order to evaluate the morphology of the particles obtained. After this, using a combination of alginate and coral powder with water as a binder, scaffolds of 1 cm³ were printed with a Spectrum™ Z510 3D printer. This resulted in solid cubes that were resistant to small compression stress. Then, using an ESQUIM DP-143 silicone mold, constructs used for the mechanical and biological assays were made. 
An INSTRON 2267® was used for the compression tests; the density and porosity were calculated with an analytical balance, and the biological tests were performed using cell cultures with VERO fibroblasts, with Scanning Electron Microscopy (SEM) as the visualization tool. The Young’s moduli were dependent on the pulverization method, the proportion of coral and alginate, and the interaction between these factors. The maximum value was 5.4 MPa for the 50/50 proportion of alginate and horizontally milled coral. The biological assay showed more extracellular matrix in the scaffolds consisting of more alginate and less coral. The density and porosity were proportional to the amount of coral in the powder mix. These results showed that this composite has potential as a biomaterial, but its behavior is elastic with a small Young’s modulus, which leads to the conclusion that the application may not be for long bones but for tissues similar to cartilage.
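The Young's modulus values reported above come from compression tests: the modulus is the slope of the stress-strain curve in its linear region. A minimal sketch of that calculation, with illustrative specimen dimensions and load data (not the study's actual measurements):

```python
# Minimal sketch of estimating Young's modulus from compression-test data,
# as produced by the INSTRON tests described above. Specimen dimensions and
# the sample data are illustrative assumptions.

def youngs_modulus(forces_n, displacements_m, area_m2, length_m):
    """Least-squares slope of stress vs. strain over the given points (Pa)."""
    stresses = [f / area_m2 for f in forces_n]
    strains = [d / length_m for d in displacements_m]
    n = len(stresses)
    mean_e = sum(strains) / n
    mean_s = sum(stresses) / n
    num = sum((e - mean_e) * (s - mean_s) for e, s in zip(strains, stresses))
    den = sum((e - mean_e) ** 2 for e in strains)
    return num / den

# 1 cm³ cube: 1e-4 m² cross-section, 0.01 m tall; forces in newtons
E = youngs_modulus([0.0, 10.0, 20.0], [0.0, 1e-5, 2e-5], 1e-4, 0.01)
```

In practice only the initial linear portion of the curve would be fed to the fit, since scaffolds like these depart from linearity well before failure.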

Keywords: alginate, biomaterial, bone engineering, coral, Porites astreoides, SEM

25 Quality by Design in the Optimization of a Fast HPLC Method for Quantification of Hydroxychloroquine Sulfate

Authors: Pedro J. Rolim-Neto, Leslie R. M. Ferraz, Fabiana L. A. Santos, Pablo A. Ferreira, Ricardo T. L. Maia-Jr., Magaly A. M. Lyra, Danilo A F. Fonte, Salvana P. M. Costa, Amanda C. Q. M. Vieira, Larissa A. Rolim

Abstract:

Initially developed as an antimalarial agent, hydroxychloroquine (HCQ) sulfate is often used as a slow-acting antirheumatic drug in the treatment of disorders of connective tissue. The United States Pharmacopeia (USP) 37 provides a reversed-phase HPLC method for quantification of HCQ. However, this method was not reproducible, producing asymmetric peaks in a long analysis time. The asymmetry of the peak may cause an incorrect calculation of the concentration of the sample. Furthermore, the analysis time is unacceptable, especially regarding the routine of a pharmaceutical industry. The aim of this study was to develop a fast, easy and efficient method for quantification of HCQ sulfate by High Performance Liquid Chromatography (HPLC) based on the Quality by Design (QbD) methodology. This method was optimized in terms of peak symmetry using the surface area graphic as the Design of Experiments (DoE) and the tailing factor (TF) as the indicator for the Design Space (DS). The reference method used was that described in USP 37 for the quantification of the drug. For the optimized method, a 3³ factorial design was proposed, based on the QbD concepts. The DS was created with the TF in a range between 0.98 and 1.2 in order to identify the ideal analytical conditions. Changes were made in the composition of the USP mobile phase (USP-MP): USP-MP:methanol (90:10 v/v, 80:20 v/v and 70:30 v/v), in the flow rate (0.8, 1.0 and 1.2 mL·min⁻¹) and in the oven temperature (30, 35 and 40 ºC). The USP method only allowed the quantification of the drug in a long run time (40-50 minutes). In addition, the method uses a high flow rate (1.5 mL·min⁻¹), which increases the consumption of expensive HPLC-grade solvents. The main problem observed was the TF value (1.8), which would be acceptable only if the drug were not a racemic mixture, since co-elution of the isomers under a tailing peak can make peak integration unreliable. 
Therefore, optimization was proposed in order to reduce the analysis time while improving peak resolution and TF. For the optimized method, analysis of the surface-response plot made it possible to confirm the ideal analytical conditions: 45 ºC, 0.8 mL·min⁻¹ and 80:20 USP-MP:methanol. The optimized HPLC method enabled the quantification of HCQ sulfate with a high-resolution peak, showing a TF value of 1.17. This promotes clean co-elution of the HCQ isomers, ensuring an accurate quantification of the raw material as a racemic mixture. The method also proved to be approximately 18 times faster than the reference method, using a lower flow rate and thereby further reducing solvent consumption and, consequently, the analysis cost. Thus, an analytical method for the quantification of HCQ sulfate was optimized using the QbD methodology. This method proved to be faster and more efficient than the USP method regarding the retention time and, especially, the peak resolution. The higher resolution of the chromatogram peaks supports the implementation of the method for quantification of the drug as a racemic mixture, not requiring the separation of isomers.
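The experimental structure above, three factors at three levels with a tailing-factor acceptance window, can be sketched as follows. The TF values themselves would come from the actual HPLC runs; only the factor levels and the 0.98-1.2 window are taken from the abstract, and the code is an illustration of the design enumeration, not the authors' software.

```python
# Illustrative sketch of the 3^3 factorial design described above: three
# factors (mobile-phase ratio, flow rate, oven temperature) at three levels
# each, and the design-space filter on the tailing factor (TF).

from itertools import product

ratios = ["90:10", "80:20", "70:30"]   # USP-MP : methanol (v/v)
flows = [0.8, 1.0, 1.2]                # mL/min
temps = [30, 35, 40]                   # oven temperature, degrees C

design = list(product(ratios, flows, temps))   # all 27 experimental runs

def in_design_space(tailing_factor):
    """Design-space criterion on the tailing factor (0.98 <= TF <= 1.2)."""
    return 0.98 <= tailing_factor <= 1.2
```

After running the 27 conditions, the subset whose measured TF passes `in_design_space` would define the region from which the final operating point is chosen.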

Keywords: analytical method, hydroxychloroquine sulfate, quality by design, surface area graphic

24 Dendroremediation of a Defunct Lead Acid Battery Recycling Site

Authors: Alejandro Ruiz-Olivares, M. del Carmen González-Chávez, Rogelio Carrillo-González, Martha Reyes-Ramos, Javier Suárez Espinosa

Abstract:

The use of automobiles has increased and, proportionally, so has the demand for batteries to power them. When a battery reaches the end of its life, its materials are recovered through lead acid battery recycling (LABR). Importation of used lead acid batteries into Mexico has increased in recent years, since many recycling factories have settled in the country. Inadequate disposal of LABR wastes left soil severely polluted with Pb, Cu, and salts (Na⁺, SO₄²⁻, PO₄³⁻). Soil organic amendments may contribute essential nutrients and sequester metals (acting as scavenger compounds) to allow plant establishment. The objective of this research was to revegetate a former lead-acid battery recycling site aided with organic amendments. Seven tree species (Acacia farnesiana, Casuarina equisetifolia, Cupressus lusitanica, Eucalyptus obliqua, Fraxinus excelsior, Prosopis laevigata and Pinus greggii) and two organic amendments (vermicompost and a vermicompost + sawdust mixture) were tested for phytoremediation of a defunct LABR site. Plants were irrigated during the dry season. The soils were monitored during the experiment: available metals, salt concentrations and their spatial pattern in the soil were analyzed. Plant species and amendments were compared through analysis of covariance and longitudinal analysis. High concentrations of extractable (DTPA-TEA-CaCl₂) metals (up to 15,685 mg kg⁻¹ for Pb and 478 mg kg⁻¹ for Cu) and soluble salts (292 mg kg⁻¹ for PO₄³⁻ and 23,578 mg kg⁻¹ for SO₄²⁻) were found in the soil three and six months after setting up the experiment. Lead and Cu concentrations were depleted in the rhizosphere after amendment addition. The spatial patterns of PO₄³⁻, SO₄²⁻ and DTPA-extractable Pb and Cu changed slightly through time. In spite of the extreme soil conditions, the planted species A. farnesiana, E. obliqua, C. equisetifolia and F. excelsior had 100% survival. Available metals and salts affected each species differently. 
In addition, a negative effect on growth due to Pb accumulated in shoots was observed only in C. lusitanica. Many specimens accumulated high concentrations of Pb (>1,000 mg kg⁻¹) in their shoots. C. equisetifolia and C. lusitanica had the best growth rates. Based on the results, all the evaluated species may be useful for revegetation of Pb-polluted soils. Besides their use in phytoremediation, some ecosystem services can be obtained from the woodland, such as wildlife habitat, wood production, and carbon sequestration. Further research should be conducted to analyze these services.

Keywords: heavy metals, inadequate disposal, organic amendments, phytoremediation with trees

23 Investigating the Key Success Factors of Supplier Collaboration Governance in the Aerospace Industry

Authors: Maria Jose Granero Paris, Ana Isabel Jimenez Zarco, Agustin Pablo Alvarez Herranz

Abstract:

In the industrial sector, collaboration with suppliers is key to the development of process innovations. Access to resources and expertise that are not available within the business, obtaining a cost advantage, or reducing the time needed to carry out innovation are some of the benefits associated with the process. However, the success of this collaborative process is compromised when clear rules governing the relationship have not been established from the beginning. Abundant studies in the field of innovation emphasize the strategic importance of the concept of “governance”. Despite this, few papers have analyzed how the governance process of the relationship must be designed and managed to ensure the success of the collaboration. The lack of literature in this area reflects the wide diversity of contexts in which collaborative processes to innovate take place. Thus, in sectors such as the car industry there is a strong collaborative tradition between manufacturers and the suppliers that form part of the value chain. In this case, it is common to establish mechanisms and procedures that fix formal and clear objectives to regulate the relationship and establish the rights and obligations of each of the parties involved. By contrast, in other sectors, collaborative relationships to innovate are not a common way of working, particularly when their aim is the development of process improvements. In these cases, the lack of mechanisms to establish and regulate the behavior of those involved can give rise to conflicts and the failure of the cooperative relationship. 
Because of this, the present paper analyzes the similarities and differences in the processes of governance of supplier collaboration in the European aerospace industry. With these ideas in mind, our research is twofold: to understand the importance of governance as a key element of the success of collaboration in the development of product and process innovations, and to establish the mechanisms and procedures that ensure the proper management of the collaboration processes. Following the case study methodology, we analyze the way in which manufacturers and suppliers cooperate in the development of new products and processes in two industries with different levels of technological intensity and collaborative tradition: automotive and aerospace. The identification of the elements that play a key role in establishing successful governance and relationship management, together with the comprehension of the mechanisms of regulation and control in place in the automotive sector, can be used to propose solutions to some of the conflicts that currently arise in the aerospace industry. The paper concludes by analyzing the strategic implications that the adoption of some of the practices traditionally used in other industrial sectors entails for the aerospace industry. Finally, it is important to highlight that this paper presents the first results of a research project currently in progress, which describes a model of governance explaining how to manage services outsourced to suppliers in the European aerospace industry, through the analysis of companies in the sector located in Germany, France and Spain.

Keywords: supplier collaboration, supplier relationship governance, innovation management, product innovation, process innovation

22 Investigating the Governance of Engineering Services in the Aerospace and Automotive Industries

Authors: Maria Jose Granero Paris, Ana Isabel Jimenez Zarco, Agustin Pablo Alvarez Herranz

Abstract:

In the industrial sector, collaboration with suppliers is key to the development of process innovations. Access to resources and expertise that are not available within the business, obtaining a cost advantage, or reducing the time needed to carry out innovation are some of the benefits associated with the process. However, the success of this collaborative process is compromised when clear rules governing the relationship have not been established from the beginning. Abundant studies in the field of innovation emphasize the strategic importance of the concept of “governance”. Despite this, few papers have analyzed how the governance process of the relationship must be designed and managed to ensure the success of the cooperation process. The lack of literature in this area reflects the wide diversity of contexts in which collaborative processes to innovate take place. Thus, in sectors such as the car industry there is a strong collaborative tradition between manufacturers and the suppliers that form part of the value chain. In this case, it is common to establish mechanisms and procedures that fix formal and clear objectives to regulate the relationship and establish the rights and obligations of each of the parties involved. By contrast, in other sectors, collaborative relationships to innovate are not a common way of working, particularly when their aim is the development of process improvements. In these cases, the lack of mechanisms to establish and regulate the behavior of those involved can give rise to conflicts and the failure of the cooperative relationship. Because of this, the present paper analyzes the similarities and differences in the processes of governance of collaboration with engineering R&D service providers in the European aerospace industry. 
With these ideas in mind, our research is twofold: to understand the importance of governance as a key element of the success of cooperation in the development of process innovations, and to establish the mechanisms and procedures that ensure the proper management of the cooperation processes. Following the case study methodology, we analyze the way in which manufacturers and suppliers cooperate in the development of new processes in two industries with different levels of technological intensity and collaborative tradition: automotive and aerospace. The identification of the elements that play a key role in establishing successful governance and relationship management, together with the comprehension of the mechanisms of regulation and control in place in the automotive sector, can be used to propose solutions to some of the conflicts that currently arise in the aerospace industry. The paper concludes by analyzing the strategic implications that the adoption of some of the practices traditionally used in other industrial sectors entails for the aerospace industry. Finally, it is important to highlight that this paper presents the first results of a research project currently in progress, which describes a model of governance explaining how to manage engineering services outsourced to suppliers in the European aerospace industry, through the analysis of companies in the sector located in Germany, France and Spain.

Keywords: innovation management, innovation governance, managing collaborative innovation, process innovation

21 Improvement of Electric Aircraft Endurance through an Optimal Propeller Design Using Combined BEM, Vortex and CFD Methods

Authors: Jose Daniel Hoyos Giraldo, Jesus Hernan Jimenez Giraldo, Juan Pablo Alvarado Perilla

Abstract:

Range and endurance are the main limitations of electric aircraft due to the nature of their power source. Improving the efficiency of such systems is extremely meaningful to encourage aircraft operation with less environmental impact. The propeller efficiency strongly affects the overall efficiency of the propulsion system; hence its optimization can have an outstanding effect on aircraft performance. An optimization method is applied to an aircraft propeller in order to maximize range and endurance by estimating the best combination of geometrical parameters, such as diameter, airfoil, and chord and pitch distributions, for a specific aircraft design at a certain cruise speed; then the rotational speed at which the propeller operates at minimum current consumption is estimated. The optimization is based on the Blade Element Momentum (BEM) method, corrected to account for tip and hub losses, Mach number and rotational effects. Furthermore, an approximation of the airfoil lift and drag coefficients is implemented from Computational Fluid Dynamics (CFD) simulations, supported by preliminary studies of grid independence and the suitability of different turbulence models, to feed the BEM method with the aim of achieving more reliable results. Additionally, Vortex Theory is employed to find the optimum pitch and chord distributions for a minimum-induced-loss propeller design. Moreover, the optimization takes into account the well-known brushless motor model, thrust constraints for take-off runway limitations, the maximum allowable propeller diameter due to aircraft height, and the maximum motor power. The BEM-CFD method is validated by comparing its predictions for a known APC propeller with both available experimental tests and the APC reported performance curves, which are based on Vortex Theory fed with the NASA Transonic Airfoil code; the method shows an adequate fit with the experimental data, even closer than the reported APC data. 
Optimal propeller predictions are validated by wind tunnel tests, CFD propeller simulations and a study of how the propeller would perform if it replaced the one of a known aircraft. Trend charts relating a wide range of parameters, such as diameter, voltage, pitch, rotational speed, current, and propeller and electric efficiencies, are obtained and discussed. The implementation of CFD tools shows an improvement in the accuracy of the BEM predictions. Results also showed that a propeller has higher efficiency peaks when it operates at high rotational speed, due to the higher Reynolds numbers at which the airfoils present lower drag. On the other hand, the behavior of the current consumption related to the propulsive efficiency shows counterintuitive results: the best range and endurance are not necessarily achieved at an efficiency peak.
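The inner loop of the BEM method described above can be sketched for a single blade annulus. This is a generic textbook iteration, not the authors' implementation: it uses a flat-plate lift slope and constant drag as placeholder aerodynamics where the paper feeds CFD-derived polars, and it omits the tip/hub-loss, Mach and rotational corrections the paper adds.

```python
import math

# Minimal blade-element-momentum (BEM) iteration for one propeller annulus.
# Placeholder aerodynamics: Cl = 2*pi*alpha, constant Cd. Angles in radians.

def bem_element(v_inf, omega, r, chord, beta, n_blades, cd=0.01, tol=1e-8):
    """Return converged induction factors (a, b) for a single annulus."""
    a, b = 0.0, 0.0                                # axial, angular induction
    sigma = n_blades * chord / (2 * math.pi * r)   # local solidity
    for _ in range(500):
        # Inflow angle from the local velocity triangle
        phi = math.atan2(v_inf * (1 + a), omega * r * (1 - b))
        alpha = beta - phi                         # local angle of attack
        cl = 2 * math.pi * alpha                   # placeholder lift polar
        # Project lift/drag onto normal (thrust) and tangential directions
        cn = cl * math.cos(phi) - cd * math.sin(phi)
        ct = cl * math.sin(phi) + cd * math.cos(phi)
        # Momentum-balance updates for the induction factors
        a_new = sigma * cn / (4 * math.sin(phi) ** 2 - sigma * cn)
        b_new = sigma * ct / (4 * math.sin(phi) * math.cos(phi) + sigma * ct)
        if abs(a_new - a) < tol and abs(b_new - b) < tol:
            return a_new, b_new
        a, b = 0.5 * (a + a_new), 0.5 * (b + b_new)  # relaxed update
    return a, b
```

Thrust and torque per annulus then follow from the converged factors, and summing over radial stations and matching the result against the brushless motor model gives the current-consumption picture the abstract discusses.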

Keywords: BEM, blade design, CFD, electric aircraft, endurance, optimization, range
