Search results for: Jorge Gonzalez Camus
55 Influence of CO₂ on the Curing of Permeable Concrete
Authors: A. M. Merino-Lechuga, A. González-Caro, D. Suescum-Morales, E. Fernández-Ledesma, J. R. Jiménez, J. M. Fernández-Rodriguez
Abstract:
Since the mid-19th century, economic and industrial activity has grown exponentially. This has led to increasing pollution from rising greenhouse gas (GHG) emissions and accumulating waste, pointing to an ever more imminent scarcity of raw materials and natural resources. Carbon dioxide (CO₂) is one of the primary greenhouse gases, accounting for up to 55% of GHG emissions. The manufacturing of construction materials generates approximately 73% of CO₂ emissions, with Portland cement production contributing 41% of this figure. Hence, there is scientific and social alarm regarding the carbon footprint of construction materials and their influence on climate change. Carbonation of concrete is a natural process whereby CO₂ from the environment penetrates the material, primarily through pores and microcracks. Once inside, carbon dioxide reacts with calcium hydroxide (Ca(OH)₂) and/or C-S-H, yielding calcium carbonates (CaCO₃) and silica gel. Consequently, construction materials act as carbon sinks. This research investigated the effect of accelerated carbonation on the physical, mechanical, and chemical properties of two types of non-structural vibrated concrete pavers (conventional and draining) made from natural aggregates and two types of recycled aggregates from construction and demolition waste (CDW). Natural aggregates were replaced by recycled aggregates using a volumetric substitution method, and the CO₂ capture capacity was calculated. Two curing environments were used: a carbonation chamber with 5% CO₂ and a standard climatic chamber with atmospheric CO₂ concentration. Additionally, the effect of curing times of 1, 3, 7, 14, and 28 days on concrete properties was analyzed. Accelerated carbonation increased the apparent dry density, reduced water-accessible porosity, improved compressive strength, and reduced the curing time needed to achieve greater mechanical strength.
The maximum CO₂ capture ratio was achieved with the use of recycled concrete aggregate (52.52 kg/t) in the draining paver. Accelerated carbonation conditions led to a 525% increase in carbon capture compared to curing under atmospheric conditions. Accelerated carbonation of cement-based products containing recycled aggregates from construction and demolition waste is a promising technology for CO₂ capture and utilization, offering a means to mitigate the effects of climate change and promote the new paradigm of circular economy.
Keywords: accelerated carbonation, CO₂ curing, CO₂ uptake, construction and demolition waste, circular economy
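The reported uptake figures can be cross-checked with a quick back-of-the-envelope calculation. The sketch below uses the 52.52 kg/t and 525% values from the abstract; the implied baseline uptake under atmospheric curing is an inference from those two numbers, not a figure reported by the authors.

```python
# Illustrative check of the reported CO2 uptake figures (abstract values only;
# the atmospheric-curing baseline is inferred, not reported by the authors).

accelerated_uptake = 52.52   # kg CO2 captured per tonne (5% CO2 chamber, draining paver)
increase_pct = 525           # reported increase vs. atmospheric curing

# A 525% increase means the accelerated value is (1 + 525/100) = 6.25x the baseline.
atmospheric_uptake = accelerated_uptake / (1 + increase_pct / 100)

print(f"Implied uptake under atmospheric curing: {atmospheric_uptake:.2f} kg/t")
```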
Procedia PDF Downloads 65
54 CO2 Capture in Porous Silica Assisted by Lithium
Authors: Lucero Gonzalez, Salvador Alfaro
Abstract:
Carbon dioxide (CO2) and methane (CH4) are the most abundant of the greenhouse gases (CO2, NOx, SOx, CxHx, etc.); because of their high concentrations, these two gases have the greatest impact on environmental pollution and drive global warming. Their recovery, disposal, and subsequent reuse are therefore of great interest, especially from an ecological and health perspective. On the one hand, porous inorganic materials are good candidates for capturing gases because they offer high thermal, chemical, and mechanical stability under gas adsorption conditions. On the other hand, during the design and synthesis of porous materials it is possible to add other intrinsic (physicochemical and structural) properties by incorporating chemical compounds as dopants or by using structure-directing agents or surfactants to improve the porous structure; these features yield alternative materials for the separation, capture, and storage of greenhouse gases. In this work, ordered mesoporous silica-based materials were prepared using Surfynol as the surfactant. Surfactant micelles are commonly used as self-assembly templates for developing new porous silica structures, providing a variety of textures and structures. Surfynol is a commercial non-ionic surfactant, so its critical micelle concentration (CMC) had to be determined by the pyrene I1/I3 ratio method before preparing the silica particles. Once the CMC was known, a precursor gel was prepared via a sol-gel process at room temperature using TEOS as the silica precursor, NH4OH as the catalyst, Surfynol as the template, and H2O as the solvent. The gel precursor was then treated hydrothermally in a 100 mL Teflon-lined stainless steel autoclave kept at 100 ºC for 24 h under static conditions in a convection oven.
After that, the porous silica particles obtained were impregnated with lithium to improve their CO2 adsorption capacity. The silica particles were then characterized physicochemically, morphologically, and structurally by XRD, FTIR, BET, and SEM techniques. Thermal stability and CO2 adsorption capacity were evaluated by thermogravimetric analysis (TGA). According to the results, Surfynol is a good candidate for preparing silica particles with an ordered structure. The TGA analysis also showed that the particles have good thermal stability between 250 °C and 800 °C. The best materials were able to adsorb between 70 and 90 mg of CO2 per gram of silica particles, and their CO2 adsorption capacity depends on the thermal pretreatment applied to the porous silica before the adsorption experiments and on the surfactant concentration used during the synthesis of the silica particles. Acknowledgments: This work was supported by SIP-IPN through project SIP-20161862.
Keywords: CO2 adsorption, lithium as dopant, porous silica, surfynol as surfactant, thermogravimetric analysis
Procedia PDF Downloads 269
53 The Genus Bacillus, Effect on Commercial Crops of Colombia
Authors: L. C. Sánchez, L. C. Corrales, A. G. Lancheros, E. Castañeda, Y. Ariza, L. S. Fuentes, L. Sierra, J. L. Cuervo
Abstract:
The importance of environmentally friendly alternatives in agricultural processes is the reason why the research group Ceparium, of the Colegio Mayor de Cundinamarca University, Colombia, investigated the genus Bacillus and its applicability for improving crops of economic importance in Colombia. In this investigation, we present a study in which the genus Bacillus plays a leading role as a beneficial microorganism. The objective was to identify the biochemical potential of three indigenous species of Bacillus able to exert biological control against pathogens and pests or to promote growth and improve the productivity of crops in Colombia. The procedures were performed in three phases. First, biomass of an indigenous strain and a reference strain was produced from culture media for the production of spores and toxins. Spores were counted in a Neubauer chamber, concentrations of Bacillus sphaericus spores were prepared, and a bioassay was carried out at the Laboratory of Entomology of the University Jorge Tadeo Lozano on larvae of Plutella xylostella, an insect pest of crucifers in several Colombian regions. The second phase included the extraction, from a liquid-state fermentation of Bacillus subtilis strains, of a secondary metabolite with antibiosis action against fungi, called iturin B. The molecule was identified using High-Performance Liquid Chromatography (HPLC), and its biocontrol effect on the fungus Fusarium sp., which causes vascular wilt in economically important plant varieties, was confirmed using antagonism tests in Petri dishes. In the third phase, an initial procedure was used to recover and identify microorganisms of the genus Bacillus from the rhizosphere of two aromatic herbs, Rosmarinus officinalis and Thymus vulgaris L.
Subsequently, antagonism tests against Fusarium sp. were performed, and an assay was conducted under greenhouse conditions to observe biocontrol and growth-promoting action by comparing growth in length and dry weight. In the first experiment, native Bacillus sphaericus was lethal to 92% of Plutella xylostella larvae within 10 days after application. In the second experiment, iturin B was identified and biological control of Fusarium sp. was demonstrated. In the third study, all strains demonstrated biological control, and the B14 strain, identified as Bacillus megaterium, increased root length and the productivity of the two plants in terms of weight. It was concluded that native microorganisms of the genus Bacillus have great biochemical potential that provides beneficial interactions with plants, improving their growth and development and therefore having a greater impact on production.
Keywords: genus Bacillus, biological control, PGPRs, biochemical potential
Procedia PDF Downloads 436
52 A Bayesian Approach for Health Workforce Planning in Portugal
Authors: Diana F. Lopes, Jorge Simoes, José Martins, Eduardo Castro
Abstract:
Health professionals are the keystone of any health system, delivering health services to the population. Given the time and cost involved in training new health professionals, the planning of the health workforce is particularly important: it ensures a proper balance between the supply and demand of these professionals and plays a central role in the Health 2020 policy. In the past 40 years, health workforce planning in Portugal has been conducted in a reactive way, lacking a prospective vision based on an integrated, comprehensive, and valid analysis. This situation may compromise not only productivity and overall socio-economic development but also the quality of the healthcare services delivered to patients. This is even more critical given the expected shortage of the health workforce in the future. Furthermore, Portugal is facing the aging of some professional classes (physicians and nurses). In 2015, 54% of physicians in Portugal were over 50 years old, and 30% were over 60 years old. This phenomenon, together with the increasing emigration of young health professionals and changes in citizens' illness profiles and expectations, must be considered when planning healthcare resources. The prospect of sudden retirement of large groups of professionals within a short time is also a major problem to address. Another challenge is health workforce imbalance: Portugal has one of the lowest nurse-to-physician ratios, 1.5, below the European Region and OECD averages (2.2 and 2.8, respectively).
Within the scope of the HEALTH 2040 project, which aims to estimate the 'Future needs of human health resources in Portugal till 2040', the present study takes a comprehensive, dynamic approach to the problem by (i) estimating the needs for physicians and nurses in Portugal, by specialty and by five-year period, until 2040; (ii) identifying the training needs of physicians and nurses in the medium and long term, until 2040; and (iii) estimating the number of students that must be admitted into medicine and nursing training systems each year, considering the different categories of specialties. Such an approach is significantly more critical in a context of limited budget resources and changing healthcare needs. The study presents the drivers of the evolution of healthcare needs (such as demographic and technological evolution and the future expectations of health system users) and proposes a Bayesian methodology, combining the best available data with expert opinion, to model this evolution. Preliminary results for different plausible scenarios are presented. The proposed methodology will be integrated into a user-friendly decision support system so that it can be used by policymakers, with the potential to measure the impact of health policies at both the regional and national levels.
Keywords: Bayesian estimation, health economics, health workforce planning, human health resources planning
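The core idea of combining expert opinion with observed data can be sketched with a conjugate Bayesian update. The minimal example below is an illustrative assumption on our part: a Normal-Normal model with invented numbers for annual physician needs, not the actual model or data of the HEALTH 2040 project.

```python
# Minimal sketch of combining an expert prior with observed data via a
# conjugate Normal-Normal update. All numbers are hypothetical; this is not
# the HEALTH 2040 project's actual model.
import math

def posterior_normal(prior_mean, prior_sd, data, data_sd):
    """Conjugate Normal-Normal update with known observation noise."""
    n = len(data)
    prior_prec = 1.0 / prior_sd ** 2      # precision = 1 / variance
    data_prec = n / data_sd ** 2
    post_var = 1.0 / (prior_prec + data_prec)
    post_mean = post_var * (prior_prec * prior_mean + data_prec * (sum(data) / n))
    return post_mean, math.sqrt(post_var)

# Expert panel guess: about 1200 new physicians needed per year, fairly uncertain.
# Hypothetical observed yearly needs over four years:
mean, sd = posterior_normal(1200, 200, [1350, 1280, 1410, 1330], 150)
print(f"Posterior need: {mean:.0f} +/- {sd:.0f} physicians/year")
```

The posterior lands between the prior mean and the data mean, weighted by their precisions, which is exactly the "best available data plus expert opinion" balance the abstract describes.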
Procedia PDF Downloads 252
51 Prospective Service Evaluation of Physical Healthcare in Adult Community Mental Health Services in a UK-Based Mental Health Trust
Authors: Gracie Tredget, Raymond McGrath, Karen Ang, Julie Williams, Nick Sevdalis, Fiona Gaughran, Jorge Aria de la Torre, Ioannis Bakolis, Andy Healey, Zarnie Khadjesari, Euan Sadler, Natalia Stepan
Abstract:
Background: Preventable physical health problems have been found to increase morbidity rates amongst adults living with serious mental illness (SMI). Community mental health clinicians have a role in identifying physical health problems, preventing them from worsening, and supporting primary care services to administer routine physical health checks for their patients. However, little is known about how mental health staff perceive and approach their role when providing physical healthcare to patients with SMI, or about the impact these attitudes have on routine practice. Methods: The present study is a prospective service evaluation specific to Adult Community Mental Health Services at South London and Maudsley NHS Foundation Trust (SLaM). A qualitative methodology will use semi-structured interviews, focus groups, and observations to explore the attitudes, perceptions, and experiences of staff, patients, and carers (n=64) towards physical healthcare, and the barriers or facilitators that impact upon it. Analysis: Data from the qualitative tasks will be synthesised using Framework Analysis methodologies. Staff, patients, and carers will be invited to participate in the co-development of recommendations that can improve routine physical healthcare within Adult Community Mental Health Teams at SLaM.
Results: Data collection is underway at present. At the time of the conference, early findings will be available to discuss. Conclusions: An integrated approach to mind and body care is needed to reduce preventable deaths amongst people with SMI. This evaluation will seek to provide a framework that better equips staff to approach physical healthcare within a mental health setting.
Keywords: severe mental illness, physical healthcare, adult community mental health, nursing
Procedia PDF Downloads 97
50 The Connection between De Minimis Rule and the Effect on Trade
Authors: Pedro Mario Gonzalez Jimenez
Abstract:
The novelties introduced by the latest Notice on agreements of minor importance tighten the application of the 'de minimis' safe harbour in the European Union, while at the same time the undetermined legal concept of effect on trade between the Member States gains importance. Therefore, the analysis a jurist must currently carry out in the European Union to determine whether an agreement appreciably restricts competition under Article 101 of the Treaty on the Functioning of the European Union is twofold: it is necessary to know how to balance significance for competition and significance for the effect on trade between the Member States. This is a crucial issue because the negative delimitation of the restriction of competition affects the positive one. The methodology of this research is straightforward. Beginning with a historical approach to the de minimis rule, its main problems and uncertainties are identified. Then, after analysis of normative documents and the case law of the Court of Justice of the European Union, some 'lege ferenda' proposals are offered. These proposals try to overcome the contradictions and questions that currently exist in the European Union as a consequence of the current legal regime for agreements of minor importance. The main findings of this research are the following. Firstly, the effect on trade is another way to analyze the importance of an agreement, different from the de minimis rule. In fact, this concept is singularly suited to agreements that have as their object the prevention, restriction, or distortion of competition, as observed in the best-known European Union case law. Thanks to the effect-on-trade criterion, as long as the proper requirements are met, there is no restriction of competition under Article 101 of the Treaty on the Functioning of the European Union, even if the agreement has an anti-competitive object.
These requirements are an aggregate market share lower than 5% on any of the relevant markets affected by the agreement and a turnover lower than 40 million euros. Secondly, as the Notice itself says, 'it is also intended to give guidance to the courts and competition authorities of the Member States in their application of Article 101 of the Treaty, but it has no binding force for them'. This makes possible the existence of different positions among the Member States and a confusing perception of what a restriction of competition is; ultimately, damage to trade between the Member States may be observed for this reason. The main conclusion is that the significant effect on trade between Member States is irrelevant for agreements that restrict competition by their effects but crucial for agreements that restrict competition by their object. Thus, the Member States should incorporate a similar concept into their legal orders in order to apply the content of the Notice. Otherwise, the significance of a restrictive agreement for competition cannot be properly assessed.
Keywords: de minimis rule, effect on trade, minor importance agreements, safe harbour
Procedia PDF Downloads 183
49 Skull Extraction for Quantification of Brain Volume in Magnetic Resonance Imaging of Multiple Sclerosis Patients
Authors: Marcela De Oliveira, Marina P. Da Silva, Fernando C. G. Da Rocha, Jorge M. Santos, Jaime S. Cardoso, Paulo N. Lisboa-Filho
Abstract:
Multiple Sclerosis (MS) is an immune-mediated disease of the central nervous system characterized by neurodegeneration, inflammation, demyelination, and axonal loss. Magnetic resonance imaging (MRI), due to the richness of the information it provides, is the gold-standard exam for the diagnosis and follow-up of neurodegenerative diseases such as MS. Brain atrophy, the gradual loss of brain volume, is quite extensive in multiple sclerosis, at nearly 0.5-1.35% per year, far beyond the limits of normal aging. Thus, brain volume quantification becomes an essential task for subsequent analysis of the occurrence of atrophy. The analysis of MRI has become a tedious and complex task for clinicians, who have to manually extract important information. This manual analysis is prone to errors and is time-consuming due to intra- and inter-operator variability. Nowadays, computerized methods for MRI segmentation are extensively used to assist doctors in quantitative analyses for disease diagnosis and monitoring. The purpose of this work was therefore to evaluate brain volume in MRI scans of MS patients. We used MRI scans with 30 slices from five patients diagnosed with multiple sclerosis according to the McDonald criteria. The computational analysis of the images was carried out in two steps: segmentation of the brain and quantification of brain volume. The first image-processing step was brain extraction by skull stripping from the original image. In the skull stripper for brain MRI images, the algorithm registers a grayscale atlas image to the grayscale patient image. The associated brain mask is propagated using the registration transformation. This mask is then eroded and used for a refined brain extraction based on level sets (the edge of the brain-skull border, with dedicated expansion, curvature, and advection terms).
In the second step, brain volume was quantified by counting the voxels belonging to the segmentation mask and converting the total to cubic centimeters (cc). We observed an average brain volume of 1469.5 cc. We conclude that the automatic method applied in this work can be used for brain extraction and brain volume quantification in MRI. The development and use of computer programs can help health professionals in the diagnosis and monitoring of patients with neurodegenerative diseases. In future work, we expect to implement more automated methods for the assessment of cerebral atrophy and the quantification of brain lesions, including machine-learning approaches. Acknowledgements: This work was supported by a grant from the Brazilian agency Fundação de Amparo à Pesquisa do Estado de São Paulo (number 2019/16362-5).
Keywords: brain volume, magnetic resonance imaging, multiple sclerosis, skull stripper
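The volume-quantification step (count mask voxels, convert to cc) can be sketched in a few lines. The toy mask and voxel dimensions below are invented for illustration; real pipelines read both from the image header.

```python
# Toy sketch of volume quantification from a binary segmentation mask:
# count brain-labelled voxels and convert to cubic centimeters using the
# voxel dimensions. Mask and voxel size are made-up illustrative values.

def brain_volume_cc(mask, voxel_dims_mm):
    """mask: nested lists (slices x rows x cols) of 0/1; voxel_dims_mm: (dx, dy, dz)."""
    n_voxels = sum(v for sl in mask for row in sl for v in row)
    voxel_mm3 = voxel_dims_mm[0] * voxel_dims_mm[1] * voxel_dims_mm[2]
    return n_voxels * voxel_mm3 / 1000.0   # 1 cc = 1000 mm^3

# Two 3x3 slices with 1 x 1 x 5 mm voxels:
mask = [[[0, 1, 1], [1, 1, 1], [0, 1, 0]],
        [[0, 0, 1], [1, 1, 0], [0, 0, 0]]]
print(brain_volume_cc(mask, (1.0, 1.0, 5.0)))  # 9 voxels * 5 mm^3 = 0.045 cc
```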
Procedia PDF Downloads 147
48 Automated Building Internal Layout Design Incorporating Post-Earthquake Evacuation Considerations
Authors: Sajjad Hassanpour, Vicente A. González, Yang Zou, Jiamou Liu
Abstract:
Earthquakes pose a significant threat to both structural and non-structural elements in buildings, putting human lives at risk. Effective post-earthquake evacuation is critical for ensuring the safety of building occupants. However, current design practices often neglect the integration of post-earthquake evacuation considerations into the early-stage architectural design process. To address this gap, this paper presents a novel automated internal architectural layout generation tool that optimizes post-earthquake evacuation performance. The tool takes an initial plain floor plan as input, along with specific requirements from the user/architect, such as minimum room dimensions, corridor width, and exit lengths. Based on these inputs, firstly, the tool randomly generates different architectural layouts. Secondly, the human post-earthquake evacuation behaviour will be thoroughly assessed for each generated layout using the advanced Agent-Based Building Earthquake Evacuation Simulation (AB2E2S) model. The AB2E2S prototype is a post-earthquake evacuation simulation tool that incorporates variables related to earthquake intensity, architectural layout, and human factors. It leverages a hierarchical agent-based simulation approach, incorporating reinforcement learning to mimic human behaviour during evacuation. The model evaluates different layout options and provides feedback on evacuation flow, time, and possible casualties due to earthquake non-structural damage. By integrating the AB2E2S model into the automated layout generation tool, architects and designers can obtain optimized architectural layouts that prioritize post-earthquake evacuation performance. Through the use of the tool, architects and designers can explore various design alternatives, considering different minimum room requirements, corridor widths, and exit lengths. This approach ensures that evacuation considerations are embedded in the early stages of the design process. 
In conclusion, this research presents an innovative automated internal architectural layout generation tool that integrates post-earthquake evacuation simulation. By incorporating evacuation considerations into the early-stage design process, architects and designers can optimize building layouts for improved post-earthquake evacuation performance. This tool empowers professionals to create resilient designs that prioritize the safety of building occupants in the face of seismic events.
Keywords: agent-based simulation, automation in design, architectural layout, post-earthquake evacuation behavior
Procedia PDF Downloads 105
47 Resilience-Vulnerability Interaction in the Context of Disasters and Complexity: Study Case in the Coastal Plain of Gulf of Mexico
Authors: Cesar Vazquez-Gonzalez, Sophie Avila-Foucat, Leonardo Ortiz-Lozano, Patricia Moreno-Casasola, Alejandro Granados-Barba
Abstract:
In the last twenty years, academic and scientific literature has focused on understanding the processes and factors behind the vulnerability and resilience of coastal social-ecological systems. Some scholars argue that resilience and vulnerability are isolated concepts due to their epistemological origins, while others note the existence of a strong resilience-vulnerability relationship. Here we present an ordinal logistic regression model based on an analytical framework for the dynamic resilience-vulnerability interaction along the adaptive cycle of complex systems and the phases of the disaster process (during, recovery, and learning). In this way, we demonstrate that: 1) during the disturbance, absorptive capacity (resilience as a core of attributes) and external response capacity explain the probability that damage to household capitals is reduced, while exposure sets the thresholds on the amount of disturbance that households can absorb; 2) at recovery, absorptive capacity and external response capacity explain the probability that household capitals recover faster (resilience as an outcome) from damage; and 3) at learning, adaptive capacity (resilience as a core of attributes) explains the probability that households adopt adaptation measures based on the enhancement of physical capital. As a result, during the disturbance phase, exposure has the greatest weight in the probability of damage to capitals, and households with absorptive and external response capacity elements absorbed the impact of floods better than households without these elements. In the recovery phase, households with absorptive and external response capacity showed a faster recovery of their capitals; however, the damage sets the thresholds of recovery time. More importantly, diversity in financial capital increases the probability of recovering other capitals, but it becomes a liability, increasing the probability that household finances take longer to recover.
In the learning-reorganizing phase, adaptation (modifications to the house) increases the probability of suffering less damage to physical capital, although its effect is small. In conclusion, resilience is both an outcome and a core of attributes that interacts with vulnerability along the adaptive cycle and the phases of the disaster process. Absorptive capacity can diminish the damage caused by floods; however, when exposure exceeds certain thresholds, both absorptive and external response capacity are insufficient. In the same way, absorptive and external response capacity reduce the recovery time of capitals, but the damage sets the thresholds beyond which households are not capable of recovering their capitals.
Keywords: absorptive capacity, adaptive capacity, capital, floods, recovery-learning, social-ecological systems
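The probability structure behind an ordinal logistic (proportional-odds) model of the kind named in this abstract can be sketched briefly. The covariates, cut-points, and coefficients below are invented for illustration; the study's actual variables and estimates are not reproduced here.

```python
# Sketch of the proportional-odds model form: P(Y <= k) = logistic(theta_k - x.b),
# with ordered outcome categories (e.g. low / medium / high damage).
# All numbers are hypothetical illustrations, not the study's estimates.
import math

def logistic(z):
    return 1.0 / (1.0 + math.exp(-z))

def ordinal_probs(x, beta, thresholds):
    """Return P(Y = k) for ordered categories given increasing cut-points."""
    eta = sum(xi * bi for xi, bi in zip(x, beta))
    cum = [logistic(t - eta) for t in thresholds] + [1.0]
    return [cum[0]] + [cum[k] - cum[k - 1] for k in range(1, len(cum))]

# Hypothetical covariates (exposure, absorptive capacity); higher exposure
# pushes households toward higher damage, absorptive capacity pulls back.
probs = ordinal_probs(x=[1.5, 0.8], beta=[1.2, -0.9], thresholds=[0.5, 2.0])
print([round(p, 3) for p in probs])  # P(low), P(medium), P(high) damage
```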
Procedia PDF Downloads 134
46 Generative Behaviors and Psychological Well-Being in Mexican Elders
Authors: Ana L. Gonzalez-Celis, Edgardo Ruiz-Carrillo, Karina Reyes-Jarquin, Margarita Chavez-Becerra
Abstract:
In recent decades, aging has been viewed from a more positive perspective, as a stage not only of losses and decline but also one in which life can be enjoyed and lived with well-being and quality of life. The challenge for feeling better is to find the resources that seniors have. For that reason, the study of psychological well-being has addressed affect and life satisfaction (hedonic well-being), while a more recent tradition focuses on the development of capabilities and personal growth, considering both as the main indicators of quality of life. A resource that can be drawn upon in later life is generativity, which refers to the ability of older people to develop and grow through activities that contribute to the improvement of the context in which they live and participate. In this sense, generative interest is understood as a favourable attitude of contributing to the common benefit while strengthening and enriching social institutions, ensuring continuity between generations and social development. Generative behavior, as distinct from generative interest, is the expression of that attitude in activities that make a social contribution and benefit the generations to come. Hence, the purpose of the research was to test whether there is an association between the type of generative behavior and psychological well-being with its dimensions. To this end, 188 Mexican adults from 60 to 94 years old (M = 69.78), 67% women and 33% men, completed two instruments: the Ryff Well-Being Scales, measuring psychological well-being with 39 items in two dimensions (hedonic and eudaimonic well-being), and the Loyola Generative Behaviors Scale, grouped into the categories: knowledge transmitted to the next generation, things to be remembered, creativity, being productive, contribution to the community, and responsibility for other people.
In addition, a socio-demographic data sheet and self-reported health status were collected. The results indicated that psychological well-being and its dimensions were significantly associated with the presence of generative behavior, with higher levels of well-being when certain generative behaviors were more frequent. The behavior associated with the greatest psychological well-being (M = 81.04, SD = 8.18) was 'things to be remembered'; with the greatest hedonic well-being (M = 73.39, SD = 12.19), 'responsibility for other people'; and with the greatest eudaimonic well-being (M = 84.61, SD = 6.63), 'things to be remembered'. The most important findings highlight the importance of generative behaviors in adulthood, providing empirical evidence that generativity in the last stage of life is associated with well-being. However, given the differences in well-being across types of generative behaviors, we propose that generativity is not an isolated construct but requires other contextualized, related constructs that can operate simultaneously at different levels, taking into account the relationship between the environment and the individual and encompassing both the social and the psychological dimension.
Keywords: eudaimonic well-being, generativity, hedonic well-being, Mexican elders, psychological well-being
Procedia PDF Downloads 276
45 Improvement of the Traditional Techniques of Artistic Casting through the Development of Open Source 3D Printing Technologies Based on Digital Ultraviolet Light Processing
Authors: Drago Diaz Aleman, Jose Luis Saorin Perez, Cecile Meier, Itahisa Perez Conesa, Jorge De La Torre Cantero
Abstract:
Traditional manufacturing techniques used in artistic contexts compete with highly productive and efficient industrial procedures. Craft techniques and their associated business models tend to disappear under the pressure of mass-produced products that compete in all niche markets, including those traditionally reserved for the work of art. The surplus value derived from the prestige of the author, the exclusivity of the product, or the mastery of the artist does not seem to be reason enough to preserve this productive model. In recent years, the adoption of open-source digital manufacturing technologies in small art workshops can favor their permanence by offering great advantages such as easy accessibility, low cost, and free modification, adapting to the specific needs of each workshop. It is possible to use computer-modeled pieces made with FDM (Fused Deposition Modeling) 3D printers that use PLA (polylactic acid) in artistic casting procedures. However, models printed in PLA are limited to approximate minimum sizes of 3 cm, and the optimal layer-height resolution is 0.1 mm. Due to these limitations, this is not the most suitable technology for artistic casting of smaller pieces. One alternative that overcomes the size limitation is SLS (Selective Laser Sintering) printers; another is DMLS (Direct Metal Laser Sintering), in which a laser hardens metal powder layer by layer. However, due to their high cost, these are technologies that are difficult to introduce in small artistic foundries. Low-cost DLP (Digital Light Processing) printers can offer high resolution for a reasonable cost (around 0.02 mm on the Z axis and 0.04 mm on the X and Y axes) and can print models with castable resins that allow subsequent direct artistic casting in precious metals or adaptation to processes such as electroforming.
In this work, the design of a DLP 3D printer is detailed, using backlit LCD screens with ultraviolet light. Its development is fully open source, and it is proposed as a kit made up of Arduino-based electronic components and mechanical components that are easy to find on the market. Its components can be manufactured from the provided CAD files on low-cost FDM 3D printers. The result is a printer costing less than 500 euros, with high resolution and an openly available design that allows not only its manufacture but also its improvement. In future works, we intend to carry out different comparative analyses that will allow us to accurately estimate the print quality, as well as the real cost of the artistic works made with it.
Keywords: traditional artistic techniques, DLP 3D printer, artistic casting, electroforming
Procedia PDF Downloads 142
44 Effect of 8-OH-DPAT on the Behavioral Indicators of Stress and on the Number of Astrocytes after Exposure to Chronic Stress
Authors: Ivette Gonzalez-Rivera, Diana B. Paz-Trejo, Oscar Galicia-Castillo, David N. Velazquez-Martinez, Hugo Sanchez-Castillo
Abstract:
Prolonged exposure to stress can cause disorders related to dysfunction in the prefrontal cortex, such as generalized anxiety and depression. These disorders involve alterations in neurotransmitter systems; the serotonergic system, a target of the drugs commonly used to treat these disorders, is one of them. Recent studies suggest that 5-HT1A receptors play a pivotal role in the regulation of the serotonergic system and in stress responses. In the same way, there is increasing evidence that astrocytes are involved in the pathophysiology of stress. The aim of this study was to examine the effects of 8-OH-DPAT, a selective agonist of 5-HT1A receptors, on the behavioral signs of anxiety and anhedonia, as well as on the number of astrocytes in the medial prefrontal cortex (mPFC), after exposure to chronic stress. Fifty male Wistar rats weighing 250-350 g were used, housed in standard laboratory conditions and treated in accordance with the ethical standards for the use and care of laboratory animals. A protocol of chronic unpredictable stress was applied for 10 consecutive days, during which stressors such as movement restriction, water deprivation, and wet bedding, among others, were presented. Forty rats were subjected to the stress protocol and then divided into 4 groups of 10 rats each, which were administered 8-OH-DPAT (Tocris, USA) intraperitoneally, with saline as vehicle, at doses of 0.0, 0.3, 1.0, and 2.0 mg/kg, respectively. Another 10 rats were subjected to neither the stress protocol nor the drug. Subsequently, all rats were assessed in an open field test, a forced swimming test, a sucrose consumption test, and a zero maze test. At the end of this procedure, the animals were sacrificed, the brains were removed, and tissue from the mPFC (Bregma: 4.20, 3.70, 2.70, 2.20) was processed by immunofluorescence staining for astrocytes (anti-GFAP antibody, an astrocyte marker; Abcam). 
Statistically significant differences were found in the behavioral tests across all groups, showing that the stressed group with saline administration had more indicators of anxiety and anhedonia than the control group and the groups administered 8-OH-DPAT. A dose-dependent effect of 8-OH-DPAT was also found on the number of astrocytes in the mPFC. The results show that 8-OH-DPAT can modulate the effect of stress at both the behavioral and anatomical levels. They also indicate that 5-HT1A receptors and astrocytes play an important role in the stress response and may modulate the therapeutic effect of serotonergic drugs, so they should be explored as a fundamental part of the treatment of stress symptoms and of the understanding of the mechanisms of stress responses.
Keywords: anxiety, prefrontal cortex, serotonergic system, stress
Procedia PDF Downloads 326
43 Application of Multidimensional Model of Evaluating Organisational Performance in Moroccan Sport Clubs
Authors: Zineb Jibraili, Said Ouhadi, Jorge Arana
Abstract:
Introduction: Organizational performance is regarded by some theorists as a one-dimensional concept and by others as multidimensional. This concept, which is already difficult to apply in traditional companies, is even harder to identify, measure, and manage in voluntary organizations, essentially because of the complexity of that form of organization, such as sport clubs, which are characterized by multiple goals and multiple constituencies. Indeed, the new culture of professionalization and modernization around organizational performance creates new pressures from the state, sponsors, members, and other stakeholders, which have required these sport organizations to become more performance-oriented or to build their capacity in order to better manage their organizational performance. The evaluation of performance can be made by evaluating the input (e.g. available resources), throughput (e.g. processing of the input), and output (e.g. goals achieved) of the organization. In non-profit organizations (NPOs), questions of performance have become increasingly important in the world of practice. To our knowledge, most studies have used the same methods to evaluate performance in non-profit sport organizations (NPSOs), but no recent study has proposed a club-specific model. Based on a review of the studies that specifically addressed the organizational performance (and effectiveness) of NPSOs at the operational level, the present paper aims to provide a multidimensional framework for understanding, analysing, and measuring the organizational performance of sport clubs. This paper combines all the dimensions found in the literature and chooses those best suited to our model, which we develop for the case of Moroccan sport clubs. Method: We propose to apply our unified model of evaluating organizational performance, which takes into account all the limitations found in the literature. 
The study is based on a sample of Moroccan sport clubs (football, basketball, handball, and volleyball), approached through a qualitative study. The sample comprises data from clubs participating in the first division of the professional football league over the period from 2011 to 2016. Each club had to meet specific criteria in order to be included in the sample: 1. Each club must have full financial data published in its annual financial statements, audited by an independent chartered accountant. 2. Each club must have sufficient data regarding its sport and financial performance. 3. Each club must have participated at least once in the first division of the professional football league. Result: The study showed that the dimensions that constitute the model exist in the field, with some small modifications. The correlations between the different dimensions are positive. Discussion: The aim of this study is to test, for the Moroccan case, the unified model that emerged from earlier and narrower approaches. Using the input-throughput-output model as a sketch of efficiency, it was possible to identify and define five dimensions of organizational effectiveness applied to this field of study.
Keywords: organisational performance, multidimensional model, evaluation of organizational performance, sport clubs
Procedia PDF Downloads 325
42 Exploratory Tests on Structures Resistance during Forest Fires
Authors: Luis M. Ribeiro, Jorge Raposo, Ricardo Oliveira, David Caballero, Domingos X. Viegas
Abstract:
Under the scope of the European project WUIWATCH, a set of experimental tests on house vulnerability was performed in order to assess the resistance of selected house components during the passage of a forest fire. Among the individual elements most affected by the passage of a wildfire, windows are the ones with the greatest exposure. In this sense, a set of exploratory experimental tests was designed to assess particular aspects of the vulnerability of windows and blinds. At the same time, the importance of leaving them closed (as well as the doors inside a house) during a wildfire was explored in order to give scientific backing to guidelines for homeowners. Three sets of tests were performed: 1. Window and blind resistance to heat. Three types of protective blinds (aluminium, PVC, and wood) were tested on two types of windows (single and double pane), with the objective of assessing the structures' resistance. 2. The influence of air flow on the transport of burning embers inside a house. A room was built to scale and placed inside a wind tunnel, with one window and one door on opposite sides, with the objective of assessing how leaving an inside door open affects the probability of burning embers entering the room. 3. The influence of the size of openings in a window or door on the probability of ignition inside a house, with the objective of assessing how different window openings relate to the amount of burning particles that can enter a house. The main results were: 1. The purely radiative heat source produces a heat impact of 1.5 kW/m² on the structure, while the real fire generates 10 kW/m². When protected by the blind, the single-pane window reaches 30°C on both sides, while the double-pane window shows a 10°C differential between the side facing the heat (30°C) and the opposite side (40°C). The unprotected window increases in temperature constantly until the end of the test. 
Window blinds reach considerably higher temperatures; PVC loses its consistency above 150°C and melts. 2. Leaving the inside door closed results in a positive pressure differential of +1 Pa from the outside to the inside, inhibiting the air flow. Opening the door halfway or fully reverses the pressure differential to −6 and −8 times that value, respectively, favouring air flow from the outside to the inside. The number of particles entering the house follows the same tendency. 3. As the bottom opening of a window increases from 0.5 cm to 4 cm, the number of particles entering the house per second also increases greatly; from 5 cm up to 80 cm, there is no substantial further increase in the number of entering particles. This set of exploratory tests proved to be of added value in supporting guidelines for homeowners regarding self-protection in WUI areas.
Keywords: forest fire, wildland urban interface, house vulnerability, house protective elements
Procedia PDF Downloads 285
41 How Consumers Perceive Health and Nutritional Information and How It Affects Their Purchasing Behavior: Comparative Study between Colombia and the Dominican Republic
Authors: Daniel Herrera Gonzalez, Maria Luisa Montas
Abstract:
Several factors affect consumer decision-making regarding the use of front-of-package labels intended to benefit human well-being. Currently, several labels help influence or change the purchase decision for food products. These labels communicate the impact that food has on human health; therefore, consumers are more critical and better informed when buying and consuming food products. The research explores the association between front-of-pack labeling and food choice; the association between label content and purchasing decisions is complex and influenced by different factors, including the packaging itself. The main objective of this study was to examine the perception of health labels and nutritional declarations and their influence on buying decisions in the non-alcoholic beverages sector. This comparative study of two developing countries shows how consumers take nutritional labels into account when deciding to buy certain foods. The research applied a quantitative methodology with correlational scope, in order to analyze the degree of association between variables. Likewise, confirmatory factor analysis (CFA) and structural equation modeling (SEM), a powerful multivariate technique, were used as statistical methods to find the relationships between observable and unobservable variables. The main finding of this research was the identification of three large consumer groups and their perceptions of, and responses to, nutritional and wellness labels. The first group is characterized by a high interest in the imposition of nutritional information labels on products and would agree that all products should carry such labels, given their importance in preventing illnesses in the consumer. 
Likewise, this group almost always pays attention to the brand, the size, the list of ingredients, and the nutritional information of the food, as well as to their effects on health. The second group stands out for showing some interest in the importance of the label as a purchase decision factor, almost always taking into account characteristics such as size, price, and ingredients when deciding on consumption, while being almost never interested in the effect of these products on their health or nutrition. The third group differs from the others by being more neutral regarding nutritional information labels, less interested in the purchase decision and the characteristics of the product, and less interested in their influence on health and nutrition. This new knowledge is essential for companies that manufacture and market food products, because it will give them information to adapt to or anticipate the new laws of developing countries, as well as the new needs of health-conscious consumers buying food products.
Keywords: healthy labels, consumer behavior, nutritional information, healthy products
Procedia PDF Downloads 108
40 A Complex Network Approach to Structural Inequality of Educational Deprivation
Authors: Harvey Sanchez-Restrepo, Jorge Louca
Abstract:
Equity and education are a major focus of government policies around the world due to their relevance for addressing the Sustainable Development Goals launched by UNESCO. In this research, we developed a primary analysis of a data set of more than one hundred educational and non-educational factors associated with learning, coming from a census-based large-scale assessment carried out in Ecuador on 1,038,328 students, their families, teachers, and school directors throughout 2014-2018. Each participating student was assessed with a standardized computer-based test. Learning outcomes were calibrated through item response theory with the two-parameter logistic model to obtain raw scores, which were re-scaled and synthesized into a learning index (LI). Our objective was to develop a network model of educational deprivation and analyze the structure of inequality gaps, as well as their relationship with socioeconomic status, school financing, and students' ethnicity. Results from the model show that 348,270 students did not develop the minimum skills (prevalence rate = 0.215) and that Afro-Ecuadorian, Montuvio, and Indigenous students exhibited the highest prevalence, with 0.312, 0.278, and 0.226, respectively. Regarding the socioeconomic status (SES) of students, the modularity classes show clearly that the system is out of equilibrium: the first decile (the poorest) exhibits a prevalence rate of 0.386, while the rate for decile ten (the richest) is 0.080, showing an intense negative relationship between learning and SES (R = −0.58, p < 0.001). Another interesting and unexpected result is the average weighted degree (426.9) for both private and public schools attended by Afro-Ecuadorian students, the groups with the highest PageRank (0.426), pointing out that they suffer the highest educational deprivation due to discrimination, even when belonging to the richest decile. 
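The two-parameter logistic calibration mentioned above follows the standard item-response-theory form; a minimal sketch (with hypothetical item parameters, not the values from the Ecuadorian assessment) could look like this:

```python
import math

def p_correct(theta, a, b):
    """2PL IRT model: probability that a student with ability theta
    answers an item with discrimination a and difficulty b correctly."""
    return 1.0 / (1.0 + math.exp(-a * (theta - b)))

# Hypothetical item: discrimination 1.2, difficulty 0.5 (logit scale)
p_low = p_correct(-1.0, 1.2, 0.5)   # low-ability student
p_high = p_correct(1.5, 1.2, 0.5)   # high-ability student
```

Raw scores from many such items are then re-scaled and aggregated into the learning index described in the abstract.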
The model also identified the factors that explain deprivation, through the highest PageRank and the greatest degree of connectivity for the first decile: financial bonus for attending school, computer access, internet access, number of children, living with at least one parent, access to books, reading books, phone access, time for homework, teachers arriving late, paid work, positive expectations about schooling, and mother's education. These results provide very accurate and clear knowledge about the variables affecting the poorest students and the inequalities they produce, from which needs profiles can be defined, as well as actions on the factors that can be influenced. Finally, these results confirm that network analysis is fundamental for educational policy, especially when linking reliable microdata with social macro-parameters, because it allows us to infer how gaps in educational achievement are driven by students' context at the time resources are assigned.
Keywords: complex network, educational deprivation, evidence-based policy, large-scale assessments, policy informatics
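The PageRank scores used above to rank deprivation factors can be computed with a simple power iteration; the sketch below uses a toy graph with hypothetical node names, not the actual census network:

```python
def pagerank(adjacency, damping=0.85, iterations=100):
    """Basic PageRank by power iteration over an adjacency dict
    mapping each node to the list of nodes it links to."""
    nodes = list(adjacency)
    rank = {n: 1.0 / len(nodes) for n in nodes}
    for _ in range(iterations):
        new = {n: (1.0 - damping) / len(nodes) for n in nodes}
        for n, targets in adjacency.items():
            if targets:
                share = damping * rank[n] / len(targets)
                for t in targets:
                    new[t] += share
            else:  # dangling node: spread its rank uniformly
                for t in nodes:
                    new[t] += damping * rank[n] / len(nodes)
        rank = new
    return rank

# Toy network: deprivation factors pointing at the most affected group
toy = {
    "computer_access": ["decile_1"],
    "internet_access": ["decile_1"],
    "mother_education": ["decile_1"],
    "decile_1": [],
}
scores = pagerank(toy)
```

In the study itself this computation runs over the full factor-student network, so highly connected factors surface as the most influential ones.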
Procedia PDF Downloads 125
39 Assessment of DNA Sequence Encoding Techniques for Machine Learning Algorithms Using a Universal Bacterial Marker
Authors: Diego Santibañez Oyarce, Fernanda Bravo Cornejo, Camilo Cerda Sarabia, Belén Díaz Díaz, Esteban Gómez Terán, Hugo Osses Prado, Raúl Caulier-Cisterna, Jorge Vergara-Quezada, Ana Moya-Beltrán
Abstract:
The advent of high-throughput sequencing technologies has revolutionized genomics, generating vast amounts of genetic data that challenge traditional bioinformatics methods. Machine learning addresses these challenges by leveraging computational power to identify patterns and extract information from large datasets. However, biological sequence data, being symbolic and non-numeric, must be converted into numerical formats for machine learning algorithms to process effectively. So far, some encoding methods, such as one-hot encoding or k-mers, have been explored. This work proposes additional approaches for encoding DNA sequences in order to compare them with existing techniques and determine whether they can provide improvements or whether current methods offer superior results. Data from the 16S rRNA gene, a universal marker, was used to analyze eight bacterial groups that are significant in the pulmonary environment and have clinical implications. The bacterial genera included in this analysis are Prevotella, Abiotrophia, Acidovorax, Streptococcus, Neisseria, Veillonella, Mycobacterium, and Megasphaera. These data were downloaded from the NCBI database in GenBank file format, followed by a syntactic analysis to selectively extract relevant information from each file. For data encoding, a sequence normalization process was carried out as the first step. From approximately 22,000 initial data points, a subset was generated for testing purposes: 55 sequences from each bacterial group met the length criteria, resulting in an initial sample of approximately 440 sequences. The sequences were encoded using different methods, including one-hot encoding, k-mers, the Fourier transform, and the wavelet transform. Various machine learning algorithms, such as support vector machines, random forests, and neural networks, were trained to evaluate these encoding methods. 
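To make the encodings named above concrete, a minimal sketch of one-hot, k-mer, and Fourier-magnitude representations (assuming upper-case A/C/G/T sequences; function names are ours, not the authors'):

```python
import numpy as np

BASES = {"A": 0, "C": 1, "G": 2, "T": 3}

def one_hot(seq):
    """Encode a DNA string as a (len(seq), 4) binary indicator matrix."""
    mat = np.zeros((len(seq), 4), dtype=int)
    for i, base in enumerate(seq):
        mat[i, BASES[base]] = 1
    return mat

def kmer_counts(seq, k=3):
    """Overlapping k-mer frequencies: a fixed-vocabulary numeric representation."""
    counts = {}
    for i in range(len(seq) - k + 1):
        kmer = seq[i:i + k]
        counts[kmer] = counts.get(kmer, 0) + 1
    return counts

def fourier_magnitudes(seq):
    """Voss-style spectral encoding: FFT magnitude of each base indicator signal."""
    return np.abs(np.fft.fft(one_hot(seq), axis=0))
```

Each function maps a symbolic sequence to a numeric array or vector that classifiers such as SVMs or random forests can consume.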
The performance of these models was assessed using multiple metrics, including the confusion matrix, ROC curve, and F1 score, providing a comprehensive evaluation of their classification capabilities. The results show that accuracy varies between encoding methods by up to approximately 15%, with the Fourier transform obtaining the best results for the evaluated machine learning algorithms. These findings, supported by the detailed analysis using the confusion matrix, ROC curve, and F1 score, provide valuable insights into the effectiveness of different encoding methods and machine learning algorithms for genomic data analysis, potentially improving the accuracy and efficiency of bacterial classification and related genomic studies.
Keywords: DNA encoding, machine learning, Fourier transform
Procedia PDF Downloads 28
38 Working Towards More Sustainable Food Waste: A Circularity Perspective
Authors: Rocío González-Sánchez, Sara Alonso-Muñoz
Abstract:
Food waste implies inefficient management of the final stages of the food supply chain. Among the United Nations Sustainable Development Goals (SDGs), target 12.3 proposes to halve per capita food waste at the retail and consumer levels and to reduce food losses. In the linear system, food waste is disposed of and, to a lesser extent, recovered or reused after consumption. With its negative effect on stocks, the current food consumption system, based on 'produce, take, and dispose', puts huge pressure on raw materials and energy resources. Therefore, a greater focus on the circular management of food waste will mitigate the environmental, economic, and social impact, following a Triple Bottom Line (TBL) approach, and consequently support fulfilment of the SDGs. A mixed methodology is used. A total sample of 311 publications was retrieved from the Web of Science database. Firstly, a bibliometric analysis is performed with the SciMAT and VOSviewer software to visualise scientific maps based on co-occurrence analysis of keywords and co-citation analysis of journals. This allows the knowledge structure of the field to be understood and research issues to be detected. Secondly, a systematic literature review is conducted on the most influential articles of 2020 and 2021, the most representative period under study. Thirdly, to support the development of this field, an agenda is proposed based on the research gaps identified in circular economy and food waste management. Results reveal that the main topics are related to waste valorisation, the application of the waste-to-energy circular model, and the anaerobic digestion process for fossil fuel replacement. The use of food waste as a source of clean energy is receiving growing attention in the literature. There is a lack of studies on stakeholders' awareness and training. 
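The keyword co-occurrence analysis underlying such science maps reduces, at its core, to counting keyword pairs across publication records; a minimal sketch with illustrative records (not the actual 311-publication sample):

```python
from collections import Counter
from itertools import combinations

# Toy keyword lists per publication (illustrative records only)
records = [
    ["food waste", "circular economy", "valorisation"],
    ["food waste", "anaerobic digestion", "waste-to-energy"],
    ["circular economy", "food waste", "waste-to-energy"],
]

# Count each unordered keyword pair once per record
cooccurrence = Counter()
for keywords in records:
    for pair in combinations(sorted(set(keywords)), 2):
        cooccurrence[pair] += 1
```

Tools such as SciMAT and VOSviewer build their co-occurrence maps from matrices of exactly this kind, then cluster and plot them.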
In addition, more available data would facilitate the implementation of circular principles for food waste recovery, management, and valorisation. The research agenda suggests that circularity networks with suppliers and customers need to be deepened. Technological tools for the implementation of sustainable business models and greater emphasis on social aspects through educational campaigns are also required. This paper contributes to the application of circularity to food waste management by abandoning inefficient linear models, shedding light on trending topics in the field and guiding scholars toward future research opportunities.
Keywords: bibliometric analysis, circular economy, food waste management, future research lines
Procedia PDF Downloads 113
37 New Gas Geothermometers for the Prediction of Subsurface Geothermal Temperatures: An Optimized Application of Artificial Neural Networks and Geochemometric Analysis
Authors: Edgar Santoyo, Daniel Perez-Zarate, Agustin Acevedo, Lorena Diaz-Gonzalez, Mirna Guevara
Abstract:
Four new gas geothermometers have been derived from a multivariate geochemometric analysis of a geothermal fluid chemistry database; two use the natural logarithm of CO₂ and H₂S concentrations (mmol/mol), respectively, and the other two use the natural logarithm of the H₂S/H₂ and CO₂/H₂ ratios. As a strict compilation criterion, the database was created with the gas-phase composition of fluids and bottomhole temperatures (BHTM) measured in producing wells. The calibration of the geothermometers was based on the geochemical relationship existing between the gas-phase composition of well discharges and the equilibrium temperatures measured at bottomhole conditions. Multivariate statistical analysis together with artificial neural networks (ANNs) was successfully applied to correlate the gas-phase compositions and the BHTM. The predicted or simulated bottomhole temperatures (BHTANN), defined as output neurons or simulation targets, were statistically compared with the measured temperatures (BHTM). The coefficients of the new geothermometers were obtained from an optimized self-adjusting training algorithm applied to approximately 2,080 ANN architectures with 15,000 simulation iterations each. The self-adjusting training algorithm used the well-known Levenberg-Marquardt model, which was used to calculate: (i) the number of neurons in the hidden layer; (ii) the training factor and the training patterns of the ANN; (iii) the linear correlation coefficient, R; (iv) the synaptic weighting coefficients; and (v) the statistical parameter Root Mean Squared Error (RMSE), used to evaluate the prediction performance between the BHTM and the simulated BHTANN. The prediction performance of the new gas geothermometers, together with the predictions inferred from sixteen well-known previously developed gas geothermometers, was statistically evaluated using an external database in order to avoid a bias problem. 
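As a deliberately simplified stand-in for the ANN calibration described above, the sketch below fits a linear gas geothermometer of the form T = a + b·ln([CO₂]) to synthetic data and evaluates it with the same RMSE and R statistics (all values are illustrative, not drawn from the real well database):

```python
import numpy as np

# Synthetic example: "measured" bottomhole temperatures generated from a
# known relation plus noise (illustrative values, not real well data)
rng = np.random.default_rng(0)
ln_co2 = rng.uniform(1.0, 5.0, 50)
bht_m = 150.0 + 30.0 * ln_co2 + rng.normal(0.0, 2.0, 50)

# Linear least-squares calibration of the geothermometer coefficients
b, a = np.polyfit(ln_co2, bht_m, 1)
bht_pred = a + b * ln_co2

# The same evaluation statistics used in the study
rmse = np.sqrt(np.mean((bht_m - bht_pred) ** 2))
r = np.corrcoef(bht_m, bht_pred)[0, 1]
```

The actual work replaces this single linear term with Levenberg-Marquardt-trained neural networks searched over thousands of architectures, but the calibrate-then-score loop (fit coefficients, compute RMSE and R against BHTM) is the same.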
Statistical evaluation was performed through the analysis of the lowest RMSE values computed among the predictions of all the gas geothermometers. The new gas geothermometers developed in this work have been successfully used for predicting subsurface temperatures in high-temperature geothermal systems of Mexico (e.g., Los Azufres, Mich., Los Humeros, Pue., and Cerro Prieto, B.C.) as well as in a blind geothermal system (Acoculco, Puebla). The latest results of the gas geothermometers (inferred from gas-phase compositions of soil-gas bubble emissions) compare well with the temperatures measured in two wells of the blind geothermal system of Acoculco, Puebla (Mexico). Details of this new development are outlined in the present research work. Acknowledgements: The authors acknowledge the funding received from the CeMIE-Geo P09 project (SENER-CONACyT).
Keywords: artificial intelligence, gas geochemistry, geochemometrics, geothermal energy
Procedia PDF Downloads 354
36 Evaluating the Effect of 'Terroir' on Volatile Composition of Red Wines
Authors: María Luisa Gonzalez-SanJose, Mihaela Mihnea, Vicente Gomez-Miguel
Abstract:
The zoning methodology currently recommended by the OIV as the official methodology for viticultural zoning studies and for defining and delimiting 'terroirs' was applied in this study. This methodology has been successfully applied to the most significant Spanish oenological D.O. regions, such as Ribera del Duero, Rioja, Rueda, and Toro, and it has also been applied around the world, in Portugal and several South American countries, among others. It is a complex methodology that uses edaphoclimatic data as well as data on vineyards and other soil uses. The methodology is used to determine Homogeneous Soil Units (HSU) at different scales depending on the interest of each study, and it has been applied from whole viticulture regions down to individual vineyards. It appears to be an appropriate method for correctly delimiting the medium in order to enhance its uses and obtain the best viticultural and oenological products. The present work focuses on comparing the volatile composition of wines made from grapes grown in different HSU coexisting in a particular viticulture region of Castile and Leon, near Burgos. Three different HSU were selected for this study, representing around 50% of the total vineyard area of the region. Five vineyards were chosen in each HSU under study. To reduce variability, other criteria were also fixed, such as grape variety, clone, rootstock, vineyard age, training system, and cultural practices. The study was carried out during three consecutive years, so wines from three different vintages were made and analysed. Different red wines were made from grapes harvested in the different vineyards under study. Grapes were harvested at 'technological maturity', which correlates with adequate levels of sugar, acidity, and phenolic content (nowadays termed phenolic maturity), a good sanitary state, and adequate levels of aroma precursors. 
Results for the volatile profiles of the wines produced from grapes of each HSU showed significant differences among them, pointing to a direct effect of the edaphoclimatic characteristics of each HSU on the composition of the grapes and thus on the volatile composition of the wines. The variability induced by HSU coexisted with the well-known inter-annual variability correlated mainly with the specific climatic conditions of each vintage; however, the HSU effect was more intense, so the wines of each HSU were clearly differentiated. A discriminant analysis identified the volatiles with discriminant capacity: 21 of the 74 volatiles analysed. The detected discriminant volatiles were chemically diverse, although most of them were esters, followed by higher alcohols and short-chain fatty acids. Only one lactone and two aldehydes were selected as discriminant variables, and no varietal aroma compounds were selected, which agrees with the fact that all the wines were made from the same grape variety.
Keywords: viticulture zoning, terroir, wine, volatile profile
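A univariate view of the discriminant analysis described above (ranking volatiles by their between-unit versus within-unit variance, ANOVA-style) can be sketched as follows; the data are synthetic stand-ins, and a full analysis would use proper linear discriminant analysis rather than this simple F-ratio screen:

```python
import numpy as np

rng = np.random.default_rng(42)
n_wines, n_volatiles, n_units = 45, 74, 3
units = np.repeat(np.arange(n_units), n_wines // n_units)

# Synthetic concentrations: only the first 21 volatiles differ between HSU
X = rng.normal(size=(n_wines, n_volatiles))
X[:, :21] += units[:, None] * 2.0  # unit-dependent shift on those volatiles

def f_ratio(X, labels):
    """Between-group / within-group sum-of-squares ratio per variable."""
    grand = X.mean(axis=0)
    between = sum((labels == g).sum() * (X[labels == g].mean(axis=0) - grand) ** 2
                  for g in np.unique(labels))
    within = sum(((X[labels == g] - X[labels == g].mean(axis=0)) ** 2).sum(axis=0)
                 for g in np.unique(labels))
    return between / within

scores = f_ratio(X, units)
discriminant = np.argsort(scores)[::-1][:21]  # top 21 candidate volatiles
```

On this toy data the screen recovers exactly the 21 planted discriminant volatiles, mirroring how the study isolated 21 of 74 compounds with discriminant capacity.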
Procedia PDF Downloads 221
35 Effects of Oxidized LDL in M2 Macrophages: Implications in Atherosclerosis
Authors: Fernanda Gonçalves, Karla Alcântara, Vanessa Moura, Patrícia Nolasco, Jorge Kalil, Maristela Hernandez
Abstract:
Introduction: Atherosclerosis is a chronic disease in which two striking features are observed: retention of lipids and inflammation. Understanding the interactions between immune cells and the lipoproteins involved in atherogenesis is an urgent challenge, since cardiovascular diseases are the leading cause of death worldwide. Macrophages are critical to the development of atherosclerotic plaques and to the perpetuation of inflammation in these lesions. These cells are also directly involved in unstable plaque rupture. Recently, different populations of macrophages have been identified in atherosclerotic lesions. Although the presence of M2 macrophages (macrophages activated by the alternative pathway, e.g. by IL-4) has been identified, the function of these cells in atherosclerosis is not yet defined. M2 macrophages have a high endocytic capacity, promote tissue remodeling, and have anti-inflammatory activity. However, in atherosclerosis, and especially in unstable plaques, a severe inflammatory reaction, accumulation of cellular debris, and intense tissue degradation are observed. Thus, it is possible that M2 macrophages have an altered function (phenotype) in atherosclerosis. Objective: Our aim is to evaluate whether the presence of oxidized LDL alters the phenotype and function of M2 macrophages in vitro. Methods: We evaluate whether the addition of the lipoprotein to M2 macrophages differentiated in vitro with IL-4 induces: 1) a reduction in the secretion of anti-inflammatory cytokines (CBA and ELISA); 2) secretion of inflammatory cytokines (CBA and ELISA); 3) expression of cell activation markers (flow cytometry); 4) alteration in gene expression of adhesion molecules and extracellular matrix (real-time PCR); and 5) matrix degradation (confocal microscopy). Results: In oxLDL-stimulated M2 macrophage cultures we did not find any differences in the expression of the cell surface markers tested, including HLA-DR, CD80, CD86, CD206, CD163, and CD36. 
Cultures stimulated with oxLDL also had a phagocytic capacity similar to that of unstimulated cells. However, in the supernatant of these cultures an increase in the secretion of the pro-inflammatory cytokine IL-8 was detected; no significant changes were observed in IL-6, IL-10, IL-12, or IL-1b levels. The culture supernatant also induced massive degradation of extracellular matrix filaments (produced by mouse embryo fibroblasts). When evaluating the expression of 84 extracellular matrix and adhesion molecule genes, we observed that oxLDL stimulation of M2 macrophages decreased the expression of 47% of the genes and increased the expression of only 3%. In particular, oxLDL inhibited the expression of 60% of the genes for extracellular matrix constituents and collagens expressed by these cells, including fibronectin 1 and collagen VI. We also observed a decrease in the expression of matrix protease inhibitors such as TIMP2. In contrast, the matricellular protein thrombospondin showed a 12-fold increase in gene expression. In the presence of native LDL, 90% of the genes showed no altered expression. Conclusion: M2 macrophages stimulated with oxLDL secrete the pro-inflammatory cytokine IL-8, show altered expression of extracellular matrix constituent genes, and promote degradation of the extracellular matrix. M2 macrophages may contribute to the perpetuation of inflammation in atherosclerosis and to plaque rupture.
Keywords: atherosclerosis, LDL, macrophages, M2
Procedia PDF Downloads 335
34 Supercritical Water Gasification of Organic Wastes for Hydrogen Production and Waste Valorization
Authors: Laura Alvarez-Alonso, Francisco Garcia-Carro, Jorge Loredo
Abstract:
Population growth and industrial development imply an increase in energy demands and in the problems caused by greenhouse gas emissions, which has inspired the search for clean sources of energy. Hydrogen (H₂) is expected to play a key role in the world’s energy future by replacing fossil fuels. The properties of H₂ make it a green fuel that does not generate pollutants and supplies sufficient energy for power generation, transportation, and other applications. Supercritical Water Gasification (SCWG) represents an attractive alternative for the recovery of energy from wastes. SCWG allows conversion of a wide range of raw materials into a fuel gas with a high content of hydrogen and light hydrocarbons through their treatment at conditions above those that define the critical point of water (a temperature of 374°C and a pressure of 221 bar). Methane, used as a transport fuel, is another important gasification product. The range of gases and energy forms that can be produced, depending on the kind of material gasified and the type of technology used to process it, shows the flexibility of SCWG. This feature allows it to be integrated with several industrial processes, as well as with power generation systems or waste-to-energy production systems. The final aim of this work is to study which conditions and equipment are the most efficient and advantageous for exploring the possibilities of obtaining streams rich in H₂ from oily wastes, which represent a major problem both for the environment and for human health throughout the world. In this paper, the relative complexity of the technology needed for feasible gasification process cycles is discussed, with particular reference to the different feedstocks that can be used as raw material, different reactors, and energy recovery systems.
For this purpose, a review of the current status of SCWG technologies has been carried out by means of different classifications based on key features such as the feedstock treated or the type of reactor and other apparatus. This analysis makes it possible to improve the technology's efficiency through the study of model calculations and their comparison with experimental data, the establishment of kinetics for chemical reactions, the analysis of how the main reaction parameters affect the yield and composition of products, and the determination of the most common problems and risks that can occur. The results of this work show that SCWG is a promising method for the production of both hydrogen and methane. The most significant design choices are the reactor type and the process cycle, which can be conveniently adopted according to waste characteristics. Regarding the future of the technology, the design of SCWG plants is still to be optimized to include energy recovery systems, in order to reduce the equipment and operation costs derived from the high temperature and pressure conditions that are necessary to bring water to the supercritical state, as well as to find solutions to mitigate corrosion and clogging of reactor components. Keywords: hydrogen production, organic wastes, supercritical water gasification, system integration, waste-to-energy
Procedia PDF Downloads 148
33 Cotton Fabrics Functionalized with Green and Commercial Ag Nanoparticles
Authors: Laura Gonzalez, Santiago Benavides, Martha Elena Londono, Ana Elisa Casas, Adriana Restrepo-Osorio
Abstract:
Cotton products are sensitive to microorganisms due to their ability to retain moisture, which can cause color changes, a reduction in mechanical properties, or the generation of foul odors; consequently, this represents a risk to the health of users. In recent years, research has been carried out to impart antibacterial properties to textiles using different strategies, including the use of silver nanoparticles (AgNPs). The antibacterial behavior can be affected by the laundering process, reducing its effectiveness. On the other hand, the environmental impact generated by synthetic antibacterial agents has motivated the search for new and more ecological ways to produce AgNPs. The aims of this work are to determine the antibacterial activity of cotton fabric functionalized with green (G) and commercial (C) AgNPs after twenty washing cycles, and to evaluate morphological and color changes. A plain-weave cotton fabric suitable for dyeing and two AgNP solutions were used: C, a commercial product, and G, produced using an ecological method. Both solutions, at a concentration of 0.5 mM, were impregnated on the cotton fabric without stabilizer, at a liquor-to-fabric ratio of 1:20 under constant agitation for 30 min, and then dried at 70 °C for 10 min. After that, the samples were subjected to twenty washing cycles using a phosphate-free detergent, simulated in an agitated flask at 150 rpm; the samples were then centrifuged and tumble dried. The samples were characterized using the Kirby-Bauer test to determine antibacterial activity against E. coli and S. aureus; the results were recorded by photographs establishing the inhibition halo before and after the washing cycles, and the tests were conducted in triplicate. Scanning electron microscopy (SEM) was used to observe the morphology of the untreated cotton fabric and the treated samples. The color changes of the cotton fabrics relative to the untreated samples were obtained by spectrophotometric analysis.
The images reveal the presence of an inhibition halo in the samples treated with the C and G AgNP solutions, even after twenty washing cycles, which indicates good antibacterial activity and washing durability, with a tendency toward better results against S. aureus. The presence of AgNPs on the surface of the cotton fibers and the associated morphological changes were observed by SEM, before and after the washing cycles. The natural color of the cotton fiber was significantly altered by both antibacterial solutions. According to the colorimetric results, the samples treated with C tended toward yellowing, while the samples modified with G tended toward a reddish yellowing. Cotton fabrics treated with AgNPs C and G from 0.5 mM solutions exhibited excellent antimicrobial activity against E. coli and S. aureus with good laundering durability. The surface of the cotton fibers was modified by the presence of AgNPs C and G and their agglomerates. There are significant changes in the natural color of the cotton fabric due to the deposition of AgNPs C and G, which were maintained after the laundering process. Keywords: antibacterial property, cotton fabric, fastness to wash, Kirby-Bauer test, silver nanoparticles
Procedia PDF Downloads 247
32 Comparison of Machine Learning-Based Models for Predicting Streptococcus pyogenes Virulence Factors and Antimicrobial Resistance
Authors: Fernanda Bravo Cornejo, Camilo Cerda Sarabia, Belén Díaz Díaz, Diego Santibañez Oyarce, Esteban Gómez Terán, Hugo Osses Prado, Raúl Caulier-Cisterna, Jorge Vergara-Quezada, Ana Moya-Beltrán
Abstract:
Streptococcus pyogenes is a gram-positive bacterium involved in a wide range of diseases and is a major human-specific bacterial pathogen. In Chile, the 'Ministerio de Salud' declared an alert this year due to the increase in strains throughout the year. This increase can be attributed to a multitude of factors, including antimicrobial resistance (AMR) and virulence factors (VF). Understanding these VF and AMR is crucial for developing effective strategies and improving public health responses. Moreover, experimental identification and characterization of these pathogenic mechanisms are labor-intensive and time-consuming. Therefore, new computational methods are required to provide robust techniques for accelerating this identification. Advances in machine learning (ML) algorithms represent an opportunity to refine and accelerate the discovery of VF associated with Streptococcus pyogenes. In this work, we evaluate the accuracy of various machine learning models in predicting the virulence factors and antimicrobial resistance of Streptococcus pyogenes, with the objective of providing new methods for identifying the pathogenic mechanisms of this organism. Our comprehensive approach involved the download of 32,798 GenBank files of S. pyogenes from the NCBI database, coupled with the incorporation of data from the Virulence Factor Database (VFDB) and the Comprehensive Antibiotic Resistance Database (CARD), which contains AMR gene sequences and resistance profiles. These datasets provided labeled examples of both virulent and non-virulent genes, establishing a robust foundation for feature extraction and model training. We applied preprocessing, characterization, and feature extraction techniques to the primary nucleotide/amino acid sequences and selected the optimal features for model training. The feature set was constructed using sequence-based descriptors (e.g., k-mers and one-hot encoding) and functional annotations based on database predictions.
The ML models compared include logistic regression, decision trees, support vector machines, and neural networks, among others. The results of this work show some differences in accuracy between the algorithms; these differences allow us to identify aspects that represent unique opportunities for a more precise and efficient characterization and identification of VF and AMR. This comparative analysis underscores the value of integrating machine learning techniques in predicting S. pyogenes virulence and AMR, offering potential pathways for more effective diagnostic and therapeutic strategies. Future work will focus on incorporating additional omics data, such as transcriptomics, and exploring advanced deep learning models to further enhance predictive capabilities. Keywords: antibiotic resistance, Streptococcus pyogenes, virulence factors, machine learning
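As an illustration of the sequence-based descriptors mentioned in this abstract, a k-mer count vector can be built in a few lines; the sequence and the value of k below are toy values for demonstration, not drawn from the study's dataset.

```python
from itertools import product

def kmer_counts(seq, k=3, alphabet="ACGT"):
    """Count every possible k-mer in a nucleotide sequence, returning a
    fixed-length feature vector (4**k entries for DNA) suitable as input
    to classifiers such as logistic regression or SVMs."""
    kmers = ["".join(p) for p in product(alphabet, repeat=k)]
    index = {km: i for i, km in enumerate(kmers)}
    vec = [0] * len(kmers)
    for i in range(len(seq) - k + 1):
        km = seq[i:i + k]
        if km in index:          # skip windows with ambiguous bases (e.g., 'N')
            vec[index[km]] += 1
    return vec

# Toy sequence, not real S. pyogenes data
features = kmer_counts("ACGTACGTAC", k=2)
```

Each gene sequence then maps to one such fixed-length row, so a labeled set of virulent/non-virulent genes becomes an ordinary feature matrix for model training.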
Procedia PDF Downloads 37
31 Moral Decision-Making in the Criminal Justice System: The Influence of Gruesome Descriptions
Authors: Michel Patiño-Sáenz, Martín Haissiner, Jorge Martínez-Cotrina, Daniel Pastor, Hernando Santamaría-García, Maria-Alejandra Tangarife, Agustin Ibáñez, Sandra Baez
Abstract:
It has been shown that gruesome descriptions of harm can increase the punishment given to a transgressor. This biasing effect is mediated by negative emotions, which are elicited upon the presentation of gruesome descriptions. However, there is a lack of studies investigating the influence of such descriptions on moral decision-making in people involved in the criminal justice system. Such populations are of special interest since they have experience dealing with gruesome evidence, but also formal education on how to assess evidence and gauge the appropriate punishment according to the law. Likewise, they are expected to be objective and rational when performing their duty, because their decisions can profoundly impact people's lives. Considering these antecedents, the objective of this study was to explore the influence of gruesome written descriptions on moral decision-making in this group of people. To that end, we recruited attorneys, judges, and public prosecutors (criminal justice group, CJ, n=30) whose field of specialty is criminal law. In addition, we included a control group of people who did not have a formal education in law (n=30) but who were matched in age and years of education with the CJ group. All participants completed an online, Spanish-adapted version of a moral decision-making task, which had previously been reported in the literature and standardized and validated in the Latin American context. A series of text-based stories describing two characters, one inflicting harm on the other, were presented to the participants. The transgressor's intentionality (accidental vs. intentional harm) and the language (gruesome vs. plain) used to describe the harm were manipulated employing a within-subjects and a between-subjects design, respectively. After reading each story, participants were asked to rate (a) the moral adequacy of the harmful action, (b) the punishment the transgressor deserved, and (c) how damaging the behavior was.
Results showed main effects of group, intentionality, and type of language on all dependent measures. In both groups, intentional harmful actions were rated as significantly less morally adequate, were punished more severely, and were deemed more damaging. Moreover, control subjects deemed all types of actions more damaging and punished them more severely than the CJ group did. In addition, there was an interaction between intentionality and group: people in the control group rated harmful actions as less morally adequate than the CJ group, but only when the action was accidental. There was also an interaction between intentionality and language on punishment ratings: controls punished more when harm was described using gruesome language. However, this was not the case for people in the CJ group, who assigned the same amount of punishment in both conditions. In conclusion, participants with work experience in the criminal justice system or criminal law differ in the way they make moral decisions. In particular, it seems that they are less sensitive to the biasing effect of gruesome evidence, which is probably explained by their formal education or their experience in dealing with such evidence. Nonetheless, more studies are needed to determine the impact this phenomenon has on the fulfillment of their duty. Keywords: criminal justice system, emotions, gruesome descriptions, intentionality, moral decision-making
Procedia PDF Downloads 190
30 Integration of a Protective Film to Enhance the Longevity and Performance of Miniaturized Ion Sensors
Authors: Antonio Ruiz Gonzalez, Kwang-Leong Choy
Abstract:
The measurement of electrolytes has high value in the clinical routine. Ions are present in all body fluids at variable concentrations and are involved in multiple pathologies such as heart failure and chronic kidney disease. In the case of dissolved potassium, although a high concentration in the blood (hyperkalemia) is relatively uncommon in the general population, it is one of the most frequent acute electrolyte abnormalities. In recent years, the integration of thin film technologies in this field has allowed the development of highly sensitive biosensors with ultra-low limits of detection for the assessment of metals in liquid samples. However, despite current efforts in the miniaturization of sensitive devices and their integration into portable systems, only a limited number of commercially successful examples can be found. This fact can be attributed to the high cost involved in their production and the sustained degradation of the electrodes over time, which causes a signal drift in the measurements. Thus, there is an unmet need for low-cost and robust sensors for the real-time monitoring of analyte concentrations in patients, to allow the early detection and diagnosis of diseases. This paper reports a thin film ion-selective sensor for the evaluation of potassium ions in aqueous samples. As an alternative fabrication method, aerosol-assisted chemical vapor deposition (AACVD) was applied due to its cost-effectiveness and fine control over film deposition. This technique does not require a vacuum and is suitable for the coating of large surface areas and structures with complex geometries. This approach allowed the fabrication of highly homogeneous surfaces with well-defined microstructures onto 50 nm thick gold layers.
The degradative processes of the ubiquitously employed poly(vinyl chloride) membranes in contact with an electrolyte solution were studied, including the polymer leaching process, the mechanical desorption of nanoparticles, and chemical degradation over time. Rational design of a protective coating based on an organosilicon material in combination with cellulose to improve the long-term stability of the sensors was then carried out, showing an improvement in performance after 5 weeks. The antifouling properties of this coating were assessed using a cutting-edge quartz crystal microbalance sensor, allowing the quantification of the adsorbed proteins in the nanogram range. A correlation between the microstructural properties of the films, the surface energy, and biomolecule adhesion was then found and used to optimize the protective film. Keywords: hyperkalemia, drift, AACVD, organosilicon
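The potentiometric response of ion-selective electrodes like the one described here is conventionally modeled with the Nernst equation; the sketch below (not from the paper, with an illustrative standard potential E0 = 0) computes the ideal ~59 mV-per-decade slope expected for a monovalent ion such as K⁺ at 25 °C, which is the benchmark against which sensor drift is usually judged.

```python
import math

R = 8.314      # gas constant, J/(mol*K)
F = 96485.0    # Faraday constant, C/mol

def nernst_potential(activity, z=1, T=298.15, E0=0.0):
    """Ideal electrode potential (V) for an ion of charge z at a given
    activity, relative to a standard potential E0 (illustrative value)."""
    return E0 + (R * T) / (z * F) * math.log(activity)

# Slope per decade of activity: ~59.2 mV for K+ (z = 1) at 25 degC
slope_mV = (nernst_potential(1e-2) - nernst_potential(1e-3)) * 1000
```

A real membrane electrode deviates from this ideal slope as it degrades, which is one way the drift discussed above manifests in calibration.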
Procedia PDF Downloads 124
29 A Strategy to Reduce Salt Intake: The Use of a Seasoning Obtained from Wine Pomace
Authors: María Luisa Gonzalez-SanJose, Javier Garcia-Lomillo, Raquel Del Pino, Miriam Ortega-Heras, Maria Dolores Rivero-Perez, Pilar Muñiz-Rodriguez
Abstract:
One of the most pressing problems related to the diet of Western societies is high salt intake. In Spain, salt intake is almost twice that recommended by the World Health Organization (WHO). Many negative health effects of high sodium intake have been described, with hypertension and cardiovascular and coronary diseases among the most important. For this reason, governments and other institutions are working on the gradual reduction of this consumption. Meat products have been described as the main processed products contributing salt to the diet, followed by snacks and savory crackers. Fortunately, the food industry has also become aware of this problem and in recent years has been working intensely to reduce the salt content in processed products, developing special lines with a low sodium content. It is important to consider that processed foods are the main source of sodium in Western countries. One possible strategy to reduce the salt content in food is to find substitutes that can emulate its taste properties without adding much sodium, or products that mask or replace salty sensations with other flavors and aromas. In this sense, multiple products have been proposed and used until now. Potassium salts produce similar salty sensations without adding sodium; however, their intake should also be limited for health reasons. Furthermore, some potassium salts show bitter notes. Other alternatives are the use of flavor enhancers, spices, aromatic herbs, seaweed-derived products, etc. Wine pomace is rich in potassium salts and contains organic acids and other flavor-active substances; therefore, it could be an interesting raw material for derived products useful as alternative 'seasonings'. Considering the previous comments, the main aim of this study was to evaluate the possible use of a natural seasoning, made from red wine pomace, in two different foods: crackers and burgers.
The seasoning was made in the food technology pilot plant of the University of Burgos, where the studied crackers and burgers were also made. Different members of the University (students, teaching staff, and administrative personnel) tasted the products, and a trained panel evaluated salty intensity. In addition to potassium, the seasoning contains significant levels of dietary fiber and phenolic compounds, which also makes it interesting as a functional ingredient. Both burgers and crackers made with the seasoning showed a better taste than those without salt. Obviously, they showed a lower sodium content than the normal formulations and were richer in potassium, antioxidants, and fiber. They therefore showed lower Na/K ratios. All these facts correlate with healthier products, especially for people with hypertension and other coronary dysfunctions. Keywords: healthy foods, low salt, seasoning, wine pomace
Procedia PDF Downloads 275
28 The Language of COVID-19: Psychological Effects of the Label 'Essential Worker' on Spanish-Speaking Adults
Authors: Natalia Alvarado, Myldred Hernandez-Gonzalez, Mary Laird, Madeline Phillips, Elizabeth Miller, Luis Mendez, Teresa Satterfield Linares
Abstract:
Objectives: Focusing on the reported levels of depressive symptoms among Hispanic individuals in the U.S. during the ongoing COVID-19 pandemic, we analyze the psychological effects of being labeled an ‘essential worker/trabajador(a) esencial.’ We situate this attribute within the complex context of how an individual’s mental health is linked to work status and their community’s attitude toward that status. Method: 336 Spanish-speaking adults (Mage = 34.90; SD = 11.00; 46% female) living in the U.S. participated in a mixed-methods study. Participants completed a self-report Spanish-language survey consisting of COVID-19 prompts (e.g., Soy un trabajador esencial durante la pandemia. I am an ‘essential worker’ during the pandemic), civic engagement scale (CES) attitudes (e.g., Me siento responsable de mi comunidad. I feel responsible for my community) and behaviors (e.g., Ayudo a los miembros de mi comunidad. I help members of my community), and the Center for Epidemiologic Studies Depression Scale (e.g., Me sentía deprimido/a. I felt depressed). The survey was conducted several months into the pandemic and before vaccine distribution. Results: Regression analyses show that being labeled an essential worker was correlated with CES attitudes (b = .28, p < .001) and higher CES behaviors (b = .32, p < .001). Participants with essential worker status also reported higher levels of depressive symptoms (b = .17, p < .05). In addition, we found that CES attitudes and CES behaviors were related to higher levels of depressive symptoms (b = .11, p < .05 and b = .22, p < .001, respectively). These findings suggest that those who are on the frontlines during the COVID-19 pandemic suffer higher levels of depressive symptoms, despite their affirming community attitudes and behaviors. Discussion: Hispanics/Latinxs make up 53% of high-proximity employees who must work in person and in close contact with others; this is the highest rate of any racial or ethnic category.
Moreover, 31% of Hispanics are classified as essential workers. Our outcomes show that those labeled trabajadores esenciales convey attitudes of remaining strong and resilient for COVID-19 victims. They also express community attitudes and behaviors reflecting a sense of responsibility to continue working to help others during these unprecedented times. However, we also find that the pressure of meeting the basic needs of others exacerbates mental health challenges and stressors, as many essential workers are anxious and stressed about their physical and economic security. As a result, community attitudes do not protect against depressive symptoms, as Hispanic essential workers fail to balance everyone’s needs, including their own (e.g., physical exhaustion and psychological distress). We conclude with a discussion of alternatives to the phrase ‘essential worker’ and of incremental steps that can be taken to address pandemic-related mental health issues affecting U.S. Hispanic workers. Keywords: COVID-19, essential worker, mental health, race and ethnicity
Procedia PDF Downloads 129
27 The Combined Use of L-Arginine and Progesterone During the Post-breeding Period in Female Rabbits Increases the Weight of Their Fetuses
Authors: Diego F. Carrillo-González, Milena Osorio, Natalia M. Cerro, Yasser Y. Lenis
Abstract:
Introduction: mortality during the implantation and early embryonic development periods reaches around 30% in different mammalian species. It has been described that progesterone (P4) and arginine (Arg) play a beneficial role in establishing and maintaining early pregnancy in mammals. The combined effect of Arg and P4 on reproductive parameters in rabbits has not yet been elucidated, to the best of our knowledge. Objective: to assess the effect of L-arginine and progesterone administered during the post-breeding period in female rabbits on the composition of the amniotic fluid, the placental structure, and bone growth in their fetuses. Methods: crossbred female rabbits (n=16) were randomly distributed into four experimental groups (Ctrl, Arg, P4, and Arg+P4). The control group received 0.9% saline solution as a placebo; the Arg group received arginine (50 mg/kg BW) from day 4.5 to day 19 post-breeding; the P4 group received progesterone (Gestavec®, 1.5 mg/kg BW) from 24 hours to day 4 post-breeding; and the Arg+P4 group received both treatments under the same time and dose guidelines as the Arg and P4 groups. Four females were sacrificed, and the amniotic fluid was collected and analyzed with rapid urine test strips, while the placentas and fetuses were processed in the laboratory to obtain histological slides. The percentages of the decidual, labyrinthine, and junctional zones were determined, and the length of the femur of each fetus was measured as an indicator of growth. Descriptive statistics were applied to identify the success rates for each of the tests. Afterwards, a one-way analysis of variance (ANOVA) was performed, and means were compared by Tukey's test. Results: a higher density (p<0.05) was observed in the amniotic fluid of fetuses in the control group (1022±2.5 g/mL) compared to the P4 (1015±5.3 g/mL) and Arg+P4 (1016±4.9 g/mL) groups.
Additionally, the density of the amniotic fluid in the Arg group (1021±2.5 g/mL) was higher (p<0.05) than in the P4 group. The concentrations of protein, glucose, and ascorbic acid showed no statistical difference between treatments (p>0.05). In the histological analysis of the uteroplacental regions, a statistical difference (p<0.05) in the proportion of the decidual zone was found between the P4 group (9.6±2.6%) and the Ctrl (28.15±12.3%) and Arg+P4 (26.3±4.9%) groups. In the analysis of the fetuses, the weight was higher in the Arg group (2.69±0.18) than in the other groups (p<0.05), while a shorter length was observed (p<0.05) in the fetuses of the Arg+P4 group (25.97±1.17). However, no difference (p>0.05) was found when comparing the length of the developing femurs between the experimental groups. Conclusion: the combination of L-arginine and progesterone reduces the density of the amniotic fluid without affecting its protein, energy, and antioxidant components. However, the use of L-arginine stimulates weight gain in fetuses without affecting size, which could be used to improve production parameters in rabbit production systems. In addition, the modification of the decidual zone could reflect a placental adaptation to the fetal growth process; however, more specific studies of the placentation process are required. Keywords: arginine, progesterone, rabbits, reproduction
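The one-way ANOVA used in this study reduces to comparing between-group and within-group variability; a minimal sketch of the F statistic, with illustrative toy numbers (not the study's measurements), is:

```python
def one_way_anova_F(*groups):
    """F statistic for a one-way ANOVA: mean square between groups
    divided by mean square within groups."""
    k = len(groups)                              # number of groups
    n = sum(len(g) for g in groups)              # total observations
    grand = sum(sum(g) for g in groups) / n      # grand mean
    ss_between = sum(len(g) * (sum(g) / len(g) - grand) ** 2 for g in groups)
    ss_within = sum(sum((x - sum(g) / len(g)) ** 2 for x in g)
                    for g in groups)
    return (ss_between / (k - 1)) / (ss_within / (n - k))

# Toy density-style data for three treatment groups (illustrative only)
F = one_way_anova_F([1022, 1023, 1021], [1015, 1016, 1014], [1016, 1017, 1015])
```

In practice, the F statistic is compared against the F distribution with (k-1, n-k) degrees of freedom, and a post hoc test such as Tukey's (as used in the study) identifies which group pairs differ.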
Procedia PDF Downloads 91
26 An in silico Approach for Exploring the Intercellular Communication in Cancer Cells
Authors: M. Cardenas-Garcia, P. P. Gonzalez-Perez
Abstract:
Intercellular communication is a necessary condition for cellular functions, and it allows a group of cells to survive as a population. Through this interaction, the cells work in a coordinated and collaborative way, which facilitates their survival. Cancer cells take advantage of intercellular communication to preserve their malignancy, since through these physical junctions they can send malignant signals. The Wnt/β-catenin signaling pathway plays an important role in the formation of intercellular communications and is also involved in a large number of cellular processes such as proliferation, differentiation, adhesion, cell survival, and cell death. The modeling and simulation of cellular signaling systems have found valuable support in a wide range of modeling approaches, covering a spectrum that ranges from mathematical models (e.g., ordinary differential equations, statistical methods, and numerical methods) to computational models (e.g., process algebras for modeling behavior and variation in molecular systems). Based on these models, different simulation tools have been developed, from mathematical to computational ones. Regarding cellular and molecular processes in cancer, their study has also found valuable support in different simulation tools that have allowed the in silico experimentation of this phenomenon at the cellular and molecular level. In this work, we simulate and explore the complex interaction patterns of intercellular communication in cancer cells using Cellulat, a computational simulation tool developed by us and motivated by two key elements: 1) a biochemically inspired model of self-organizing coordination in tuple spaces, and 2) Gillespie’s algorithm, a stochastic simulation algorithm typically used to mimic systems of chemical/biochemical reactions in an efficient and accurate way.
The main idea behind the Cellulat simulation tool is to provide an in silico experimentation environment that complements and guides in vitro experimentation on intra- and intercellular signaling networks. Unlike most cell signaling simulation tools, such as E-Cell, BetaWB, and Cell Illustrator, which provide abstractions to model only intracellular behavior, Cellulat is appropriate for modeling both intracellular signaling and intercellular communication, providing the abstractions required to model, and as a result simulate, the interaction mechanisms that involve two or more cells, which is essential in the scenario discussed in this work. During the development of this work, we demonstrated the application of our computational simulation tool (Cellulat) to the modeling and simulation of intercellular communication between normal and cancerous cells, and in this way proposed key molecules that may prevent the arrival of malignant signals at the cells that surround the tumor cells. In this manner, we could identify the significant role of the Wnt/β-catenin signaling pathway in cellular communication and, therefore, in the dissemination of cancer cells. We verified, using in silico experiments, how the inhibition of this signaling pathway prevents the cells surrounding a cancerous cell from being transformed. Keywords: cancer cells, in silico approach, intercellular communication, key molecules, modeling and simulation
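Gillespie's algorithm, cited above as one of Cellulat's two key elements, draws an exponentially distributed waiting time from the total propensity and then picks the next reaction in proportion to its individual propensity. The following is a minimal generic sketch of that scheme applied to a toy birth-death system, not Cellulat's actual implementation.

```python
import random

def gillespie(state, reactions, t_max, seed=0):
    """Minimal Gillespie stochastic simulation sketch.
    'reactions' is a list of (propensity_fn, update_fn) pairs that read
    and mutate a state dict; returns the (time, state) trajectory."""
    rng = random.Random(seed)
    t = 0.0
    history = [(t, dict(state))]
    while t < t_max:
        props = [p(state) for p, _ in reactions]
        total = sum(props)
        if total == 0:                         # no reaction can fire
            break
        t += rng.expovariate(total)            # time to the next event
        r = rng.uniform(0, total)              # choose which reaction fires
        acc = 0.0
        for prop, (_, update) in zip(props, reactions):
            acc += prop
            if r <= acc:
                update(state)
                break
        history.append((t, dict(state)))
    return history

# Toy birth-death process: X -> X+1 at rate 1.0, X -> X-1 at rate 0.1*X
birth = (lambda s: 1.0, lambda s: s.__setitem__("X", s["X"] + 1))
death = (lambda s: 0.1 * s["X"], lambda s: s.__setitem__("X", s["X"] - 1))
trace = gillespie({"X": 5}, [birth, death], t_max=10.0)
```

In a signaling context, each (propensity, update) pair would correspond to a biochemical reaction of the modeled pathway, with propensities derived from rate constants and current molecule counts.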
Procedia PDF Downloads 251