Search results for: Lisa Manuel
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 263

23 Stakeholder Mapping and Requirements Identification for Improving Traceability in the Halal Food Supply Chain

Authors: Laila A. H. F. Dashti, Tom Jackson, Andrew West, Lisa Jackson

Abstract:

Traceability systems are being developed and tested in the agri-food and halal food sectors because of their ability to monitor ingredient movements, track sources, and detect potential issues related to food integrity. Designing a traceability system for the halal food supply chain, however, poses significant challenges due to diverse stakeholder requirements and the complexity of their needs (including varying food ingredients, different sources and destinations, supplier processes, certifications, etc.). Achieving a halal food traceability solution tailored to stakeholders' requirements within the supply chain necessitates prior knowledge of these needs. Although attempts have been made to address design-related issues in traceability systems, literature on stakeholder mapping and identification of requirements specific to halal food supply chains is scarce. This pilot study therefore aims to identify the objectives, requirements, and recommendations of stakeholders in the Kuwaiti halal food industry. The paper presents insights gained from the pilot study, which used semi-structured interviews to collect data from a Kuwait-based international halal food manufacturer. The objective was to gain an in-depth understanding of stakeholders' objectives, requirements, processes, and concerns pertaining to the design of a traceability system in Kuwait's halal food sector. The stakeholder mapping results revealed that government entities, food manufacturers, retailers, and suppliers are key stakeholders in Kuwait's halal food supply chain. Lessons learned from this pilot study regarding requirement capture for traceability systems include the need to streamline communication, focus on communication at each level of the supply chain, and leverage innovative technologies to enhance process structuring and operations and to reduce halal certification costs. The findings also emphasized the limitations of existing traceability solutions, such as limited cooperation and collaboration among stakeholders, the high cost of implementing traceability systems without government support, lack of clarity regarding product routes, and disrupted communication channels between stakeholders. These findings contribute to a broader research program aimed at developing a stakeholder requirements framework that utilizes business process modelling to establish a unified model for traceable stakeholder requirements.

Keywords: supply chain, traceability system, halal food, stakeholders’ requirements

Procedia PDF Downloads 78
22 Genetic Diversity of Cord Blood of the National Center of Blood Transfusion, Mexico (NCBT)

Authors: J. Manuel Bello-López, Julieta Rojo-Medina

Abstract:

Introduction: The transplant of Umbilical Cord Blood Units (UCBU) is a therapeutic possibility for patients with oncohaematological disorders, especially in children. In Mexico, 48.5% of oncological diseases in children 1-4 years old are leukemias, whereas in patients 5-14 and 15-24 years old, lymphomas and leukemias represent the second and third cause of death in these groups, respectively. It is therefore necessary to have more registries of UCBU in order to ensure genetic diversity in the country, because the search for an appropriate UCBU is increasingly difficult for patients of mixed ethnicity. Objective: To estimate the genetic diversity (polymorphisms) of Human Leucocyte Antigen (HLA) Class I (A, B) and Class II (DRB1) in UCBU cryopreserved for transplant at the Cord Blood Bank of the NCBT. Material and Methods: HLA typing of 533 UCBU for transplant was performed from 2003 to 2012 at the Histocompatibility Laboratory of the Research Department of the NCBT (evaluated by the Los Angeles, CA, Immunogenetics Center). Class I HLA-A and HLA-B and Class II HLA-DRB1 typing was performed using medium-resolution Sequence-Specific Primer (SSP) typing. In cases of an ambiguity detected by SSP, the Sequence-Specific Oligonucleotide (SSO) method was carried out. A strict analysis of population genetic parameters was done in 5 representative UCBU populations. Results: 46.5% of UCBU were collected from Mexico City, followed by the State of Mexico (30.95%), Puebla (8.06%), Morelos (6.37%) and Veracruz (3.37%). The remaining UCBU (4.75%) are represented by other states. The identified genotypes correspond to Amerindian origins (HLA-A*02, 31; HLA-B*39, 15, 48), Caucasian (HLA-A*02, 68, 01, 30, 31; HLA-B*35, 15, 40, 44, 07 and HLA-DRB1*04, 08, 07, 15, 03, 14), Oriental (HLA-A*02, 30, 01, 31; HLA-B*35, 39, 15, 40, 44, 07, 48 and HLA-DRB1*04, 07, 15, 03) and African (HLA-A*30 and HLA-DRB1*03). The genetic distances obtained by Cavalli-Sforza analysis of the five states showed significant genetic differences when comparing genetic frequencies. The shortest genetic distance exists between Mexico City and the state of Puebla (0.0039) and the largest between Veracruz and Morelos (0.0084). In order to identify significant differences between these states, an ANOVA test was performed, demonstrating that the UCBU are significantly different according to their origin (P < 0.05); this is shown by the divergence between arms in the Neighbor-Joining dendrogram. Conclusions: The NCBT provides UCBU to patients with oncohaematological disorders throughout the country. There is a group of patients for whom no compatible UCBU can be found due to their mixed ethnic origin; for example, the population of northern Mexico is mostly Caucasian. Most of the NCBT donors are of various ethnic origins, predominantly Amerindian and Caucasian, although some ethnic minorities such as Oriental, African and pure Indian ethnic groups are not represented. The NCBT is, therefore, establishing agreements with different states of Mexico to promote the altruistic donation of umbilical cord blood in order to enrich the genetic diversity in its files.
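
As a rough illustration of the distance measure referenced above, the sketch below computes a Cavalli-Sforza and Edwards chord distance from single-locus allele frequencies; the frequencies and population labels are hypothetical, not the NCBT data.

```python
import numpy as np

def chord_distance(p, q):
    """Cavalli-Sforza & Edwards (1967) chord distance for one locus.

    p, q: allele frequencies for the same alleles in the same order,
    each summing to 1 for its population.
    """
    cos_theta = np.sum(np.sqrt(np.asarray(p) * np.asarray(q)))
    cos_theta = min(cos_theta, 1.0)            # guard against rounding error
    return (2.0 / np.pi) * np.sqrt(2.0 * (1.0 - cos_theta))

# Hypothetical HLA-A allele frequencies for two donor populations
mexico_city = [0.45, 0.30, 0.15, 0.10]
puebla      = [0.43, 0.32, 0.14, 0.11]

print(f"chord distance: {chord_distance(mexico_city, puebla):.4f}")
```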

Keywords: cord blood, genetic diversity, human leucocyte antigen, transplant

Procedia PDF Downloads 359
21 Discovering the Effects of Meteorological Variables on the Air Quality of Bogota, Colombia, by Data Mining Techniques

Authors: Fabiana Franceschi, Martha Cobo, Manuel Figueredo

Abstract:

Bogotá, the capital of Colombia, is its largest city and one of the most polluted in Latin America due to fast economic growth over the last ten years. Bogotá has been affected by high-pollution events that led to high concentrations of PM10 and NO2 exceeding the local 24-hour legal limits (100 and 150 µg/m³, respectively). The most important pollutants in the city are PM10 and PM2.5 (which are associated with respiratory and cardiovascular problems), and it is known that their concentrations in the atmosphere depend on local meteorological factors. Therefore, it is necessary to establish the relationships between the meteorological variables and the concentrations of atmospheric pollutants such as PM10, PM2.5, CO, SO2, NO2 and O3. This study aims to determine the interrelations between meteorological variables and air pollutants in Bogotá using data mining techniques. Data from 13 monitoring stations were collected from the Bogotá Air Quality Monitoring Network for the period 2010-2015. The Principal Component Analysis (PCA) algorithm was applied to obtain preliminary relations between all the parameters, and afterwards the K-means clustering technique was implemented to corroborate those relations and to find patterns in the data. PCA was also applied on a per-shift basis (morning, afternoon, night and early morning) to check for possible variation of the previous trends, and on a per-year basis to verify that the identified trends remained throughout the study period. Results showed that wind speed, wind direction, temperature, and NO2 are the factors with the greatest influence on PM10 concentrations. Furthermore, it was confirmed that high-humidity episodes increased PM2.5 levels. It was also found that O3 levels are directly proportional to wind speed and radiation, while there is an inverse relationship between O3 levels and humidity. Concentrations of SO2 increase with the presence of PM10 and decrease with wind speed and wind direction. The results also showed a decreasing trend in pollutant concentrations over the last five years and that, in rainy periods (March-June and September-December), some trends related to precipitation were stronger. Results obtained with K-means demonstrated that it was possible to find patterns in the data, and they also showed similar conditions and data distribution among the Carvajal, Tunal and Puente Aranda stations, and also between Parque Simon Bolivar and Las Ferias. It was verified that the aforementioned trends prevailed during the study period by applying the same technique per year. It was concluded that the PCA algorithm is useful for establishing preliminary relationships among variables, and K-means clustering for finding patterns in the data and understanding its distribution. The discovery of patterns in the data allows using these clusters as input to an Artificial Neural Network prediction model.
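
A minimal sketch of the analysis pipeline described above (standardize, PCA for preliminary relations, K-means for patterns), assuming a hypothetical CSV file and column names rather than the actual Bogotá Air Quality Monitoring Network data:

```python
import pandas as pd
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans

# Hypothetical hourly records: meteorological variables plus pollutants
df = pd.read_csv("bogota_air_quality.csv")        # assumed file and columns
features = ["wind_speed", "wind_dir", "temperature", "humidity",
            "radiation", "precipitation", "PM10", "PM2.5",
            "CO", "SO2", "NO2", "O3"]
clean = df[features].dropna()
X = StandardScaler().fit_transform(clean)

# PCA: preliminary relationships between variables (loadings on the PCs)
pca = PCA(n_components=3).fit(X)
loadings = pd.DataFrame(pca.components_.T, index=features,
                        columns=["PC1", "PC2", "PC3"])
print(loadings.round(2))
print("explained variance:", pca.explained_variance_ratio_.round(2))

# K-means: corroborate those relations and look for groups/patterns
labels = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(X)
print(clean.assign(cluster=labels).groupby("cluster").mean().round(1))
```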

Keywords: air pollution, air quality modelling, data mining, particulate matter

Procedia PDF Downloads 234
20 Evaluation of Monoterpene Induction in Ugni molinae Ecotypes Subjected to Red Grape Caterpillar (Lepidoptera: Arctiidae) Herbivory

Authors: Manuel Chacon-Fuentes, Leonardo Bardehle, Marcelo Lizama, Claudio Reyes, Andres Quiroz

Abstract:

The insect-plant interaction is a complex process in which the plant releases chemical signals that modify the behavior of insects. Insect herbivory can trigger mechanisms that increase the production of secondary metabolites, allowing the plant to cope with herbivores. Monoterpenes are a class of secondary metabolites involved in direct defense, acting as repellents of herbivores, or in indirect defense, acting as attractants for insect predators. In addition, an increase in monoterpene concentration is an effect commonly associated with herbivory; hence, plants subjected to herbivory damage increase monoterpene production in comparison with undamaged plants. In this framework, co-evolutionary aspects play a fundamental role in the adaptation of herbivores to their host and in the counter-adaptive strategies of the plants to avoid them. In this context, Ugni molinae 'murtilla' is a native shrub from Chile characterized by its antioxidant activity, mainly related to the phenolic compounds present in its fruits. The larval stage of the red grape caterpillar Chilesia rudis Butler (Lepidoptera: Arctiidae) has been reported as an important defoliator of U. molinae. This insect is native to Chile and has probably been involved in a co-evolutionary process with murtilla. Therefore, we hypothesized that herbivory by the red grape caterpillar increases the emission of monoterpenes in Ugni molinae. Ecotypes 19-1 and 22-1 of murtilla were established and maintained at 25 °C in the Laboratorio de Química Ecológica at Universidad de La Frontera. Red grape caterpillars of ~40 mm were collected from grasses near Temuco (Chile) and deprived of food for 24 h before the assays. Ten caterpillars were placed on the foliage of ecotypes 19-1 and 22-1 and allowed to feed for 48 h. After this time, the caterpillars were removed from the ecotypes and monoterpenes were collected. A glass chamber was used to enclose the ecotypes, and a Porapak-Q column was used to trap the monoterpenes. After 24 h of volatile capture, the columns were desorbed with hexane. The samples were then injected into a gas chromatograph coupled to a mass spectrometer, and monoterpenes were identified according to the NIST library. All experiments were performed in triplicate. Results showed that α-pinene, β-phellandrene, limonene, and 1,8-cineole were the main monoterpenes released by the murtilla ecotypes. For ecotype 19-1, the abundance of α-pinene was significantly higher in plants subjected to herbivory (100%) than in control plants (54.58%). Moreover, β-phellandrene and 1,8-cineole were observed only in control plants. For ecotype 22-1, there was no significant difference in monoterpene abundance. In conclusion, the results suggest a trade-off of β-phellandrene and 1,8-cineole in response to herbivory damage by the red grape caterpillar, generating an increase in α-pinene abundance.

Keywords: Chilesia rudis, gas chromatography, monoterpenes, Ugni molinae

Procedia PDF Downloads 129
19 Hydrological-Economic Modeling of Two Hydrographic Basins of the Coast of Peru

Authors: Julio Jesus Salazar, Manuel Andres Jesus De Lama

Abstract:

There are very few models that serve to analyze the use of water in the socio-economic process. On the supply side, the joint use of groundwater has been considered in addition to the simple limits on the availability of surface water. In addition, we have worked on waterlogging and the effects on water quality (mainly salinity). In this paper, a 'complex' water economy is examined: one in which demands grow differentially not only within but also between sectors, and in which there are limited opportunities to increase consumptive use. In particular, the growth of high-value irrigated crop production within the case-study basins, together with rapidly growing urban areas, provides a rich context in which to examine the general problem of water management at the basin level. At the same time, long-term aridity has made the eco-environment of the basins located on the coast of Peru very vulnerable, and the exploitation and immediate use of water resources have further deteriorated the situation. The methodology presented is optimization with embedded simulation: basin-wide simulation of flows, water balances, and crop growth is embedded within the optimization of water allocation, reservoir operation, and irrigation scheduling. The modeling framework is developed from a network of river basins that includes multiple supply nodes (reservoirs, aquifers, watercourses, etc.) and multiple demand sites along the river, including sites of consumptive use for agricultural, municipal, and industrial purposes as well as instream water uses on the coast of Peru. The economic benefits associated with water use are evaluated for different demand-management instruments, including water rights, based on the production and benefit functions of water use in the urban, agricultural, and industrial sectors. This work represents a new effort to analyze the use of water at the regional level and to evaluate the modernization of integrated water resources management and socio-economic territorial development in Peru. It will also allow the establishment of policies to improve the implementation of integrated water resources management and development. Input-output analysis is essential to present a theory of the production process based on a particular type of production function. This work also presents a Computable General Equilibrium (CGE) version of the economic model for water resource policy analysis, specifically designed for analyzing large-scale water management. As the platform for CGE simulation, GEMPACK, a flexible system for solving CGE models, is used to formulate and solve the CGE model through the percentage-change approach; GEMPACK automates the process of translating the model specification into a model solution program.
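
As a toy illustration of the allocation problem that the optimization-with-embedded-simulation framework addresses, the sketch below solves a single-period linear program for one hypothetical coastal basin; the benefit coefficients, supply, and demand caps are invented, and this is not the authors' CGE/GEMPACK model.

```python
from scipy.optimize import linprog

# Hypothetical single-period allocation for one coastal basin (hm3/season).
# Decision variables: water delivered to agriculture, municipal, industry.
benefit = [0.08, 0.30, 0.20]          # net benefit per hm3 (illustrative)
c = [-b for b in benefit]             # linprog minimizes, so negate benefits

A_ub = [[1, 1, 1]]                    # total use <= available supply
b_ub = [450.0]                        # surface + groundwater available
bounds = [(0, 380.0), (0, 90.0), (0, 60.0)]   # per-sector demand ceilings

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds, method="highs")
ag, mun, ind = res.x
print(f"agriculture={ag:.0f}  municipal={mun:.0f}  industry={ind:.0f} hm3")
print(f"total benefit = {-res.fun:.1f} (monetary units)")
```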

Keywords: water economy, simulation, modeling, integration

Procedia PDF Downloads 127
18 Advancing Early Intervention Strategies for United States Adolescents and Young Adults with Schizophrenia in the Post-COVID-19 Era

Authors: Peggy M. Randon, Lisa Randon

Abstract:

Introduction: The post-COVID-19 era has presented unique challenges for addressing complex mental health issues, particularly due to exacerbated stress, increased social isolation, and disrupted continuity of care. This article outlines relevant health disparities and policy implications within the context of the United States while maintaining international relevance. Methods: A comprehensive literature review (including studies, reports, and policy documents) was conducted to examine concerns related to childhood-onset schizophrenia and the impact on patients and their families. Qualitative and quantitative data were synthesized to provide insights into the complex etiology of schizophrenia, the effects of the pandemic, and the challenges faced by socioeconomically disadvantaged populations. Case studies were employed to illustrate real-world examples and areas requiring policy reform. Results: Early intervention in childhood is crucial for preventing or mitigating the long-term impact of complex psychotic disorders, particularly schizophrenia. A comprehensive understanding of the genetic, environmental, and physiological factors contributing to the development of schizophrenia is essential. The COVID-19 pandemic worsened symptoms and disrupted treatment for many adolescent patients with schizophrenia, emphasizing the need for adaptive interventions and the utilization of virtual platforms. Health disparities, including stigma, financial constraints, and language or cultural barriers, further limit access to care, especially for socioeconomically disadvantaged populations. Policy implications: Current US health policies inadequately support patients with schizophrenia. The limited availability of longitudinal care, insufficient resources for families, and stigmatization represent ongoing policy challenges. Addressing these issues necessitates increased research funding, improved access to affordable treatment plans, and cultural competency training for healthcare providers. Public awareness campaigns are crucial to promote knowledge, awareness, and acceptance of mental health disorders. Conclusion: The unique challenges faced by children and families in the US affected by schizophrenia and other psychotic disorders have yet to be adequately addressed on institutional and systemic levels. The relevance of findings to an international audience is emphasized by examining the complex factors contributing to the onset of psychotic disorders and their global policy implications. The broad impact of the COVID-19 pandemic on mental health underscores the need for adaptive interventions and global responses. Addressing policy challenges, improving access to care, and reducing the stigma associated with mental health disorders are crucial steps toward enhancing the lives of adolescents and young adults with schizophrenia and their family members. The implementation of virtual platforms can help overcome barriers and ensure equitable access to support and resources for all patients, enabling them to lead healthy and fulfilling lives.

Keywords: childhood, schizophrenia, policy, United States, health disparities

Procedia PDF Downloads 48
17 Foreseen the Future: Human Factors Integration in European Horizon Projects

Authors: José Manuel Palma, Paula Pereira, Margarida Tomás

Abstract:

The development of new technologies such as artificial intelligence, smart sensing, robotics, cobotics, and intelligent machinery must integrate human factors to address the need to optimize systems and processes, thereby contributing to the creation of a safe and accident-free work environment. Human Factors Integration (HFI) consistently poses a challenge for organizations when applied to daily operations. The AGILEHAND and FORTIS projects are grounded in the development of cutting-edge technology for Industry 4.0 and 5.0. AGILEHAND aims to create advanced technologies for autonomously sorting, handling, and packaging soft and deformable products, whereas FORTIS focuses on developing a comprehensive Human-Robot Interaction (HRI) solution. The two projects employ different approaches to explore HFI. AGILEHAND is mainly empirical, involving a comparison between current and future working conditions, coupled with an understanding of best practices and the enhancement of safety aspects, primarily through management. FORTIS applies HFI throughout the project, developing a human-centric approach that includes understanding human behavior, perceiving activities, and facilitating contextual human-robot information exchange. Its intervention is holistic, merging technology with the physical and social contexts, based on a total safety culture model. In AGILEHAND, we will identify emergent safety risks and challenges, their causes, and how to overcome them by means of interviews, questionnaires, literature review, and case studies. Findings and results will be presented in the handbook "Strategies for Workers' Skills Development, Health and Safety, Communication and Engagement". The FORTIS project will implement continuous monitoring and guidance of activities, with a critical focus on early detection and elimination (or mitigation) of risks associated with the new technology, as well as guidance to comply with European Union safety and privacy regulations, ensuring HFI and thereby contributing to an optimized, safe work environment. To achieve this, we will embed safety by design, apply questionnaires, perform site visits, provide risk assessments, and closely track progress while suggesting and recommending best practices. The outcomes of these measures will be compiled in the project deliverable titled "Human Safety and Privacy Measures". These projects received funding from the European Union's Horizon 2020/Horizon Europe research and innovation programme under grant agreements No 101092043 (AGILEHAND) and No 101135707 (FORTIS).

Keywords: human factors integration, automation, digitalization, human robot interaction, industry 4.0 and 5.0

Procedia PDF Downloads 26
16 Energy Audit and Renovation Scenarios for a Historical Building in Rome: A Pilot Case Towards the Zero Emission Building Goal

Authors: Domenico Palladino, Nicolandrea Calabrese, Francesca Caffari, Giulia Centi, Francesca Margiotta, Giovanni Murano, Laura Ronchetti, Paolo Signoretti, Lisa Volpe, Silvia Di Turi

Abstract:

The aim of achieving a fully decarbonized building stock by 2050 stands as one of the most challenging issues within the spectrum of energy and climate objectives. Numerous strategies are imperative, particularly emphasizing the reduction and optimization of energy demand. Ensuring the high energy performance of buildings emerges as a top priority, with measures aimed at cutting energy consumption. Concurrently, it is imperative to decrease greenhouse gas emissions by using renewable energy sources for on-site energy production, thereby striving for an energy balance leading towards zero-emission buildings. Italy's building stock is predominantly composed of old buildings, many of which hold historical significance and are subject to stringent preservation and conservation regulations. Attaining high levels of energy efficiency and reducing CO2 emissions in such buildings poses a considerable challenge, given their unique characteristics and the imperative to adhere to principles of conservation and restoration. Additionally, conducting a meticulous analysis of these buildings' current state is crucial for accurately quantifying their energy performance and predicting the potential impacts of proposed renovation strategies on energy consumption reduction. Within this framework, the paper presents a pilot case in Rome, outlining a methodological approach for the renovation of historic buildings towards achieving the Zero Emission Building (ZEB) objective. The building has a mixed function, with offices, a conference hall, and an exhibition area. The building envelope is made of historical and precious materials used as cladding, which must be preserved. A thorough understanding of the building's current condition serves as a prerequisite for analyzing its energy performance. This involves conducting comprehensive archival research, undertaking on-site diagnostic examinations to characterize the building envelope and its systems, and evaluating actual energy usage data derived from energy bills. An energy audit and energy simulations are the first step in the analysis, assessing the energy performance of the current state. Subsequently, different renovation scenarios are proposed, encompassing advanced building techniques, to pinpoint the key actions necessary for improving the mechanical systems, the automation and control systems, and the integration of renewable energy production. These scenarios entail different levels of renovation, ranging from meeting minimum energy performance goals to achieving the highest possible energy efficiency level. The proposed interventions are meticulously analyzed and compared to ascertain the feasibility of attaining the Zero Emission Building objective. In conclusion, the paper provides valuable insights that can be extrapolated to inform a broader approach towards the energy-efficient refurbishment of historical buildings that may have limited potential for renovation of their building envelopes. By adopting a methodical and nuanced approach, it is possible to reconcile the imperative of preserving cultural heritage with the pressing need to transition towards a sustainable, low-carbon future.

Keywords: energy conservation and transition, energy efficiency in historical buildings, buildings energy performance, energy retrofitting, zero emission buildings, energy simulation

Procedia PDF Downloads 23
15 Safety and Maternal Anxiety in Mother's and Baby's Sleep: Cross-sectional Study

Authors: Rayanne Branco Dos Santos Lima, Lorena Pinheiro Barbosa, Kamila Ferreira Lima, Victor Manuel Tegoma Ruiz, Monyka Brito Lima Dos Santos, Maria Wendiane Gueiros Gaspar, Luzia Camila Coelho Ferreira, Leandro Cardozo Dos Santos Brito, Deyse Maria Alves Rocha

Abstract:

Introduction: The lack of regulation of the baby's sleep-wake pattern in the first years of life affects the health of thousands of women. Maternal sleep deprivation can trigger or aggravate psychosomatic problems such as depression, anxiety, and stress that can directly influence maternal security, with consequences for the baby's and mother's sleep. Such conditions can affect the family's quality of life and child development. Objective: To correlate maternal security with maternal state anxiety scores and the mother's and baby's total sleep time. Method: Cross-sectional study carried out with 96 mothers of babies aged 10 to 24 months, accompanied by nursing professionals linked to a federal university in Northeast Brazil. The study variables were maternal security, maternal state anxiety scores, infant sleep latency and sleep time, and the total nocturnal sleep time of mother and infant. Maternal security was measured using a four-point Likert scale (1 = not at all safe, 2 = somewhat safe, 3 = very safe, 4 = completely safe). Maternal anxiety was measured by the State-Trait Anxiety Inventory, state-anxiety subscale, whose scores vary from 20 to 80 points; the higher the score, the higher the anxiety level. Scores below 33 are considered mild; from 33 to 49, moderate; and above 49, high. For total nocturnal sleep time, values between 7-9 hours of sleep were considered adequate for mothers and values between 9-12 hours for the baby, according to the guidelines of the National Sleep Foundation. A sleep latency time equal to or less than 20 min was considered adequate. It is noteworthy that the latency time and the nocturnal sleep time of the mother and the baby were obtained from the mother's subjective report. To correlate the data, Spearman's correlation was used in the statistical package R version 3.6.3. Results: 96 women and babies participated, aged 22 to 38 years (mean 30.8) and 10 to 24 months (mean 14.7), respectively. The mean maternal security score was 2.89 (unsafe); mean maternal state anxiety scores were 43.75 (moderate anxiety). The babies' average sleep latency time was 39.6 min (>20 min). The mean sleep times of the mother and baby were, respectively, 6 h 42 min and 8 h 19 min, both less than the recommended nocturnal sleep time. Maternal security was positively correlated with maternal state anxiety scores (rho = 0.266, p = 0.009) and negatively correlated with infant sleep latency (rho = -0.30, p = 0.003). Baby sleep time was positively correlated with maternal sleep time (rho = 0.46, p < 0.001). Conclusion: The more secure the mothers considered themselves, the higher the anxiety scores and the shorter the baby's sleep latency. Also, the longer the baby sleeps, the longer the mother sleeps. Thus, interventions are needed to promote the quality and efficiency of sleep for both mother and baby.
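
The study computed Spearman correlations in R 3.6.3; a minimal equivalent check in Python, with randomly generated stand-in values rather than the study data, would look like this:

```python
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(0)

# Hypothetical values standing in for the study variables (n = 96)
security = rng.integers(1, 5, size=96)           # 4-point Likert scale
anxiety = rng.integers(20, 81, size=96)          # STAI state scores
latency_min = rng.normal(39.6, 10.0, size=96)    # infant sleep latency (min)

rho, p = spearmanr(security, anxiety)
print(f"security vs anxiety: rho={rho:.2f}, p={p:.3f}")

rho, p = spearmanr(security, latency_min)
print(f"security vs infant sleep latency: rho={rho:.2f}, p={p:.3f}")
```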

Keywords: sleep, anxiety, infant, mother-child relations

Procedia PDF Downloads 70
14 Climate Change Law and Transnational Corporations

Authors: Manuel Jose Oyson

Abstract:

The Intergovernmental Panel on Climate Change (IPCC) warned in its most recent report that the entire world must "both mitigate and adapt to climate change if it is to effectively avoid harmful climate impacts." The IPCC observed "with high confidence" a more rapid rise in total anthropogenic greenhouse gas (GHG) emissions from 2000 to 2010 than in the past three decades, which "were the highest in human history"; if left unchecked, this will entail a continuing process of global warming and can alter the climate system. Current efforts to respond to the threat of global warming, however, such as the United Nations Framework Convention on Climate Change and the Kyoto Protocol, have focused on states and fail to involve Transnational Corporations (TNCs), which are responsible for a vast amount of GHG emissions. Involving TNCs in the search for solutions to climate change is consistent with the acknowledgment by contemporary international law that there is an international role for other international persons, including TNCs, and departs from the traditional "state-centric" response to climate change. Shifting the focus on GHG emissions away from states recognises that the activities of TNCs "are not bound by national borders" and that the international movement of goods meets the needs of consumers worldwide. Although there is no legally binding instrument that covers TNC activities or legal responsibilities generally, TNCs have increasingly been made legally responsible under international law for violations of human rights, exploitation of workers, and environmental damage, but not for climate change damage. Imposing on TNCs a legally binding obligation to reduce their GHG emissions or a legal liability for climate change damage is arguably formidable and unlikely in the absence of a recognisable source of obligation in international law or municipal law. Instead, recourse to "soft law" and non-legally binding instruments may be a way forward for TNCs to reduce their GHG emissions and help address climate change. Various studies have noted the positive effects of voluntary approaches, and TNCs have in recent decades voluntarily committed to "soft law" international agreements. This development reflects a growing recognition among corporations in general, and TNCs in particular, of their corporate social responsibility (CSR). While CSR used to be the domain of "small, offbeat companies", it has now become part of the mainstream. The paper argues that TNCs must voluntarily commit to reducing their GHG emissions and helping address climate change as part of their CSR. First, as a serious "global commons problem", climate change requires international cooperation from multiple actors, including TNCs. Second, TNCs are not innocent bystanders but are responsible for a large part of GHG emissions across their vast global operations. Third, TNCs have the capability to help solve the problem of climate change. Even assuming, arguendo, that TNCs did not strongly contribute to the problem of climate change, society would have valid expectations for them to use their capabilities, knowledge base, and advanced technologies to help address the problem. It would seem unthinkable for TNCs to do nothing while the global environment fractures.

Keywords: climate change law, corporate social responsibility, greenhouse gas emissions, transnational corporations

Procedia PDF Downloads 325
13 Numerical Modeling of Phase Change Materials Walls under Reunion Island's Tropical Weather

Authors: Lionel Trovalet, Lisa Liu, Dimitri Bigot, Nadia Hammami, Jean-Pierre Habas, Bruno Malet-Damour

Abstract:

The MCP-iBAT¹ project is being carried out to study the behavior of Phase Change Materials (PCM) integrated in building envelopes in a tropical environment. Through the phase transitions (melting and freezing) of the material, thermal energy can be absorbed or released. This process enables the regulation of indoor temperatures and the improvement of thermal comfort for the occupants. Most commercially available PCMs are better suited to temperate climates than to tropical climates. The case of Reunion Island is noteworthy as there are multiple micro-climates, which leads to our key question: developing one or several bio-based PCMs that cover the thermal needs of the different locations of the island. The present paper focuses on the numerical approach used to select PCM properties relevant to tropical areas. Numerical simulations have been carried out with two software tools: EnergyPlus™ and Isolab. The latter has been developed in the laboratory, using the implicit finite difference method, in order to evaluate different physical models. Both are thermal dynamic simulation (TDS) tools that predict the building's thermal behavior with one-dimensional heat transfers. The parameters used in this study are the construction's characteristics (dimensions and materials) and the description of the environment (meteorological data and building surroundings). The building is modeled in accordance with the experimental setup. It is divided into two rooms, cells A and B, with the same dimensions. Cell A is the reference, while in cell B a layer of commercial PCM (Thermo Confort from MCI Technologies) has been applied to the inner surface of the north wall. Sensors are installed in each room to record temperatures, heat flows, and humidity rates. The collected data are used for comparison with the numerical results. Our strategy is to implement two similar buildings at different altitudes (Saint-Pierre: 70 m and Le Tampon: 520 m) to measure different temperature ranges; we are therefore able to collect data for various seasons during a condensed time period. The following methodology is used to validate the numerical models: calibration of the thermal and PCM models in EnergyPlus™ and Isolab based on the experimental measurements, then numerical testing with a sensitivity analysis of the parameters to reach the targeted indoor temperatures. The calibration relies on the past ten months of measurements (from September 2020 to June 2021), with a focus on a one-week study in November (beginning of summer), when the effect of the PCM on inner surface temperatures is more visible. A first simulation with the PCM model of EnergyPlus gave results approaching the measurements with a mean error of 5%. The property studied in this paper is the melting temperature of the PCM. By determining the representative temperatures of winter, summer, and the inter-seasons from past annual weather data, it is possible to build a numerical model of multi-layered PCM; the combined properties of the materials will then provide an optimal scenario for the application of PCM in tropical areas. Future work will focus on the development of bio-based PCMs with the selected properties, followed by experimental and numerical validation of the materials. ¹Matériaux à Changement de Phase, une innovation pour le Bâti Tropical (Phase Change Materials, an innovation for tropical buildings)
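
As a minimal sketch of the kind of one-dimensional implicit finite-difference wall model mentioned for Isolab, the code below treats the PCM with an apparent (effective) heat capacity around its melting range; the geometry, material values, and boundary temperatures are illustrative, not the project's.

```python
import numpy as np

# 1D implicit finite-difference conduction through a wall with a PCM layer,
# using an apparent heat capacity spread over the melting range.
# All values below are illustrative, not the project's actual materials.
L, n = 0.12, 60                        # wall thickness (m), number of nodes
dx, dt = L / (n - 1), 60.0             # grid step (m), time step (s)
k, rho = 0.2, 900.0                    # conductivity (W/m.K), density (kg/m3)
cp_base, latent = 2000.0, 180e3        # specific heat (J/kg.K), latent heat (J/kg)
T_melt, half_range = 27.0, 1.0         # melting temperature, half-width (degC)
T_out, T_in = 35.0, 25.0               # fixed boundary temperatures (degC)

T = np.full(n, 25.0)
for step in range(3600 // int(dt) * 24):          # simulate 24 hours
    # apparent heat capacity: base cp plus latent heat inside the melt range
    cp = cp_base + latent / (2 * half_range) * (np.abs(T - T_melt) < half_range)
    r = k * dt / (rho * cp * dx**2)
    A = np.diag(1 + 2 * r) + np.diag(-r[1:], -1) + np.diag(-r[:-1], 1)
    A[0, :], A[-1, :] = 0, 0
    A[0, 0] = A[-1, -1] = 1.0                      # Dirichlet boundaries
    b = T.copy(); b[0], b[-1] = T_out, T_in
    T = np.linalg.solve(A, b)

print("mid-wall temperature after 24 h:", round(T[n // 2], 2), "degC")
```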

Keywords: EnergyPlus, multi-layer PCM, phase change materials, tropical area

Procedia PDF Downloads 65
12 Alkaloid Levels in Experimental Lines of Ryegrass in Southern Chile

Authors: Leonardo Parra, Manuel Chacón-Fuentes, Andrés Quiroz

Abstract:

One of the most important factors in beef and dairy production in the world, as well as in Chile, is the correct choice of cultivars or mixtures of forage grasses and legumes to ensure high yields and quality of grassland. A major problem, however, is the persistence of the grasses as a result of the action of different hypogeous and epigean pests. The complex of insect pests associated with grassland includes white grubs (Hylamorpha elegans, Phytoloema herrmanni), blackworm (Dalaca pallens), and the Argentine stem weevil (Listronotus bonariensis). In Chile, the principal strategy for controlling these pests is chemical control through the use of synthetic insecticides; however, the underground feeding habits of the larvae and the flight activity of the adults make this method uneconomic. Furthermore, due to problems including environmental degradation, development of resistance, and chemical residues, there is worldwide interest in alternative, environmentally friendly pest control methods. In this sense, in recent years there has been increasing interest in determining the role of endophytic fungi in controlling epigean and hypogeous pests. Endophytes of ryegrass (Lolium perenne) establish a biotrophic relationship with the host, defined as a mutualistic symbiosis. The plant-fungus association produces a "cocktail of alkaloids" in which peramine is the main toxic substance present in endophyte-infected ryegrass and is responsible for reducing damage by L. bonariensis. In the last decade, few studies have addressed the effectiveness of new endophyte-carrying ryegrass cultivars in controlling insect pests. Therefore, the aim of this research was to evaluate the content of alkaloids such as peramine and lolitrem B in new experimental lines of ryegrass that could be used in grasslands of southern Chile. For this purpose, during 2016, ryegrass plants of six experimental lines and two commercial cultivars sown at the Instituto de Investigaciones Agropecuarias Carillanca (Vilcún, Chile) were collected and subjected to chemical extraction to identify and quantify peramine and lolitrem B by high-performance liquid chromatography (HPLC). The results indicated that the experimental lines EL-1 and EL-3 had a higher content of peramine (0.25 and 0.43 ppm, respectively) than of lolitrem B (0.061 and 0.19 ppm, respectively). Furthermore, the highest contents of lolitrem B were detected in EL-4 and the commercial cultivar Alto (positive control), with 0.08 and 0.17 ppm, respectively. Peramine and lolitrem B were not detected in the cultivar Jumbo (negative control). These results suggest that EL-3 has potential as a future cultivar because of its high content of peramine, the alkaloid responsible for controlling the insect pest. However, its actual role against the complex of insects attacking ryegrass grasslands should be evaluated. The information obtained in this research could be used to improve control strategies against hypogeous and epigean pests of grassland in southern Chile and to reduce the use of synthetic pesticides.

Keywords: HPLC, Lolitrem B, peramine, pest

Procedia PDF Downloads 214
11 Enzymatic Determination of Limonene in Red Clover Genotypes

Authors: Andrés Quiroz, Emilio Hormazabal, Ana Mutis, Fernando Ortega, Manuel Chacón-Fuentes, Leonardo Parra

Abstract:

Red clover (Trifolium pratense L.) is an important forage species in temperate regions of the world. The main limitation of this species worldwide is a lack of persistence related to the high mortality of plants due to a complex of biotic and abiotic factors, which determines a life span of two or three seasons. Because of the importance of red clover in Chile, a red clover breeding program was started at the INIA Carillanca Research Center in 1989, with the main objective of improving plant survival, forage yield, and persistence. The main criteria for selecting new varieties have been agronomic parameters and biotic factors. The main biotic factor associated with red clover mortality in Chile is Hylastinus obscurus (Coleoptera: Curculionidae). Both larvae and adults feed on the roots, causing weakening and subsequent death of clover plants. Pesticides have not been successful in controlling infestations of this root borer; therefore, alternative strategies for controlling this pest are a high priority for red clover producers. The role of semiochemicals in the interaction between H. obscurus and red clover plants has been widely studied by our group. Specifically, limonene, identified from red clover foliage, elicits repellency in the root borer. Limonene is generated in the plant by two independent biosynthetic pathways: the mevalonic acid pathway and the deoxyxylulose phosphate pathway. Mevalonate pathway enzymes are localized in the cytosol, whereas the deoxyxylulose phosphate pathway enzymes are found in plastids. In summary, limonene can be determined by an enzymatic bioassay using GPP as substrate and by limonene synthase expression. Therefore, the main objective of this work was to study the genetic variation of limonene in material provided by INIA's red clover breeding program. Protein extraction was carried out by homogenizing 250 mg of leaf tissue, suspending it in 6 mL of extraction buffer (PEG 1500, PVP-30, 20 mM MgCl2 and antioxidants), and stirring on ice for 20 min. After centrifugation, aliquots of 2.5 mL were desalted on PD-10 columns, resulting in a final volume of 3.5 mL. Protein determination was performed according to Bradford with BSA as a standard. Monoterpene synthase assays were performed with 50 µL of protein extract transferred into gas-tight 2 mL crimp-seal vials after the addition of 4 µL MgCl₂ and 41 µL assay buffer. The assay was started by adding 5 µL of a GPP solution, and the mixture was incubated for 30 min at 40 °C. Biosynthesized limonene was quantified in a GC equipped with a chiral column, using synthetic R- and S-limonene standards. The enzymatic production of R- and S-limonene from different Superqueli-Carillanca genotypes is shown in this work. Preliminary results showed significant differences in limonene content among the genotypes analyzed. These results constitute an important basis for selecting genotypes with a high content of this monoterpene, which is repellent towards H. obscurus.
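
As an illustration of quantification against synthetic standards, the sketch below fits an external-standard calibration line to GC peak areas and converts sample areas to concentrations; all numbers are invented for the example.

```python
import numpy as np

# External-standard calibration: peak area vs. concentration of synthetic
# limonene standards, then quantification of assay samples.
# All numbers below are invented for illustration.
std_conc = np.array([0.5, 1.0, 2.5, 5.0, 10.0])            # ug/mL standards
std_area = np.array([1.1e4, 2.3e4, 5.6e4, 1.1e5, 2.2e5])   # GC peak areas

slope, intercept = np.polyfit(std_conc, std_area, 1)        # linear calibration
r2 = np.corrcoef(std_conc, std_area)[0, 1] ** 2
print(f"calibration: area = {slope:.1f} * conc + {intercept:.1f}, R2 = {r2:.4f}")

sample_areas = np.array([3.2e4, 7.9e4, 4.5e4])              # triplicate assays
conc = (sample_areas - intercept) / slope                    # ug/mL in extract
print("estimated limonene (ug/mL):", np.round(conc, 2),
      "mean:", round(conc.mean(), 2), "+/-", round(conc.std(ddof=1), 2))
```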

Keywords: head space, limonene enzymatic determination, red clover, Hylastinus obscurus

Procedia PDF Downloads 238
10 Hydrodynamics in Wetlands of Brazilian Savanna: Electrical Tomography and Geoprocessing

Authors: Lucas M. Furlan, Cesar A. Moreira, Jepherson F. Sales, Guilherme T. Bueno, Manuel E. Ferreira, Carla V. S. Coelho, Vania Rosolen

Abstract:

Located in the western part of the State of Minas Gerais, Brazil, the study area consists of a savanna environment represented by a sedimentary plateau and a soil cover composed of lateritic and hydromorphic soils; in the latter, deferruginization and the concentration of high-alumina clays occur, which are exploited as refractory material. In the hydromorphic topographic depressions (wetlands), the hydropedological relationships are little known, but it is observed that, in times of rainfall, the depressed region behaves like a natural seasonal reservoir, which suggests that the wetlands on the surface of the plateau are places of aquifer recharge. Aquifer recharge areas are extremely important for the sustainable social, economic, and environmental development of societies. The understanding of hydrodynamics in relation to the functioning of the ferruginous and hydromorphic lateritic soil system in the savanna environment is a subject rarely explored in the literature, especially through the joint application of geoprocessing by UAV (Unmanned Aerial Vehicle) and electrical tomography. The objective of this work is to understand the hydrogeological dynamics in a wetland (with an area of 426,064 m²) in the Brazilian savanna, as well as the subsurface architecture of hydromorphic depressions in relation to aquifer recharge. The wetland was compartmentalized into three different regions according to the geoprocessing results, and hydraulic conductivity studies were performed in each of these three portions. Electrical tomography was performed on nine lines 80 meters long and spaced 10 meters apart (direction N45), plus one 80-meter line perpendicular to all the others. With these data, it was possible to generate a 3D cube. The integrated analysis showed that the area behaves like a natural seasonal reservoir in the months of greater precipitation (December: 289 mm; January: 277.9 mm; February: 213.2 mm), because the hydraulic conductivity is very low in all areas. For the aerial images, geotag correction was performed, that is, correction of the image coordinates using the corrected coordinates from the Precise Point Positioning service of the Brazilian Institute of Geography and Statistics (IBGE-PPP). The orthomosaic and the digital surface model (DSM) were then generated, which, with specific geoprocessing, yielded the volume of water that the wetland can contain: 780,922 m³ in total, 265,205 m³ in the region with intermediate flooding, and 49,140 m³ in the central region, where a greater accumulation of water was observed. Through the electrical tomography, it was possible to identify that, down to a depth of 6 meters, the water infiltrates vertically in the central region. From 8 meters depth, the water encounters a more resistive layer and the infiltration begins to occur horizontally, tending to concentrate the aquifer recharge to the northeast and southwest of the wetland. The hydrodynamics of the area is complex and poses many challenges to its understanding. The next step is to relate the hydrodynamics to the evolution of the landscape, with the enrichment of high-alumina clays, and to propose a management model for the seasonal reservoir.
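
As a sketch of how a storage volume can be derived from a DSM, the code below sums the depth below a chosen water level over flooded cells and multiplies by the cell area; the DSM here is synthetic, and the cell size and water level are placeholders, not the UAV survey values.

```python
import numpy as np

# Storage volume from a digital surface model (DSM): sum the depth below a
# chosen water level over the flooded cells, times the cell area.
# The DSM below is synthetic; in practice it would be read from the UAV raster.
rng = np.random.default_rng(1)
cell = 0.5                                        # cell size in metres (placeholder)
dsm = 820.0 + rng.normal(0.0, 0.8, (400, 400))    # synthetic terrain elevations (m)

water_level = 821.0                               # assumed full-reservoir level (m)
depth = np.clip(water_level - dsm, 0.0, None)     # water depth per cell (m)
inside = depth > 0                                # cells that would be flooded

volume = depth[inside].sum() * cell**2            # m3
area = inside.sum() * cell**2                     # m2
print(f"flooded area ~ {area:,.0f} m2, stored volume ~ {volume:,.0f} m3")
```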

Keywords: electrical tomography, hydropedology, unmanned aerial vehicle, water resources management

Procedia PDF Downloads 113
9 Modelling of Reactive Methodologies in Auto-Scaling Time-Sensitive Services With a MAPE-K Architecture

Authors: Óscar Muñoz Garrigós, José Manuel Bernabeu Aubán

Abstract:

Time-sensitive services are the basis of the cloud services industry. Keeping service saturation low is essential for controlling response time. All auto-scalable services make use of reactive auto-scaling; however, reactive auto-scaling has been the subject of few in-depth studies. This work presents a model for reactive auto-scaling methodologies with a MAPE-K architecture. Queuing theory can compute different properties of static services but lacks some parameters related to the transition between models; our model uses queuing theory parameters to describe that transition. It associates the MAPE-K-related times, the sampling frequency, the cooldown period, the number of requests that an instance can handle per unit of time, the number of incoming requests at a time instant, and a function that describes the acceleration in the service's ability to handle more requests. This model is later used as a solution to horizontally auto-scale time-sensitive services composed of microservices, re-evaluating the model's parameters periodically to allocate resources. The solution requires limiting the acceleration of the growth in the number of incoming requests to keep a constrained response time; business benefits determine such limits. The solution can add a dynamic number of instances and remains valid under different system sizes. The study includes performance recommendations to improve results according to the incoming load shape and business benefits. The proposed methodology is tested in a simulation. The simulator contains a load generator and a service composed of two microservices, where the frontend microservice depends on a backend microservice with a 1:1 request relation ratio. A common request takes 2.3 seconds to be computed by the service and is discarded if it takes more than 7 seconds. Both microservices contain a load balancer that assigns requests to the least loaded instance and preemptively discards requests if they cannot be finished in time, to prevent resource saturation. When load decreases, instances with lower load are kept in a backlog where no more requests are assigned to them. If the load grows and an instance in the backlog is required, it returns to the running state, but if it finishes the computation of all its requests and is no longer required, it is permanently deallocated. A few load patterns are required to represent the worst-case scenarios for reactive systems; the following scenarios test response times, resource consumption, and business costs. The first scenario is a burst-load scenario: all methodologies will discard requests if the burst is fast enough, so this scenario focuses on the number of discarded requests and the variance of the response time. The second scenario contains sudden load drops followed by bursts, to observe how the methodology behaves when releasing resources that are later required. The third scenario contains diverse growth accelerations in the number of incoming requests, to observe how approaches that add a different number of instances can handle the load with less business cost. The proposed methodology is compared against a multiple-threshold CPU methodology allocating/deallocating 10 or 20 instances, outperforming the competitor in all studied metrics.
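
A minimal sketch of the reactive MAPE-K loop described above: monitor the arrival rate, analyze saturation against per-instance capacity, plan the number of instances subject to a cooldown, and execute the change. The capacity, utilization target, and load function are illustrative, not the paper's parameters.

```python
import math
import random

# Minimal reactive MAPE-K auto-scaling loop (illustrative parameters only).
CAPACITY = 10.0        # requests/s one instance can handle
TARGET_UTIL = 0.7      # keep saturation below this to protect response time
COOLDOWN_S = 30        # seconds between scaling actions
SAMPLE_S = 5           # monitoring/sampling period

instances, last_change = 2, -COOLDOWN_S

def monitor(t):
    """Stand-in for metric collection: incoming request rate at time t."""
    return 20 + 15 * math.sin(t / 60) + random.uniform(-3, 3)

for tick in range(60):
    now = tick * SAMPLE_S
    rate = monitor(now)                                            # Monitor
    utilization = rate / (instances * CAPACITY)                    # Analyze
    desired = max(1, math.ceil(rate / (CAPACITY * TARGET_UTIL)))   # Plan
    if desired != instances and now - last_change >= COOLDOWN_S:   # Execute
        print(f"t={now:3d}s rate={rate:5.1f} req/s util={utilization:.2f} "
              f"scale {instances} -> {desired}")
        instances, last_change = desired, now
```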

Keywords: reactive auto-scaling, auto-scaling, microservices, cloud computing

Procedia PDF Downloads 67
8 Hydraulic Headloss in Plastic Drainage Pipes at Full and Partially Full Flow

Authors: Velitchko G. Tzatchkov, Petronilo E. Cortes-Mejia, J. Manuel Rodriguez-Varela, Jesus Figueroa-Vazquez

Abstract:

Hydraulic headloss, expressed through the friction factor f and Manning's coefficient n, is an important parameter in designing drainage pipes. Their values are normally taken from manufacturer recommendations, often without sufficient experimental support. To our knowledge, there is currently no standard procedure for hydraulically testing such pipes. As a result of research carried out at the Mexican Institute of Water Technology, a laboratory testing procedure was proposed and applied to 6- and 12-inch diameter polyvinyl chloride (PVC) and high-density polyethylene (HDPE) dual-wall drainage pipes. While the PVC pipe is characterized by naturally smooth interior and exterior walls, the dual-wall HDPE pipe has a corrugated exterior wall and, although considered smooth, a slightly wavy interior wall. The pipes were tested at full and partially full pipe flow conditions. The tests for full pipe flow were carried out on a 31.47 m long pipe at flow velocities between 0.11 and 4.61 m/s. Water was supplied by gravity from a 10 m high tank in some of the tests and from a 3.20 m high tank in the rest. Pressure was measured independently with piezometer readings and pressure transducers, and the flow rate was measured by an ultrasonic meter. For partially full pipe flow, the pipe was placed inside an existing 49.63 m long zero-slope (horizontal) channel. The flow depth was measured by piezometers located along the pipe for flow rates between 2.84 and 35.65 L/s, measured by a rectangular weir. The observed flow profiles were then compared to computer-generated theoretical gradually varied flow profiles for different Manning's n values. It was found that Manning's n, which is normally assumed constant for a given pipe material, in fact depends on flow velocity and pipe diameter for full pipe flow, and on flow depth for partially full pipe flow. Contrary to the expected higher values of n and f for the HDPE pipe, virtually the same values were obtained for the smooth-interior PVC pipe and the slightly wavy-interior HDPE pipe. The explanation for this was found in Henry Morris' theory for smooth turbulent conduit flow over isolated roughness elements. Following Morris, three categories of flow regime are possible in a rough conduit: isolated roughness (or semi-smooth turbulent) flow, wake interference (or hyper-turbulent) flow, and skimming (or quasi-smooth) flow. Isolated roughness flow is characterized by friction drag turbulence over the wall between the roughness elements and independent vortex generation and dissipation around each roughness element. In this regime, the wake and vortex generation zones at each element develop and dissipate before reaching the next element; the longitudinal spacing of the roughness elements and their height are the important influencing factors. Given the slightly wavy form of the HDPE pipe interior wall, the flow in this type of pipe belongs to this category. Based on that theory, an equation for the hydraulic friction factor was obtained. The obtained coefficient values will be used in the Mexican design standards.
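
As a sketch of how f and Manning's n can be back-calculated from a single full-pipe test run using the Darcy-Weisbach and Manning relations, with placeholder measurements rather than the study's data:

```python
import math

def full_pipe_f_and_n(head_loss_m, length_m, diameter_m, velocity_ms, g=9.81):
    """Back-calculate Darcy f and Manning n from one full-pipe test run."""
    slope = head_loss_m / length_m                 # friction slope S = hf / L
    # Darcy-Weisbach: hf = f (L/D) V^2 / (2g)  ->  f = 2 g D hf / (L V^2)
    f = 2 * g * diameter_m * head_loss_m / (length_m * velocity_ms**2)
    # Manning (SI): V = (1/n) R^(2/3) S^(1/2), with R = D/4 for a full pipe
    r_h = diameter_m / 4.0
    n = r_h ** (2.0 / 3.0) * math.sqrt(slope) / velocity_ms
    return f, n

# Placeholder measurement (not the study's data): 12-inch pipe, 31.47 m reach
f, n = full_pipe_f_and_n(head_loss_m=0.12, length_m=31.47,
                         diameter_m=0.3048, velocity_ms=1.5)
print(f"f = {f:.4f}, Manning n = {n:.4f}")
```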

Keywords: drainage plastic pipes, hydraulic headloss, hydraulic friction factor, Manning’s n

Procedia PDF Downloads 252
7 Experimental Study of the Behavior of Elongated Non-spherical Particles in Wall-Bounded Turbulent Flows

Authors: Manuel Alejandro Taborda Ceballos, Martin Sommerfeld

Abstract:

Transport phenomena and the dispersion of non-spherical particles in turbulent flows are found everywhere in industrial applications and processes. Powder handling, pollution control, pneumatic transport, and particle separation are just some examples where the particles encountered are not only spherical. These multiphase flows are wall-bounded and mostly highly turbulent, and the particles found in these processes are rarely spherical but may have various shapes (e.g., fibers and rods). Although research on the behavior of regular non-spherical particles in turbulent flows has been carried out for many years, it is still necessary to refine models, especially near walls, where the fiber-wall interaction completely changes particle behavior. Imaging-based experimental studies of dispersed particle-laden flows have been applied for many decades for detailed experimental analysis. These techniques have the advantage that they provide field information in two or three dimensions, but they have a lower temporal resolution than point-wise techniques such as PDA (phase-Doppler anemometry) and derivations therefrom. The imaging techniques applied to dispersed two-phase flows are extensions of classical PIV (particle image velocimetry) and PTV (particle tracking velocimetry), and the main emphasis has been the simultaneous measurement of the velocity fields of both phases. Such data should also provide adequate information for validating the proposed models. Available experimental studies on the behavior of non-spherical particles are uncommon and mostly based on planar light-sheet measurements. Especially for elongated non-spherical particles, however, three-dimensional measurements are needed to fully describe their motion and to provide sufficient information for the validation of numerical computations. To provide further detailed experimental results allowing a validation of numerical calculations of non-spherical particle dispersion in turbulent flows, a test facility was built around a horizontal closed water channel. Into this horizontal main flow, a small cross-jet laden with fiber-like particles was injected, driven solely by gravity. The dispersion of the fibers was measured by applying imaging techniques based on an LED array for backlighting and high-speed cameras. For obtaining the fluid velocity fields, almost neutrally buoyant tracer particles were used. The discrimination between tracers and fibers was based on image size, which was also the basis for determining fiber orientation with respect to the inertial coordinate system. The synchronous measurement of fluid velocity and fiber properties also allows the collection of statistics of fiber orientation, the velocity fields of tracers and fibers, the angular velocity of the fibers, and the orientation between the fiber and the instantaneous relative velocity. Consequently, an experimental study of the behavior of elongated non-spherical particles in wall-bounded turbulent flows was achieved. A comprehensive analysis was developed, especially for the near-wall region, where hydrodynamic wall interaction effects (e.g., collision or lubrication) and abrupt changes in particle rotational velocity occur. This allows the behavior of non-spherical particles to be predicted numerically afterwards within the framework of the Euler/Lagrange approach, where the particles are treated as "point particles".
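
A minimal sketch of the size-based discrimination between tracers and fibers and of the in-plane fiber orientation obtained from second-order image moments; the binary image and the size threshold are synthetic and illustrative.

```python
import numpy as np
from scipy import ndimage

# Discriminate tracers from fibers by projected image size, then estimate the
# in-plane fiber orientation from second-order moments of each fiber blob.
# The binary image and the size threshold below are synthetic/illustrative.
img = np.zeros((200, 200), dtype=bool)
img[100:103, 40:90] = True          # an elongated "fiber"
img[30:33, 30:33] = True            # a small "tracer"

labels, n = ndimage.label(img)
for blob in range(1, n + 1):
    ys, xs = np.nonzero(labels == blob)
    area = ys.size
    kind = "fiber" if area > 50 else "tracer"     # size threshold (assumed)
    if kind == "fiber":
        # orientation from the major eigenvector of the coordinate covariance
        cov = np.cov(np.vstack([xs, ys]))
        eigvals, eigvecs = np.linalg.eigh(cov)
        major = eigvecs[:, np.argmax(eigvals)]
        angle = np.degrees(np.arctan2(major[1], major[0]))
        print(f"blob {blob}: {kind}, area={area}px, angle={angle:.1f} deg")
    else:
        print(f"blob {blob}: {kind}, area={area}px")
```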

Keywords: crossflow, non-spherical particles, particle tracking velocimetry, PIV

Procedia PDF Downloads 55
6 Saving Lives from a Laptop: How to Produce a Live Virtual Media Briefing That Will Inform, Educate, and Protect Communities in Crisis

Authors: Cory B. Portner, Julie A. Grauert, Lisa M. Stromme, Shelby D. Anderson, Franji H. Mayes

Abstract:

Introduction: Washington state, in the Pacific Northwest of the United States, is internationally known for its technology industry, fisheries, agriculture, and vistas. On January 21, 2020, Washington state also became known as the first state with a confirmed COVID-19 case in the United States, thrusting the state into the international spotlight as the world came to grips with the global threat this disease presented. Tourism is Washington state’s fourth-largest industry. Tourism to the state generates over 1.8 billion dollars (USD) in local and state tax revenue and employs over 180,000 people. Communicating with residents, stakeholders, and visitors on the status of disease activity, prevention measures, and response updates was vital to stopping the pandemic and increasing compliance and awareness. Significance: In order to communicate vital public health updates, guidance implementation, and safety measures to the public, the Washington State Department of Health established routine live virtual media briefings to reach audiences via social media, internet television, and broadcast television. Through close partnership with regional broadcast news stations and the state public affairs news network, the Washington State Department of Health hosted 95 media briefings from January 2020 through September 2022 and continues to regularly host live virtual media briefings to accommodate the needs of the public and media. Methods: Our methods quickly evolved from hosting briefings in the cement closet of a military base to being able to produce and stream the briefings live from any home-office location. The content was tailored to the hot topic of the day and to reporters' questions and needs. Virtual media briefings hosted through inexpensive or free platforms online are extremely cost-effective: the only mandatory components are WiFi, a laptop, and a monitor. There is no longer a need for a fancy studio or expensive production software to achieve the goal of communicating credible, reliable information promptly. With minimal investment and a small learning curve, facilitators and panelists are able to host highly produced and engaging media availabilities from their living rooms. Results: The briefings quickly developed a reputation as the best source for local and national journalists to get the latest and most factually accurate information about the pandemic. At the height of the COVID-19 response, 135 unique media outlets logged on to participate in the briefings. The briefings typically featured 4-5 panelists, with as many as 9 experts in attendance to provide information and respond to media questions. Preparation was always a priority: Public Affairs staff for the Washington State Department of Health produced over 170 presenter remarks, including guidance on talking points for 63 expert guest panelists. Implications for Practice: Information is today’s most valuable currency. The ability to disseminate correct information urgently and on a wide scale is the most effective tool in crisis communication. Due to our role as the first state with a confirmed COVID-19 case, we were forced to develop the most accurate and effective way to get life-saving information to the public. The cost-effective, web-based methods we developed can be applied in any crisis to educate and protect communities under threat, ultimately saving lives from a laptop.

Keywords: crisis communications, public relations, media management, news media

Procedia PDF Downloads 151
5 Signature Bridge Design for the Port of Montreal

Authors: Juan Manuel Macia

Abstract:

The Montreal Port Authority (MPA) wanted to build a new road link via Souligny Avenue to improve the flow of goods transported by truck in the Viau Street area of Montreal and to mitigate the current traffic problems on Notre-Dame Street. To improve the project's integration with, and acceptance by, the neighboring residential surroundings, the design needed to include architectural integration, bringing artistic components to the bridge design along with landscaping components. The MPA's primary requirement was to provide direct truck access to the Port of Montreal, with a direct connection to the future Assomption Boulevard planned by the City of Montreal and, thus, direct access to Souligny Avenue. The MPA also required other key aspects to be considered for the proposal and development of the project, such as the layout of road and rail configurations, the reconstruction of underground structures, the relocation of power lines, the installation of lighting systems, the improvement of traffic signage and communication systems, the construction of new access ramps, pavement reconstruction, and a summary assessment of the structural capacity of an existing service tunnel. The evaluation of possible scenarios began by identifying all the constraints related to the numerous infrastructures located in the area of the future link between the port and the future extension of Souligny Avenue, involving interaction with several disciplines and technical specialties. Several viaduct- and tunnel-type geometries were studied to link the port road to the right-of-way north of Notre-Dame Street and to improve traffic flow at the railway corridor. The proposed design took into account the existing access points to the Port of Montreal, the built environment of the MPA site, the provincial and municipal rights-of-way, and the future Notre-Dame Street layout planned by the City of Montreal. These considerations required an engineering structure with a span of over 60 m to free up a corridor for the future urban fabric of Notre-Dame Street. The best option for crossing this span length was the design and construction of a curved bridge over Notre-Dame Street, essentially a structure with a deck formed by a reinforced concrete slab on steel box girders with a single span of 63.5 m. The foundation units were defined as pier-cap type abutments on drilled shafts to bedrock with rock sockets, with MSE-type walls at the approaches. The configuration of a single-span curved structure posed significant design and construction challenges, considering the major constraints of the project site, a design-for-durability approach, and the need to guarantee optimum performance over a 75-year service life in accordance with the client's needs and the recommendations and requirements defined by the standards used for the project. These aspects, together with the need to include architectural and artistic components, made it possible to design, build, and integrate a signature infrastructure project with a sustainable approach, from which the MPA, commuters, and the city of Montreal and its residents will benefit.

Keywords: curved bridge, steel box girder, medium span, simply supported, industrial and urban environment, architectural integration, design for durability

Procedia PDF Downloads 31
4 Biochemical and Antiviral Study of Peptides Isolated from Amaranthus hypochondriacus on Tomato Yellow Leaf Curl Virus Replication

Authors: José Silvestre Mendoza Figueroa, Anders Kvarnheden, Jesús Méndez Lozano, Edgar Antonio Rodríguez Negrete, Manuel Soriano García

Abstract:

Agroindustrial plants such as cereals and pseudocereals offer a substantial source of biomacromolecules, as they contain large amounts of proteins, polysaccharides, and lipids per gram of tissue in comparison with other plants. In particular, Amaranthus hypochondriacus seeds have high protein levels in comparison with other cereal and pseudocereal species, which makes the plant a good source of bioactive molecules such as peptides. Geminiviruses are a principal class of pathogens that cause important economic losses in crops, directly affecting the development and production of the plant. One such virus is Tomato yellow leaf curl virus (TYLCV), which mainly affects plants of the Solanaceae family, such as tomato. The symptoms of the disease are leaf curling, chlorosis, dwarfing, and floral abortion. The aim of this work was to obtain peptides, derived from enzymatic hydrolysis of globulins and albumins from amaranth seeds, that specifically recognize the replication origin in the TYLCV genome, and to test their antiviral activity on host plants with the goal of directly controlling this viral infection. Globulins and albumins from amaranth were extracted, the fraction was enzymatically digested with papain, and the aromatic peptide fraction was selected for further purification. Six peptides were tested against the replication origin (OR) using affinity assays, surface plasmon resonance, and fluorescence titration. Two of these peptides showed high affinity for the replication origin of the virus; dissociation constants were calculated and showed a specific interaction between the peptide AmPep1 and the OR. An in vitro replication test of total TYLCV DNA was performed, in which the peptide AmPep1 was added at different concentrations to the reaction system; viral DNA synthesis decreased as the peptide concentration increased. We also showed that the peptide can decrease the amount of the complementary DNA strand of the virus in Nicotiana benthamiana leaves, confirming that the peptide binds to the OR and that its expected mechanism of action is to decrease the replication rate of the viral genome. In an infection assay, N. benthamiana plants were agroinfected with TYLCV-Israel and TYLCV-Guasave. After systemic infection was confirmed, the peptide was infiltrated into newly infected leaves, and the plants treated with the peptide showed a decrease in virus symptoms and viral titer. To confirm the antiviral activity in a commercial crop, tomato plants were infected with TYLCV. After systemic infection was confirmed, plants were infiltrated with peptide solution as above, and symptom development was monitored 21 days after treatment, showing that tomato plants treated with peptides had lower symptom rates and viral titers. The peptide was also tested against another begomovirus, Pepper huasteco yellow vein virus (PHYVV-Guasave), showing a decrease in symptoms in infected N. benthamiana plants. The model of direct biochemical control of TYLCV infection shown in this work can be extrapolated to other begomovirus infections, and the methods reported here can be used for the design of antiviral agrochemicals against other plant virus infections.
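The dissociation constants mentioned above are typically extracted by fitting a binding model to the titration data. The sketch below is a generic, illustrative fit of a simple 1:1 binding isotherm to fluorescence-change data using scipy; the concentrations, signal values, and parameter names are placeholders and do not reproduce the measurements reported in this work.

```python
# Illustrative fit of a 1:1 binding isotherm, dF = dF_max * [P] / (Kd + [P]),
# to fluorescence-titration data. All numbers are placeholder values.
import numpy as np
from scipy.optimize import curve_fit

def binding_isotherm(p_conc, df_max, kd):
    return df_max * p_conc / (kd + p_conc)

peptide_uM = np.array([0.1, 0.25, 0.5, 1.0, 2.0, 5.0, 10.0])      # titrant, µM
delta_f = np.array([0.08, 0.18, 0.30, 0.46, 0.62, 0.80, 0.88])    # normalized signal change

popt, pcov = curve_fit(binding_isotherm, peptide_uM, delta_f, p0=[1.0, 1.0])
df_max_fit, kd_fit = popt
kd_err = np.sqrt(np.diag(pcov))[1]
print(f"Kd ≈ {kd_fit:.2f} ± {kd_err:.2f} µM")
```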

Keywords: agrochemical screening, antiviral, begomovirus, geminivirus, peptides, plasmon, TYLCV

Procedia PDF Downloads 242
3 A Parallel Cellular Automaton Model of Tumor Growth for Multicore and GPU Programming

Authors: Manuel I. Capel, Antonio Tomeu, Alberto Salguero

Abstract:

Tumor growth from a single transformed cancer cell up to a clinically apparent mass spans a range of spatial and temporal magnitudes. Through computer simulations, cellular automata (CA) can accurately describe the complexity of tumor development. With appropriate CA-based software tools, tumor development prognosis can be made without subjecting patients to burdensome medical examinations or painful invasive procedures. In silico testing mainly refers to computational biology research with application to clinical actions in medicine. Establishing sound computer-based models of cellular behavior certainly reduces costs and saves precious time compared with carrying out experiments in vitro in the laboratory or in vivo with living cells and organisms. Such models aim to produce scientifically relevant results compared with traditional in vitro testing, which is slow, expensive, and does not generally have acceptable reproducibility under the same conditions. To speed up computer simulations of cellular models, the literature shows recent proposals based on the CA approach that include advanced techniques, such as the clever use of efficient supporting data structures when modeling with deterministic or stochastic cellular automata. Multiparadigm and multiscale simulation of tumor dynamics is only beginning to be developed by the research community. Stochastic cellular automata (SCA), whose parallel programming implementations can yield high computational performance, are of great interest to explore up to their computational limits. There have been some optimization-based approaches to advance multiparadigm models of tumor growth, which mainly seek to improve the performance of these models by guaranteeing efficient memory accesses or by considering the dynamic evolution of the memory space (grids, trees, …) that holds crucial data in simulations. In our opinion, these optimizations are not decisive enough to achieve the high-performance computing power that cell-behavior simulation programs actually need. The possibility of using multicore and GPU parallelism as a promising multiplatform framework to develop new programming techniques for speeding up simulations has only started to be explored in the last few years. This paper presents a model that incorporates parallel processing, identifying the synchronization necessary for speeding up tumor growth simulations implemented in Java and C++ programming environments. The speed-up provided by specific parallel syntactic constructs, such as executors (thread pools) in Java, is studied. The new parallel tumor growth model is validated using implementations in Java and C++ on two different platforms: an Intel Core i-X chipset and an HPC cluster of processors at our university. The parallelization of the Polesczuk and Enderling model (commonly used by researchers in mathematical oncology) proposed here is analyzed with respect to performance gain. We intend to apply the model and the overall parallelization technique presented here to solid tumors at specific sites such as the prostate, breast, or colon. Our final objective is to set up a multiparadigm model capable of modelling angiogenesis, growth inhibition induced by chemotaxis, and the effect of therapies based on the presence of cytotoxic/cytostatic drugs.
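As a rough illustration of the strip-wise domain decomposition and synchronization pattern discussed above, the sketch below uses Python's concurrent.futures thread pool; the authors' implementations are in Java (executors) and C++, and the simple probabilistic division rule and parameters here are placeholders rather than the Polesczuk and Enderling model itself.

```python
# Sketch of strip-wise parallel updating of a stochastic CA tumor lattice:
# workers read the frozen current grid, collect division events per row strip,
# and the new grid is assembled only after all futures complete (the
# synchronization point). Division rule and parameters are illustrative.
import numpy as np
from concurrent.futures import ThreadPoolExecutor

def step_strip(grid, rows, p_div, seed):
    """Return positions of daughter cells produced within one strip of rows."""
    rng = np.random.default_rng(seed)           # per-strip generator, thread-safe
    n = grid.shape[0]
    events = []
    for i in rows:
        for j in range(n):
            if grid[i, j] == 1 and rng.random() < p_div:
                nbrs = [(i + di, j + dj)
                        for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1))
                        if 0 <= i + di < n and 0 <= j + dj < n
                        and grid[i + di, j + dj] == 0]
                if nbrs:                          # divide into a random empty neighbor
                    events.append(nbrs[rng.integers(len(nbrs))])
    return events

def parallel_step(grid, step, p_div=0.3, n_workers=4):
    strips = np.array_split(np.arange(grid.shape[0]), n_workers)
    with ThreadPoolExecutor(max_workers=n_workers) as pool:
        futures = [pool.submit(step_strip, grid, rows, p_div, 101 * step + k)
                   for k, rows in enumerate(strips)]
        events = [ev for f in futures for ev in f.result()]   # barrier
    new_grid = grid.copy()
    for i, j in events:                           # serial merge avoids write races
        new_grid[i, j] = 1
    return new_grid

grid = np.zeros((200, 200), dtype=np.int8)
grid[100, 100] = 1                                # single transformed cell as seed
for t in range(50):
    grid = parallel_step(grid, t)
print("tumor cells:", int(grid.sum()))
```

Note that CPython's global interpreter lock limits the speed-up of pure-Python workers; the point of the sketch is the decomposition and synchronization pattern, which maps directly onto Java thread pools or C++ threads.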

Keywords: cellular automaton, tumor growth model, simulation, multicore and manycore programming, parallel programming, high performance computing, speed up

Procedia PDF Downloads 213
2 Improving Data Completeness and Timely Reporting: A Joint Collaborative Effort between Partners in Health and Ministry of Health in Remote Areas, Neno District, Malawi

Authors: Wiseman Emmanuel Nkhomah, Chiyembekezo Kachimanga, Moses Banda Aron, Julia Higgins, Manuel Mulwafu, Kondwani Mpinga, Mwayi Chunga, Grace Momba, Enock Ndarama, Dickson Sumphi, Atupere Phiri, Fabien Munyaneza

Abstract:

Background: Data is key to supporting health service delivery, as stakeholders, including NGOs, rely on it for effective service delivery, decision-making, and system strengthening. Several studies have generated debate about the quality of data from national health management information systems (HMIS) in sub-Saharan Africa. This limits the utilization of data in resource-limited settings, which already struggle to meet standards set by the World Health Organization (WHO). We aimed to evaluate the improvement in data quality of the Neno district HMIS over a four-year period (2018-2021) following quarterly data reviews introduced in January 2020 by the district health management team and Partners In Health. Methods: An exploratory mixed-methods design was used to examine reporting rates, followed by in-depth key informant interviews (KIIs) and focus group discussions (FGDs). We used the WHO desk review module to assess the quality of HMIS data captured in Neno district from 2018 to 2021. The metrics assessed included the completeness and timeliness of 34 reports. Completeness was measured as the percentage of non-missing reports. Timeliness was measured as the percentage of reports submitted by the expected reporting deadline. We computed t-tests and recorded p-values, summaries, and percentage changes using R and Excel 2016. We analyzed demographics for the key informant interviews in Power BI. We developed themes from 7 FGDs and 11 KIIs using Dedoose software, from which we extracted healthcare workers' perceptions, the interventions implemented, and suggestions for improvement. The study was reviewed and approved by the Malawi National Health Science Research Committee (IRB: 22/02/2866). Results: Overall, the average reporting completeness rate was 83.4% before and 98.1% after the reviews were introduced, while timeliness was 68.1% and 76.4%, respectively. Completeness of reports increased over time: 2018, 78.8%; 2019, 88%; 2020, 96.3%; and 2021, 99.9% (p < 0.004). Timeliness declined until 2021, when it improved: 2018, 68.4%; 2019, 68.3%; 2020, 67.1%; and 2021, 81% (p < 0.279). Comparing 2021 reporting rates to the mean of the three preceding years, completeness increased from 88% to 99%, while timeliness increased from 68% to 81%. Sixty-five percent of reports consistently met the national standard of 90% or above for completeness, but only 24% did so for timeliness. Thirty-two percent of reports met the national standard. Only 9% improved in both completeness and timeliness: the cervical cancer, nutrition care support and treatment, and youth-friendly health services reports. Fifty percent of reports did not improve to the standard in timeliness, and only one did not in completeness. Factors associated with improvement included better communication and reminders through internal channels, data quality assessments, checks, and reviews. Decentralizing data entry to the facility level was suggested to improve timeliness. Conclusion: The findings suggest that data quality in the district HMIS has improved following these collaborative efforts. We recommend maintaining such initiatives to identify remaining quality gaps and sharing results publicly to support increased use of data. These results can inform the Ministry of Health and its partners about effective interventions and guide initiatives for improving data quality.
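To make the two metrics concrete, the sketch below shows one way the completeness and timeliness percentages and the before/after comparison could be computed. It assumes a hypothetical table of monthly facility reports with submitted and on_time flags, and it uses Python with pandas and scipy rather than the R and Excel 2016 workflow actually used in the study.

```python
# Illustrative computation of completeness (% of non-missing reports) and
# timeliness (% of reports submitted by the deadline), plus a Welch t-test
# comparing the periods before and after the quarterly reviews began.
# The CSV layout and column names are hypothetical.
import pandas as pd
from scipy import stats

df = pd.read_csv("hmis_reports.csv", parse_dates=["period"])
# expected columns: period, facility, report_name, submitted (0/1), on_time (0/1)

df["phase"] = df["period"].dt.year.map(lambda y: "after" if y >= 2020 else "before")

monthly = (df.groupby(["phase", "period"])
             .agg(completeness=("submitted", "mean"),
                  timeliness=("on_time", "mean"))
             .mul(100))                                    # express as percentages

print(monthly.groupby("phase").mean().round(1))            # before vs. after averages

t, p = stats.ttest_ind(monthly.loc["after", "completeness"],
                       monthly.loc["before", "completeness"],
                       equal_var=False)                     # Welch's t-test
print(f"completeness: t = {t:.2f}, p = {p:.4f}")
```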

Keywords: data quality, data utilization, HMIS, collaboration, completeness, timeliness, decision-making

Procedia PDF Downloads 52
1 Non-Thermal Pulsed Plasma Discharge for Contaminants of Emerging Concern Removal in Water

Authors: Davide Palma, Dimitra Papagiannaki, Marco Minella, Manuel Lai, Rita Binetti, Claire Richard

Abstract:

Modern analytical technologies allow us to detect water contaminants at trace and ultra-trace concentrations, highlighting that a large number of organic compounds are not efficiently abated by most wastewater treatment facilities relying on biological processes; these micropollutants are usually referred to as contaminants of emerging concern (CECs). The availability of reliable and effective technologies, able to guarantee the high standards of water quality demanded by legislators worldwide, has therefore become a primary need. In this context, plasma treatment of water stands out among developing technologies, as it is extremely effective in the abatement of numerous classes of pollutants, cost-effective, and environmentally friendly. In this work, a custom-built non-thermal pulsed plasma discharge generator was used to abate the concentration of selected CECs in water samples. Samples were treated in a 50 mL Pyrex reactor using two different types of plasma discharge, occurring either at the surface of the treated solution or underwater, both working with positive polarity. The distance between the tips of the electrodes determined where the discharge formed: underwater when the distance was < 2 mm, and at the water surface when the distance was > 2 mm. Peak voltage was in the 100-130 kV range, with typical current values of 20-40 A. The pulse duration was 500 ns, and the discharge frequency could be manually set between 5 and 45 Hz. Treatment of a 100 µM diclofenac solution in MilliQ water, with a pulse frequency of 17 Hz, revealed that surface discharge was more efficient in the degradation of diclofenac, which was no longer detectable after 6 minutes of treatment. Over 30 minutes were required to obtain the same result with underwater discharge. These results are explained by the higher rate of H₂O₂ formation (21.80 µmol L⁻¹ min⁻¹ for surface discharge against 1.20 µmol L⁻¹ min⁻¹ for underwater discharge), the larger discharge volume and UV light emission, and the high rates of ozone and NOx production (up to 800 and 1400 ppb, respectively) observed when working with surface discharge. The surface discharge was then used for the treatment of three selected perfluoroalkyl compounds, namely perfluorooctanoic acid (PFOA), perfluorohexanoic acid (PFHxA), and perfluorooctanesulfonic acid (PFOS), both individually and in mixture, in ultrapure water and groundwater matrices with an initial concentration of 1 ppb. In both matrices, PFOS exhibited the best degradation, reaching complete removal after 30 min of treatment (degradation rate constants of 0.107 min⁻¹ in ultrapure water and 0.0633 min⁻¹ in groundwater), while the degradation rates of PFOA and PFHxA were around 65% and 80% slower, respectively. Total nitrogen (TN) measurements revealed increases of up to 45 mg L⁻¹ h⁻¹ in water samples treated with surface discharge, while in analogous samples treated with underwater discharge, the TN increase was 5 to 10 times lower. These results can be explained by the significant NOx concentrations (over 1400 ppb) measured above the functioning reactor operating with surface discharge; rapid NOx hydrolysis led to nitrate accumulation in the solution, explaining the observed evolution of TN values. Ion chromatography measurements confirmed that the vast majority of TN was in the form of nitrates. In conclusion, non-thermal pulsed plasma discharge, obtained with a custom-built generator, was proven to effectively degrade diclofenac in water matrices, confirming the potential interest of this technology for wastewater treatment. The surface discharge proved more effective in CEC removal due to the high rates of formation of H₂O₂, ozone, and reactive radical species, and the strong UV light emission. Furthermore, the nitrate-enriched water obtained after treatment could be an interesting added-value product for use as a fertilizer in agriculture. Acknowledgment: This project has received funding from the European Union’s Horizon 2020 research and innovation programme under the Marie Sklodowska-Curie grant agreement No 765860.
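The degradation rate constants quoted above are pseudo-first-order constants of the kind obtained from a log-linear fit of concentration versus treatment time. The short sketch below shows the generic calculation; the time points and concentrations are placeholder values, not data from this study.

```python
# Pseudo-first-order kinetics: ln(C0/C) = k * t, so the slope of ln(C0/C)
# versus treatment time gives the rate constant k (min^-1).
# Time and concentration values below are illustrative placeholders.
import numpy as np

t_min = np.array([0, 5, 10, 15, 20, 30])                   # treatment time, min
conc = np.array([1.00, 0.60, 0.35, 0.20, 0.12, 0.04])      # relative concentration C/C0

k, intercept = np.polyfit(t_min, np.log(conc[0] / conc), 1)
print(f"k ≈ {k:.3f} min^-1, half-life ≈ {np.log(2) / k:.1f} min")
```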

Keywords: CECs removal, nitrogen fixation, non-thermal plasma, water treatment

Procedia PDF Downloads 94