Search results for: soil water content
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 14825

305 Superlyophobic Surfaces for Increased Heat Transfer during Condensation of CO₂

Authors: Ingrid Snustad, Asmund Ervik, Anders Austegard, Amy Brunsvold, Jianying He, Zhiliang Zhang

Abstract:

CO₂ capture, transport and storage (CCS) is essential to mitigate global anthropogenic CO₂ emissions. To make CCS a widely implemented technology in, e.g., the power sector, the reduction of costs is crucial. For a large cost reduction, every part of the CCS chain must contribute. By increasing the heat transfer efficiency during liquefaction of CO₂, a necessary step for, e.g., ship transportation, the costs associated with the process are reduced. Heat transfer rates during dropwise condensation are up to one order of magnitude higher than during filmwise condensation. Dropwise condensation usually occurs on a non-wetting (superlyophobic) surface. The vapour condenses in discrete droplets, and the non-wetting nature of the surface reduces the adhesion forces and results in shedding of condensed droplets. This, in turn, creates fresh nucleation sites for further droplet condensation, effectively increasing the liquefaction efficiency. In addition, the droplets themselves have a smaller heat transfer resistance than a liquid film, resulting in increased heat transfer rates from vapour to solid. Surface tension is a crucial parameter for dropwise condensation, due to its impact on the solid-liquid contact angle. A low surface tension usually results in a low contact angle, and in turn in spreading of the condensed liquid on the surface. CO₂ has a very low surface tension compared to water; however, at relevant temperatures and pressures for CO₂ condensation, its surface tension is comparable to that of organic compounds such as pentane. Dropwise condensation of CO₂ is a completely new field of research. Therefore, knowledge of several important parameters such as contact angle and drop size distribution must be gained in order to understand the nature of the condensation. A new setup has been built to measure these relevant parameters. The main parts of the experimental setup are a pressure chamber in which the condensation occurs and a high-speed camera.
The process of CO₂ condensation is visually monitored, and one can determine the contact angle, the contact angle hysteresis and hence the surface adhesion of the liquid. CO₂ condensation on different surfaces, e.g. copper, aluminium and stainless steel, can be analysed. The experimental setup is built for accurate measurement of the temperature difference between the surface and the condensing vapour and accurate pressure measurements in the vapour. The temperature will be measured directly underneath the condensing surface. The next step of the project will be to fabricate nanostructured surfaces for inducing superlyophobicity. Roughness is a key feature for achieving contact angles above 150° (the limit for superlyophobicity), and controlled, periodic roughness on the nanoscale is beneficial. Surfaces that are non-wetting towards organic non-polar liquids are candidate surface structures for dropwise condensation of CO₂.

Keywords: CCS, dropwise condensation, low surface tension liquid, superlyophobic surfaces

Procedia PDF Downloads 255
304 Complete Genome Sequence Analysis of Pasteurella multocida Subspecies multocida Serotype A Strain PMTB2.1

Authors: Shagufta Jabeen, Faez J. Firdaus Abdullah, Zunita Zakaria, Nurulfiza M. Isa, Yung C. Tan, Wai Y. Yee, Abdul R. Omar

Abstract:

Pasteurella multocida (PM) is an important veterinary opportunistic pathogen particularly associated with septicemic pasteurellosis, pneumonic pasteurellosis and hemorrhagic septicemia in cattle and buffaloes. P. multocida serotype A has been reported to cause fatal pneumonia and septicemia. The Malaysian isolate PMTB2.1 of Pasteurella multocida subspecies multocida serotype A was first isolated from buffaloes that died of septicemia. In this study, the genome of P. multocida strain PMTB2.1 was sequenced with third-generation sequencing technology on the PacBio RS2 system and analyzed bioinformatically via de novo assembly followed by in-depth comparative genomics. De novo assembly of the PacBio raw reads generated 3 contigs; gap filling of the aligned contigs with PCR sequencing then yielded a single contiguous circular chromosome with a genomic size of 2,315,138 bp and a GC content of approximately 40.32% (Accession number CP007205). The PMTB2.1 genome comprises 2,176 protein-coding sequences, 6 rRNA operons, 56 tRNAs and 4 ncRNA sequences. A comparative analysis of nine complete genomes, comprising Actinobacillus pleuropneumoniae, Haemophilus parasuis, Escherichia coli and six P. multocida genomes (PM70, PM36950, PMHN06, PM3480, PMHB01 and PMTB2.1), was carried out based on OrthoMCL analysis and a Venn diagram. The analysis showed that 282 CDSs (13%) are unique to PMTB2.1 and that 1,125 CDSs have orthologs in all nine genomes. This reflects the overall close relationship of these bacteria and supports their classification in the Gamma subdivision of the Proteobacteria. In addition, genomic distance analysis among all nine genomes indicated that PMTB2.1 is closely related to the other five Pasteurella strains, with genomic distances of less than 0.13.
Synteny analysis shows subtle differences in genetic structure among the P. multocida genomes, indicating frequent gene transfer events among different P. multocida strains. However, PM3480 and PM70 exhibited exceptionally large structural variation, as they were swine and chicken isolates, respectively. Furthermore, the genomic structure of PMTB2.1 most closely resembles that of PM36950, with a genome approximately 34,380 bp smaller than that of PM36950; the strain-specific Integrative and Conjugative Element (ICE) found in PM36950 is absent from PMTB2.1. Meanwhile, two intact prophage sequences of approximately 62 kb were found only in PMTB2.1, one of which is similar to the transposable phage SfMu. A phylogenomic tree was constructed based on the OrthoMCL analysis and rooted with E. coli, A. pleuropneumoniae and H. parasuis. P. multocida strain PMTB2.1 clustered with the bovine isolates PM36950 and PMHB01, separate from the avian isolate PM70 and the swine isolates PM3480 and PMHN06, and distant from Actinobacillus and Haemophilus. Previous studies based on Single Nucleotide Polymorphisms (SNPs) and Multilocus Sequence Typing (MLST) were unable to show a clear phylogenetic relatedness between Pasteurella multocida and its different hosts. In conclusion, this study has provided insight into the genomic structure of PMTB2.1 in terms of potential genes that can function as virulence factors, for future study in elucidating the mechanisms behind the ability of the bacteria to cause disease in susceptible animals.
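The reported genome statistics (2,315,138 bp; GC content of about 40.32%) follow from a simple base count over the assembled chromosome. A minimal sketch of the computation, using a short made-up fragment rather than actual PMTB2.1 sequence:

```python
def gc_content(seq: str) -> float:
    """Return the percentage of G and C bases in a DNA sequence."""
    seq = seq.upper()
    gc = sum(1 for base in seq if base in "GC")
    return 100.0 * gc / len(seq)

# Illustrative 10-base fragment, not actual PMTB2.1 data.
print(round(gc_content("ATGCGCATAT"), 2))  # 40.0
```

In practice this count would be run over the full 2.3 Mb assembly, e.g. as parsed from a FASTA file.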

Keywords: comparative genomics, DNA sequencing, phage, phylogenomics

Procedia PDF Downloads 168
303 Digital Adoption of Sales Support Tools for Farmers: A Technology Organization Environment Framework Analysis

Authors: Sylvie Michel, François Cocula

Abstract:

Digital agriculture is an approach that exploits information and communication technologies. These encompass data acquisition tools like mobile applications, satellites, sensors, connected devices, and smartphones. Additionally, it involves transfer and storage technologies such as 3G/4G coverage, low-bandwidth terrestrial or satellite networks, and cloud-based systems. Furthermore, embedded or remote processing technologies, including drones and robots for process automation, along with high-speed communication networks accessible through supercomputers, are integral components of this approach. While farm-level adoption studies of digital agricultural technologies have emerged in recent years, they remain relatively limited in comparison to other agricultural practices. To bridge this gap, this study delves into farmers' intention to adopt digital tools, employing the technology-organization-environment (TOE) framework. A qualitative research design encompassed fifteen semi-structured interviews conducted with key stakeholders both prior to and following the 2020-2021 COVID-19 lockdowns in France. The interview transcripts then underwent thorough thematic content analysis, and the data and verbatim quotes were triangulated for validation. A coding process systematically organized the data, ensuring an orderly and structured classification. Our research extends its contribution by delineating sub-dimensions within each primary dimension. A total of nine sub-dimensions were identified, categorized as follows: perceived usefulness for communication, perceived usefulness for productivity, and perceived ease of use constitute the first dimension; technological resources, financial resources, and human capabilities constitute the second; and market pressure, institutional pressure, and the COVID-19 situation constitute the third.
Furthermore, this analysis enriches the TOE framework by incorporating entrepreneurial orientation as a moderating variable. Managerial orientation emerges as a pivotal factor influencing adoption intention, with producers acknowledging the significance of utilizing digital sales support tools to combat "greenwashing" and elevate their overall brand image. Specifically, it illustrates that producers recognize the potential of digital tools in time-saving and streamlining sales processes, leading to heightened productivity. Moreover, it highlights that the intent to adopt digital sales support tools is influenced by a market mimicry effect. Additionally, it demonstrates a negative association between the intent to adopt these tools and the pressure exerted by institutional partners. Finally, this research establishes a positive link between the intent to adopt digital sales support tools and economic fluctuations, notably during the COVID-19 pandemic. The adoption of sales support tools in agriculture is a multifaceted challenge encompassing three dimensions and nine sub-dimensions. The research delves into the adoption of digital farming technologies at the farm level through the TOE framework. This analysis provides significant insights beneficial for policymakers, stakeholders, and farmers. These insights are instrumental in making informed decisions to facilitate a successful digital transition in agriculture, effectively addressing sector-specific challenges.

Keywords: adoption, digital agriculture, e-commerce, TOE framework

Procedia PDF Downloads 38
302 Cycle-Oriented Building Components and Constructions Made from Paper Materials

Authors: Rebecca Bach, Evgenia Kanli, Nihat Kiziltoprak, Linda Hildebrand, Ulrich Knaack, Jens Schneider

Abstract:

The building industry has a high demand for resources and at the same time is responsible for a significant amount of the waste created worldwide. Today's building components need to contribute to the protection of natural resources without creating waste. This is defined in the product development phase and determines the degree to which a product is cycle-oriented. Paper-based materials are advantageous due to their renewable origin and their ability to incorporate different functions. Besides the ecological aspects of renewable origin and recyclability, the main advantages of paper materials are their lightweight but stiff structure, optimized production processes and good insulation values. The main deficits from a building technology perspective are the material's vulnerability to humidity and water as well as its flammability. At the material level, these problems can be solved by coatings or through material modification. At the construction level, intelligent setup and layering of a building component can mitigate or even solve these issues. The aim of the present work is to provide an overview of developed building components and construction typologies made mainly from paper materials. The research is structured in four parts: (1) functions and requirements, (2) preselection of paper-based materials, (3) development of building components and (4) evaluation. As part of the research methodology, the needs of the building sector are first analyzed with the aim of defining the main areas of application and, consequently, the requirements. Various paper materials are tested in order to identify to what extent the requirements are satisfied and to determine potential optimizations or modifications, also in combination with other construction materials. By exploiting the material's potentials and solving the deficits at the material and construction levels, building components and construction typologies are developed.
The evaluation and the calculation of the structural mechanics and structural principles show that different construction typologies can be derived. Profiles such as paper tubes are best suited to skeleton constructions. Massive structures, on the other hand, can be formed by plate-shaped elements such as solid board or honeycomb. For insulation purposes, corrugated cardboard or cellulose flakes have the best properties, while layered solid board can be applied to prevent inner condensation. By enhancing these properties through material combinations, for instance with mineral coatings, functional constructions made mainly of paper materials were developed. In summary, paper materials offer a huge variety of possible applications in the building sector. Through these studies, a general base of knowledge about how to build with paper was developed, which is to be reinforced by further research.

Keywords: construction typologies, cycle-oriented construction, innovative building material, paper materials, renewable resources

Procedia PDF Downloads 255
301 Developing an Integrated Clinical Risk Management Model

Authors: Mohammad H. Yarmohammadian, Fatemeh Rezaei

Abstract:

Introduction: Improving patient safety is one of the main priorities in healthcare systems, so clinical risk management in organizations has become increasingly significant. Although several tools have been developed for clinical risk management, each has its own limitations. Aims: This study aims to develop a comprehensive tool that compensates for the limitations of each risk assessment and management tool by drawing on the advantages of the others. Methods: The procedure comprised two main stages: development of an initial model through meetings with professors and a literature review, followed by implementation and verification of the final model. Subjects and Methods: This is a quantitative-qualitative study. For the qualitative dimension, the focus group method with an inductive approach was used. To evaluate the results of the qualitative study, quantitative assessment of the two parts of the fourth phase and the seven phases of the research was conducted. Purposive and stratified sampling of the various teams responsible for the selected process was conducted in the operating room. The final model was verified in eight phases through application of activity breakdown structure, failure mode and effects analysis (FMEA), healthcare risk priority number (RPN), root cause analysis (RCA), fault tree (FT), and Eindhoven Classification Model (ECM) tools. The model was applied to patients admitted to a day-clinic ward of a public hospital for surgery from October 2012 to June. Statistical Analysis Used: Qualitative data were analyzed through content analysis, and quantitative analysis was done through checklists and edited RPN tables. Results: After verification of the final model in eight steps, the patient admission process for surgery was broken down by focus discussion group (FDG) members into five main phases. Then, following the adopted FMEA methodology, 85 failure modes, along with their causes, effects, and preventive capabilities, were set out in the tables.
The developed tables for calculating the RPN index contain three criteria for severity, two for probability, and two for preventability. Three failure modes were above the determined significant-risk limit (RPN > 250). After a 3-month period, patient misidentification incidents were the most frequently reported events. The RPN criteria of the misidentification events were compared, showing that the RPN numbers for the three reported misidentification events could be determined against the scores predicted in the previous phase. Root causes identified through the fault tree were categorized with the ECM. A wrong-side surgery event was selected by the focus discussion group for a proposed improvement action. The most important cause was a lack of planning of the number and priority of surgical procedures. After prioritization of the suggested interventions, a computerized registration system in the health information system (HIS) was adopted to prepare the action plan in the final phase. Conclusion: The complexity of the healthcare industry requires risk managers to have a multifaceted vision. Applying only a retrospective or a prospective tool for risk management therefore does not work, and each organization must provide conditions for the potential application of these methods. The results of this study showed that the integrated clinical risk management model can be used in hospitals as an efficient tool to improve clinical governance.
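The RPN screening step described above can be illustrated with the classic FMEA product of criterion scores. The scores below are hypothetical, and the study's edited tables (three severity, two probability and two preventability criteria) are more elaborate than this minimal sketch:

```python
def rpn(severity: int, probability: int, preventability: int) -> int:
    """Classic FMEA risk priority number: the product of the criterion scores."""
    return severity * probability * preventability

THRESHOLD = 250  # the study flags failure modes with RPN > 250 as significant

score = rpn(9, 6, 5)  # hypothetical scores for one failure mode
print(score, score > THRESHOLD)  # 270 True
```

With such a threshold, each of the 85 tabulated failure modes would be scored and only those exceeding 250 carried forward to root cause analysis.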

Keywords: failure mode and effects analysis, risk management, root cause analysis, model

Procedia PDF Downloads 226
300 A Comparative Study of the Tribological Behavior of Bilayer Coatings for Machine Protection

Authors: Cristina Diaz, Lucia Perez-Gandarillas, Gonzalo Garcia-Fuentes, Simone Visigalli, Roberto Canziani, Giuseppe Di Florio, Paolo Gronchi

Abstract:

During their lifetime, industrial machines are often subjected to extreme chemical, mechanical and thermal conditions. In some cases, the loss of efficiency comes from degradation of the surface as a result of its exposure to abrasive environments that cause wear. This is a common problem in industries of diverse nature, such as the food, paper or concrete industries, among others. For this reason, a good selection of the material is of high importance. In the machine design context, stainless steels such as AISI 304 and 316 are widely used. However, the severity of the external conditions can require additional protection for the steel, and coating solutions are sometimes demanded in order to extend the lifespan of these materials. Therefore, the development of effective coatings with high wear resistance is of utmost technological relevance. In this research, bilayer coatings made of titanium-tantalum, titanium-niobium, titanium-hafnium, and titanium-zirconium have been developed by magnetron sputtering using PVD (Physical Vapor Deposition) technology. Their tribological behavior has been measured and evaluated under different environmental conditions. Two kinds of steel were used as substrates: AISI 304 and AISI 316. For comparison with these materials, a titanium alloy substrate was also employed. Regarding the characterization, wear rate and friction coefficient were evaluated with a tribo-tester in a pin-on-ball configuration with different lubricants, such as tomato sauce, wine, olive oil, wet compost, and a mix of sand and concrete with water and NaCl, to approximate real extreme conditions. In addition, topographical images of the wear tracks were obtained in order to gain more insight into the wear behavior, and scanning electron microscope (SEM) images were taken to evaluate the adhesion and quality of the coatings.
The characterization was completed with the measurement of nanoindentation hardness and elastic modulus. Concerning the results, coating thicknesses varied from 100 nm (Ti-Zr layer) to 1.4 µm (Ti-Hf layer), and SEM images confirmed that the addition of the Ti layer improved the adhesion of the coatings. Moreover, the results pointed out that these coatings increased the wear resistance in comparison with the original substrates under environments of different severity. Furthermore, nanoindentation results showed an improvement of the elastic strain to failure and a high modulus of elasticity (approximately 200 GPa). In conclusion, Ti-Ta, Ti-Zr, Ti-Nb, and Ti-Hf are very promising and effective coatings in terms of tribological behavior, considerably improving the wear resistance and friction coefficient of typically used machine materials.
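The wear rate measured on such a tribo-tester is commonly reported as a specific wear rate: worn volume per unit load and sliding distance. This is the standard tribological definition, not a formula stated in the abstract, and the numbers below are hypothetical:

```python
def specific_wear_rate(volume_mm3: float, load_n: float, distance_m: float) -> float:
    """Specific wear rate k = V / (F * s), in mm^3 per N*m."""
    return volume_mm3 / (load_n * distance_m)

# Hypothetical test values: 0.02 mm^3 worn under a 5 N load over 100 m of sliding.
k = specific_wear_rate(0.02, 5.0, 100.0)
print(k)  # 4e-05
```

A lower k for a coated sample than for the bare substrate, under the same load and distance, is what "increased wear resistance" quantifies.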

Keywords: coating, stainless steel, tribology, wear

Procedia PDF Downloads 131
299 The Display of Environmental Information to Promote Energy Saving Practices: Evidence from a Massive Behavioral Platform

Authors: T. Lazzarini, M. Imbiki, P. E. Sutter, G. Borragan

Abstract:

While several strategies, such as the development of more efficient appliances, the financing of insulation programs or the rollout of smart meters, represent promising tools to reduce future energy consumption, their implementation relies on people's decisions and actions. Likewise, engaging with consumers to reshape their behavior has been shown to be another important way to reduce energy usage. For these reasons, integrating the human factor into the energy transition has become a major objective for researchers and policymakers. Digital education programs based on tangible and gamified user interfaces have become a new tool with the potential to reduce energy consumption. The B2020 program, developed by the firm “Économie d’Énergie SAS”, proposes a digital platform to encourage pro-environmental behavior change among employees and citizens. The platform integrates 160 eco-behaviors to help save energy and water and reduce waste and CO2 emissions. A total of 13,146 citizens have used the tool so far to declare the range of eco-behaviors they adopt in their daily lives. The present work builds on this database to identify the potential impact of the adopted energy-saving behaviors (n=62) on reducing the energy use of buildings. To this end, behaviors were classified into three categories regarding the nature of their implementation: eco-habits (e.g., turning off the light), eco-actions (e.g., installing low-carbon technology such as LED light bulbs) and home refurbishments (e.g., wall insulation or double-glazed energy-efficient windows). General Linear Models (GLM) disclosed a significantly higher frequency of eco-habits compared with the number of home refurbishments realized by the platform users. While this might be explained in part by the high financial costs associated with home renovation works, it also contrasts with the up to three times larger energy savings that can be accomplished by these means.
Furthermore, multiple regression models failed to disclose the expected relationship between energy savings and the frequency of adopted eco-behaviors, suggesting that energy-related practices are not necessarily driven by the corresponding energy savings. Finally, our results also suggested that people adopting more eco-habits and eco-actions were more likely to engage in home refurbishments. Altogether, these results fit well with a growing body of scientific research showing that energy-related practices do not necessarily maximize utility, as postulated by traditional economic models, and suggest that other variables might be triggering them. Promoting home refurbishments could benefit from the adoption of complementary energy-saving habits and actions.

Keywords: energy-saving behavior, human performance, behavioral change, energy efficiency

Procedia PDF Downloads 174
298 Viability Analysis of a Centralized Hydrogen Generation Plant for Use in Oil Refining Industry

Authors: C. Fúnez Guerra, B. Nieto Calderón, M. Jaén Caparrós, L. Reyes-Bozo, A. Godoy-Faúndez, E. Vyhmeister

Abstract:

The global energy system is experiencing a change of scenery. Unstable energy markets and an increasing focus on climate change and sustainable development are forcing businesses to pursue new solutions in order to ensure future economic growth. This has led to interest in using hydrogen as an energy carrier in transportation and industrial applications. As an energy carrier, hydrogen is accessible and holds a high gravimetric energy density. Abundant in hydrocarbons, hydrogen can play an important role in the shift towards low-emission fossil value chains. By combining hydrogen production by natural gas reforming with carbon capture and storage, the overall CO2 emissions are significantly reduced. In addition, the flexibility of hydrogen as an energy storage medium makes it applicable as a stabilizer in the renewable energy mix. The recent development in hydrogen fuel cells is also raising expectations for a hydrogen-powered transportation sector. Hydrogen value chains exist to a large extent in industry today. Global hydrogen consumption was approximately 50 million tonnes (7.2 EJ) in 2013, with refineries, ammonia and methanol production and metal processing as the main consumers. Natural gas reforming produced 48% of this hydrogen, but without carbon capture and storage (CCS); the total emissions from this production reached 500 million tonnes of CO2, hence alternative production methods with lower emissions will be necessary in future value chains. Hydrogen from electrolysis has been used for a wide range of industrial chemical reactions for many years. Possibly the earliest use was for the production of ammonia-based fertilisers by Norsk Hydro, with a test reactor set up in Notodden, Norway, in 1927. This application also claims one of the world's largest electrolyser installations, at Sable Chemicals in Zimbabwe: its array of 28 electrolysers draws 80 MW, producing around 21,000 Nm3/h of hydrogen.
These electrolysers can compete if cheap sources of electricity are available and natural gas for steam reforming is relatively expensive. Because electrolysis of water produces oxygen as a by-product, a system of Autothermal Reforming (ATR) utilizing this oxygen has been analyzed. Replacing the air separation unit with electrolysers produces the required amount of oxygen to the ATR as well as additional hydrogen. The aim of this paper is to evaluate the technical and economic potential of large-scale production of hydrogen for oil refining industry. Sensitivity analysis of parameters such as investment costs, plant operating hours, electricity price and sale price of hydrogen and oxygen are performed.
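As a quick plausibility check on the Sable Chemicals figures quoted above (an 80 MW array producing about 21,000 Nm³/h of hydrogen), the implied specific energy consumption can be computed. This is an illustrative back-of-envelope calculation, not part of the study's sensitivity analysis:

```python
power_mw = 80.0           # reported power draw of the 28-electrolyser array
h2_output_nm3_h = 21_000  # reported hydrogen production rate, Nm^3/h

# Implied specific energy consumption in kWh per Nm^3 of hydrogen
specific_energy = power_mw * 1000 / h2_output_nm3_h
print(round(specific_energy, 2))  # 3.81
```

A figure of a few kWh per Nm³ is the scale against which electricity price must be weighed when comparing electrolysis with steam methane reforming.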

Keywords: autothermal reforming, electrolyser, hydrogen, natural gas, steam methane reforming

Procedia PDF Downloads 188
297 Obtaining Composite Cotton Fabric by Cyclodextrin Grafting

Authors: U. K. Sahin, N. Erdumlu, C. Saricam, I. Gocek, M. H. Arslan, H. Acikgoz-Tufan, B. Kalav

Abstract:

Finishing is an important part of fabric processing, with which a wide range of features are imparted to greige or colored fabrics for various end uses. In particular, by incorporating nano-scale particles into the fabric structure, composite fabrics, a kind of composite material, can be obtained. Composite materials, often shortened to composites, are engineered or naturally occurring materials made from two or more component materials with significantly different physical, mechanical or chemical characteristics that remain separate and distinct at the macroscopic or microscopic scale within the end product. Therefore, finishing, one of the fundamental methods applied to fabrics to obtain composite fabrics with many functionalities, was used in the current study for the same purpose. However, regardless of the finishing materials applied, the efficient life of a finished product in offering the desired feature is low, since the durability of finishes on the material is limited. Any increase in the durability of these finishes would extend the useful life of the textiles, resulting in more satisfied users. Therefore, in this study, since higher durability was desired for the finishing materials fixed on the fabrics, nano-scale hollow-structured cyclodextrins were chemically imparted to the structure of conventional cotton fabrics by grafting, with the help of the finishing technique, in order to fix them permanently. In this way, a processed and functionalized base fabric was obtained, with the potential to be treated in subsequent processes with many different finishing agents and nanomaterials. Henceforth, this fabric can be used as a multi-functional fabric due to the ability of cyclodextrins to capture molecules/particles via physical/chemical means.
In this study, scoured, rinsed and bleached plain-weave 100% cotton fabrics were utilized, because cotton textiles are among the most in-demand textile products in daily life. Cotton fabric samples were immersed in treatment baths containing β-cyclodextrin and 1,2,3,4-butanetetracarboxylic acid; to reduce the curing temperature, the catalyst sodium hypophosphite monohydrate was used. All impregnated fabric samples were pre-dried, and the grafting reaction was performed in the dry state. The treated and cured fabric samples were rinsed with warm distilled water and dried. The samples were dried for 4 h and weighed before and after finishing and rinsing. The stability and durability of β-cyclodextrins on the fabric surface against external factors such as washing, as well as the strength of the functionalized fabric in terms of tensile and tear strength, were tested. The presence and homogeneity of the distribution of β-cyclodextrins on the fabric surface were characterized.
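Weighing the samples before and after treatment typically serves to compute the graft add-on, the relative weight gain due to the fixed finish. The abstract does not state the formula or the measured values, so the sketch below uses the common add-on definition with hypothetical weights:

```python
def add_on_percent(weight_before_g: float, weight_after_g: float) -> float:
    """Weight gain of the fabric after grafting and rinsing, as a percentage."""
    return 100.0 * (weight_after_g - weight_before_g) / weight_before_g

# Hypothetical sample weights in grams, before treatment and after rinsing/drying.
print(round(add_on_percent(5.00, 5.35), 1))  # 7.0
```

Comparing add-on before and after repeated washing is one simple way to quantify the durability of the grafted β-cyclodextrin.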

Keywords: cotton fabric, cyclodextrin, improved durability, multifunctional composite textile

Procedia PDF Downloads 272
296 Challenges Faced in Hospitality and Tourism Education: Rural Versus Urban Universities

Authors: Adelaide Rethabile Motshabi Pitso-Mbili

Abstract:

The disparity between universities in rural and urban areas of South Africa is an ongoing issue. There are many differences between these universities, such as in the performance of students and lecturers, which is viewed as a worrying discrepancy related to knowledge gaps or educational inequality. According to research, rural students routinely perform worse than urban students in sub-Saharan Africa, and the disparity is wide when compared to the global average. This may be a result of the various challenges that universities in rural and urban areas face. Hence, the aim of this study was to compare the challenges faced by rural and urban universities, especially in hospitality and tourism programs, and to recommend possible solutions. This study used a qualitative methodology and included focus groups and in-depth interviews. Eight focus groups of final-year students in hospitality and tourism programs from four institutions participated, and four department heads of those programs took part in in-depth interviews. Additionally, the study was motivated by teacher collaboration theory, which proposes that colleagues can help one another for the benefit of students and the institution. It was revealed that rural universities face more challenges than urban universities when it comes to hospitality and tourism education. The results of the interviews showed that universities in rural areas have a high staff turnover rate and offer fewer courses due to a lack of resources, such as the infrastructure, staff, equipment, and materials needed to give students hands-on training on campus and in various hospitality and tourism programs. Urban universities, on the other hand, provide a variety of courses in the hospitality and tourism areas, and while resources are seldom an issue, they must deal with large class enrolments and insufficient funding to support them all.
Additionally, students in remote locations noted that having a lack of water and electricity makes it difficult for them to perform practical lessons. It is recommended that universities work together to collaborate or develop partnerships to help one another overcome obstacles and that universities in rural areas visit those in urban areas to observe how things are done there and to determine where they can improve themselves. The significance of the study is that it will truly bring rural and urban educational processes and practices into greater alignment of standards, benefits, and achievements; this will also help retain staff members within the rural area universities. The present study contributes to the literature by increasing the accumulation of knowledge on research topics, challenges, trends and innovation in hospitality and tourism education and setting forth an agenda for future research. The current study adds to the body of literature by expanding the accumulation of knowledge on research topics that contribute to trends and innovations in hospitality and tourism education and by laying out a plan for future research.

Keywords: hospitality and tourism education, rural and urban universities, collaboration, teacher and student performance, educational inequality

Procedia PDF Downloads 36
295 RAD-Seq Data Reveals Evidence of Local Adaptation between Upstream and Downstream Populations of Australian Glass Shrimp

Authors: Sharmeen Rahman, Daniel Schmidt, Jane Hughes

Abstract:

Paratya australiensis Kemp (Decapoda: Atyidae) is a widely distributed indigenous freshwater shrimp, highly abundant in eastern Australia. This species has been considered a model stream organism for studying genetics, dispersal, biology, behaviour and evolution in Atyids. Paratya has a filter-feeding and scavenging habit that plays a significant role in shaping lotic community structure. It has been shown to reduce periphyton and sediment on hard substrates of coastal streams and hence acts as a strongly interacting ecosystem macroconsumer. In addition, Paratya is one of the major food sources for stream-dwelling fishes. Paratya australiensis is a cryptic species complex consisting of nine highly divergent mitochondrial DNA lineages. Among them, one lineage has been observed to favour upstream sites at higher altitudes, with cooler water temperatures. This study aims to identify local adaptation in upstream and downstream populations of this lineage in three streams in the Conondale Range, north of Brisbane, Queensland, Australia. Two populations (upstream and downstream) from each stream were chosen to test for local adaptation, with a parallel pattern of adaptation expected across all streams. Six populations, each consisting of 24 individuals, were sequenced using the Restriction Site Associated DNA sequencing (RAD-seq) technique. Genetic markers (SNPs) were developed using double-digest RAD sequencing (ddRAD-seq) and used for de novo assembly of the Paratya genome. De novo assembly was performed with the STACKS program and produced 56,344 loci for 47 individuals from one stream. Among these individuals, 39 shared 5,819 loci, and these markers are being used to test for local adaptation between upstream and downstream populations using Fst outlier tests (Arlequin) and Bayesian analysis (BayeScan). Both the Fst outlier test and the Bayesian analysis detected 27 loci as likely to be under selection. 
Among these 27 loci, 3 showed significant evidence of selection in the BayeScan analysis. In contrast, the upstream and downstream populations are strongly diverged at neutral loci, with Fst = 0.37. Similar analyses will be conducted for all six populations to determine whether there is a parallel pattern of adaptation across all streams. Furthermore, multi-locus among-population covariance analysis will be performed to identify potential markers under selection and to compare single-locus versus multi-locus approaches for detecting local adaptation. Adaptive genes identified in this study can be used in future studies to design primers and test for adaptation in related crustacean species.
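The Fst outlier logic above can be sketched in a few lines. This is an illustrative toy example, not the authors' pipeline (they used STACKS, Arlequin and BayeScan); it computes per-SNP Fst between a hypothetical upstream and downstream population with Hudson's estimator and flags loci with unusually high divergence. All allele frequencies below are made up.

```python
def hudson_fst(p1, p2, n1, n2):
    """Hudson's Fst estimator for one biallelic SNP.
    p1, p2: alternate-allele frequencies; n1, n2: sampled allele counts."""
    num = (p1 - p2) ** 2 - p1 * (1 - p1) / (n1 - 1) - p2 * (1 - p2) / (n2 - 1)
    den = p1 * (1 - p2) + p2 * (1 - p1)
    return num / den if den > 0 else 0.0

# Hypothetical allele frequencies at five loci (24 diploids -> 48 alleles).
up   = [0.10, 0.50, 0.95, 0.40, 0.30]
down = [0.15, 0.55, 0.20, 0.45, 0.35]
fsts = [hudson_fst(p, q, 48, 48) for p, q in zip(up, down)]

# Flag loci whose Fst greatly exceeds the mean as candidate outliers
# (a crude stand-in for the Arlequin/BayeScan outlier machinery).
mean_fst = sum(fsts) / len(fsts)
outliers = [i for i, f in enumerate(fsts) if f > 3 * mean_fst]
print(outliers)  # locus 2, where the two populations are near-fixed apart
```

Dedicated tools model the neutral Fst distribution explicitly rather than using an ad hoc threshold, but the core signal is the same: loci whose divergence exceeds the neutral background.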

Keywords: Paratya australiensis, rainforest streams, selection, single nucleotide polymorphism (SNPs)

Procedia PDF Downloads 232
294 Moodle-Based E-Learning Course Development for Medical Interpreters

Authors: Naoko Ono, Junko Kato

Abstract:

According to the Ministry of Justice, 9,044,000 foreigners visited Japan in 2010, and the number of foreign residents in Japan was over 2,134,000 at the end of 2010. Further, medical tourism has emerged as a new area of business. Against this background, language barriers put the health of foreigners in Japan at risk, because they have difficulty accessing health care and communicating with medical professionals. Medical interpreting training is urgently needed in response to language problems resulting from the rapid increase in the number of foreign workers in Japan over recent decades. In particular, there is a growing need to communicate in international languages in Japanese medical settings, with Tokyo selected as the host city of the 2020 Summer Olympics. Because of the limited number of practical activities on medical interpreting, it is difficult for learners to acquire interpreting skills. To address this shortcoming, a web-based English-Japanese medical interpreting training system was developed. We conducted a literature review to identify learning contents and core competencies for medical interpreters using PubMed, PsycINFO, the Cochrane Library, and Google Scholar. Eleven papers indicating core competencies for medical interpreters were selected through the review. The core competencies abstracted from the literature showed consistency across previous research, while the content of training programs varied between domestic and international programs for medical interpreters. Results of the systematic review indicated five core competencies: (a) maintaining accuracy and completeness; (b) medical terminology and understanding the human body; (c) behaving ethically and making ethical decisions; (d) nonverbal communication skills; and (e) cross-cultural communication skills. 
We developed a web-based medical interpreter e-learning program covering these competencies. The program included the following: an online word list (Quizlet), allowing students to study online and on their smartphones; a self-study tool (Quizlet) for help with dictation and spelling; a word quiz (Quizlet); a test-generating system (Quizlet); an interactive body game (BBC); an online resource for understanding the code of ethics in medical interpreting; a webinar about non-verbal communication; and a webinar about incompetent versus competent cultural care. The design of a virtual environment allows the execution of complementary exercises for learners of medical interpreting and an introduction to the theoretical background of medical interpreting. Since this system adopts a self-learning style, it can mitigate the time and teaching-material restrictions of the classroom method. In addition, as a teaching aid, virtual medical interpreting is a powerful resource for understanding how actual medical interpreting is carried out. The developed e-learning system allows remote access, enabling students to study at their own place and during their free time, without being physically present in a classroom. A practical example will be presented to show the capabilities of the system. The developed web-based training program for medical interpreters could bridge the gap between medical professionals and patients with limited English proficiency.

Keywords: e-learning, language education, moodle, medical interpreting

Procedia PDF Downloads 341
293 Effect of High Dose of Black Tea Extract on Physiological Parameters of Mother and Pups in Experimental Albino Rats

Authors: Avijit Dey, Antony Gomes, Subir Chandra Dasgupta

Abstract:

Tea (Camellia sinensis) is one of the most popular beverages in the world, ranked second after water. Tea has been considered a health-promoting beverage since ancient times. Recently, immunomodulatory, anti-arthritic, antioxidant, anticancer and cardioprotective activities of tea have been established, but very few studies have examined the effect of high doses of black tea on health. The aim of the present study was to evaluate the role of low and high doses of black tea extract (BTE) on different physiological parameters of mothers and pups during the prenatal and postnatal developmental periods in experimental rodents. BTE was orally administered to LD (50 mg BTE/kg/day) and HD (100 mg BTE/kg/day) groups of rats (n = 6/group), but not to control groups, throughout the prenatal (day 0-21) and postnatal (day 21-42) periods. During the prenatal period (days 0, 7, 14, 20), body weight and urinary calcium, magnesium, urea and creatinine were measured. In the postnatal period (days 0, 10, 21), physical parameters of the pups, such as body weight, cranial length, cranial diameter, neck width, tail length and craniosacral length, were analyzed. Liver and lungs from pups and kidney, spleen, etc. from mothers were collected on day 42 for histopathological studies. Comparative urine strip analysis and RBC morphology (by SEM) were also performed for mothers of the different groups on day 42. The levels of the cytokines IL-1alpha, IL-1beta, IL-6, IL-10 and TNF-alpha were analysed by enzyme-linked immunosorbent assay (ELISA) on days 0, 20 and 42. The body weights of LD and HD mothers were significantly (P<0.05) lower than those of control mothers at day 20 of pregnancy, and there were also significant changes in urinary calcium, urea and creatinine. The biomorphometric analysis of the pups showed significant alterations (P<0.05) in the HD group relative to controls. Some histological alterations were also observed in pups and mothers. 
Comparative urine strip analysis and RBC morphology showed significant changes in the treated groups. LD- and HD-treated mothers showed an increase in proinflammatory cytokines such as IL-1beta and TNF-alpha and a decrease in the anti-inflammatory cytokine IL-10 on day 20 compared to control mothers. This study clearly indicated that a high dose of BTE has a detrimental effect on the pregnant mother and the pup. Further studies are in progress to elucidate the molecular mechanisms of action. This project was sponsored by the National Tea Research Foundation vide Project Sanction No. 17 (305)/2013/4423 dated 11th March, 2014. All experimental protocols described in the study were approved by the animal ethics committee.

Keywords: black tea extract, pregnancy, prenatal and postnatal development, inflammation

Procedia PDF Downloads 256
292 Charcoal Traditional Production in Portugal: Contribution to the Quantification of Air Pollutant Emissions

Authors: Cátia Gonçalves, Teresa Nunes, Inês Pina, Ana Vicente, C. Alves, Felix Charvet, Daniel Neves, A. Matos

Abstract:

The production of charcoal relies on rudimentary technologies using traditional brick kilns. Charcoal is produced under pyrolysis conditions: the chemical structure of the biomass is broken down at high temperature in the absence of air. The yield of the pyrolysis products (charcoal, pyroligneous extract, and flue gas) depends on various parameters, including temperature, time, pressure, kiln design, and wood characteristics such as moisture content. This activity is recognized for its inefficiency and high pollution levels, yet it is poorly characterized. It is also widely distributed and a vital economic activity in certain regions of Portugal, playing a relevant role in the management of woody residues. The location of the units determines the biomass used for charcoal production. The Portalegre district, in the Alto Alentejo region (Portugal), is a good example: an essentially rural district with a predominant farming, agricultural, and forestry profile, and with a significant charcoal production activity. In this district, a recent inventory identifies almost 50 charcoal production units, equivalent to more than 450 kilns, of which 80% appear to be in operation. A field campaign was designed to determine the composition of the emissions released during a charcoal production cycle. A total of 30 samples of particulate matter and 20 gas samples in Tedlar bags were collected. Particulate and gas samplings were performed in parallel, two in the morning and two in the afternoon, alternating the inlet heads (PM₁₀ and PM₂.₅) in the particulate sampler. The gas and particulate samples were collected in the plume, as close as possible to the chimney emission point. The biomass (dry basis) used in the carbonization process was a mixture of cork oak (77 wt.%), holm oak (7 wt.%), stumps (11 wt.%), and charred wood (5 wt.%) from previous carbonization processes. 
A cylindrical batch kiln (80 m³), 4.5 m in diameter and 5 m in height, was used in this study. The composition of the gases was determined by gas chromatography, while the particulate samples (PM₁₀, PM₂.₅) were subjected, after prior gravimetric determination, to different analytical techniques (thermo-optical transmission, ion chromatography, HPAE-PAD, and GC-MS after solvent extraction) to study their organic and inorganic constituents. The charcoal production cycle presents widely varying operating conditions, which are reflected in the composition of the gases and particles produced and emitted throughout the process. The concentrations of PM₁₀ and PM₂.₅ in the plume ranged between 0.003 and 0.293 g m⁻³ and between 0.004 and 0.292 g m⁻³, respectively. On average, total carbon accounted for 65% of PM₁₀ and 56% of PM₂.₅, inorganic ions for 2.8% and 2.3%, and sugars for 1.27% and 1.21%, respectively. The organic fraction studied so far includes more than 30 aliphatic compounds and 20 PAHs. The emission factors of particulate matter for charcoal production in the traditional kiln were 33 g/kg (wood, dry basis) for PM₁₀ and 27 g/kg (wood, dry basis) for PM₂.₅. The data obtained in this study help to fill the lack of information about the environmental impact of traditional charcoal production in Portugal. Acknowledgment: The authors thank FCT – Portuguese Science Foundation, I.P. and the Ministry of Science, Technology and Higher Education of Portugal for financial support within the scope of the projects CHARCLEAN (PCIF/GVB/0179/2017) and CESAM (UIDP/50017/2020 + UIDB/50017/2020).
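As a back-of-the-envelope illustration of the quantities involved (not the authors' actual calculation, which would account for the full carbon balance of the kiln), a plume concentration follows from a filter's mass gain and the sampled gas volume, and an emission factor then requires an assumed flue-gas volume per kilogram of dry wood. All numbers below are hypothetical.

```python
# Hypothetical filter measurement: mass collected and gas volume sampled.
filter_gain_mg = 2.4        # mass gain on the filter (mg), assumed
sampled_volume_m3 = 0.020   # gas volume drawn through the filter (m^3), assumed

# Plume PM concentration in g/m^3 (compare the reported 0.003-0.293 range).
conc_g_m3 = (filter_gain_mg / 1000.0) / sampled_volume_m3
print(f"PM concentration: {conc_g_m3:.3f} g/m^3")  # 0.120 g/m^3

# Emission factor: concentration x flue-gas volume produced per kg dry wood.
flue_gas_m3_per_kg = 8.0    # assumed specific flue-gas volume (m^3/kg, db)
ef_g_per_kg = conc_g_m3 * flue_gas_m3_per_kg
print(f"Emission factor: {ef_g_per_kg:.2f} g/kg (wood, db)")
```

The assumed flue-gas volume dominates the result, which is why carbon-balance methods that infer the gas volume from the measured CO/CO₂/CH₄ composition are preferred in practice.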

Keywords: brick kilns, charcoal, emission factors, PAHs, total carbon

Procedia PDF Downloads 119
291 An Integrated Framework for Wind-Wave Study in Lakes

Authors: Moien Mojabi, Aurelien Hospital, Daniel Potts, Chris Young, Albert Leung

Abstract:

Wave analysis is an integral part of the hydrotechnical assessment carried out during the permitting and design phases for coastal structures such as marinas. This analysis aims at quantifying: i) the suitability of the coastal structure design against the Small Craft Harbour wave tranquility safety criterion; ii) potential environmental impacts of the structure (e.g., effects on waves, flow, and sediment transport); iii) mooring and dock design; and iv) requirements set by regulatory agencies (e.g., WSA Section 11 applications). While a complex three-dimensional hydrodynamic modelling approach can be applied to large-scale projects, the need for an efficient and reliable wave analysis method suitable for smaller-scale marina projects was identified. As a result, Tetra Tech has developed and applied an integrated analysis framework (hereafter the TT approach), which takes advantage of state-of-the-art numerical models while preserving a level of simplicity that fits smaller-scale projects. The present paper describes the TT approach and highlights the key advantages of using this integrated framework in lake marina projects. The core of the methodology integrates wind, water level, bathymetry, and structure geometry data. To respond to the needs of specific projects, several add-on modules have been added to this core. The main advantages of this method over simplified analytical approaches are: i) accounting for the proper physics of the lake by modelling the entire lake (capturing the real lake geometry) instead of using a simplified fetch approach; ii) providing a more realistic representation of the waves by modelling random waves instead of monochromatic waves; iii) modelling wave-structure interaction (e.g., wave transmission/reflection for floating structures and piles, among others); iv) accounting for wave interaction with the lakebed (e.g., bottom friction, refraction, and breaking); v) providing the inputs for flow and sediment transport assessment at the project site; vi) taking into consideration historical and geographical variations of the wind field; and vii) independence from the scale of the reservoir under study. Overall, in comparison with simplified analytical approaches, this integrated framework provides a more realistic and reliable estimation of wave parameters (and their spatial distribution) in lake marinas, leading to a realistic hydrotechnical assessment accessible to any project size, from the development of a new marina to marina expansion and pile replacement. Tetra Tech has successfully applied this approach in the Okanagan area for many years.
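For context, the simplified fetch approach that the integrated framework improves on can be sketched with the classical deep-water, fetch-limited growth relation gH/U² = 0.0016·(gF/U²)^½, which ties significant wave height to wind speed U and fetch length F alone. This is an illustrative sketch with hypothetical inputs, not part of the TT approach.

```python
import math

def fetch_limited_hs(wind_speed, fetch, g=9.81):
    """Significant wave height (m) from the deep-water fetch-limited
    growth law gH/U^2 = 0.0016 * sqrt(g*F/U^2).
    wind_speed in m/s, fetch in m."""
    return 0.0016 * wind_speed * math.sqrt(fetch / g)

# Hypothetical lake scenario: 15 m/s wind over a 5 km fetch.
hs = fetch_limited_hs(wind_speed=15.0, fetch=5000.0)
print(f"Fetch-limited Hs ~ {hs:.2f} m")
```

The single-number fetch input is exactly what a whole-lake model replaces: real shoreline geometry, random (spectral) waves, and bed interaction all modify the answer, which is the paper's central argument.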

Keywords: wave modelling, wind-wave, extreme value analysis, marina

Procedia PDF Downloads 60
290 Characterization of the MOSkin Dosimeter for Accumulated Dose Assessment in Computed Tomography

Authors: Lenon M. Pereira, Helen J. Khoury, Marcos E. A. Andrade, Dean L. Cutajar, Vinicius S. M. Barros, Anatoly B. Rozenfeld

Abstract:

With the increase of beam widths and the advent of multi-slice and helical scanners, concerns have arisen about the current dose measurement protocols and instrumentation in computed tomography (CT). The current methodology of dose evaluation, based on measuring the integral of a single-slice dose profile with a 100 mm long cylindrical ionization chamber (Ca,100 and CPMMA,100), has been shown to be inadequate for wide beams, as it does not collect enough of the scatter tails to make an accurate measurement. In addition, a long ionization chamber does not offer a good representation of the dose profile when tube current modulation is used. An alternative approach has been suggested: translating smaller detectors through the beam plane and assessing the accumulated dose through the integral of the dose profile, which can be done for any arbitrary length in phantoms or in air. For this purpose, a MOSFET dosimeter of small dosimetric volume was used. One of its recently designed versions, known as the MOSkin, was developed by the Centre for Medical Radiation Physics at the University of Wollongong; it measures the radiation dose at a water-equivalent depth of 0.07 mm, allowing the evaluation of skin dose when placed on the surface, or of internal point doses when placed within a phantom. Thus, the aim of this research was to characterize the response of the MOSkin dosimeter in X-ray CT beams and to evaluate its application for accumulated dose assessment. Initially, tests using an industrial X-ray unit were carried out at the Laboratory of Ionizing Radiation Metrology (LMRI) of the Federal University of Pernambuco, in order to investigate the sensitivity, energy dependence, angular dependence, and reproducibility of the device's dose response for the standard radiation qualities RQT 8, RQT 9 and RQT 10. 
Finally, the MOSkin was used for the accumulated dose evaluation of scans on a Philips Brilliance 6 CT unit, with comparisons made against the CPMMA,100 value assessed with a pencil ionization chamber (PTW Freiburg TW 30009). Both dosimeters were placed in the center of a PMMA head phantom (16 cm diameter) and exposed in axial mode with a collimation of 9 mm, 250 mAs and 120 kV. The results showed that the MOSkin response was linear with dose in the CT range and reproducible (98.52%). The sensitivities for a single MOSkin were 9.208, 7.691 and 6.723 mV/cGy for the RQT 8, RQT 9 and RQT 10 beam qualities, respectively. The energy dependence varied up to a factor of 1.19 among those energies, and the angular dependence was not greater than 7.78% within the angle range from 0 to 90 degrees. The accumulated dose and the CPMMA,100 value were 3.97 and 3.79 cGy, respectively, which were statistically equivalent at the 95% confidence level. The MOSkin was shown to be a good alternative for CT dose profile measurements and more than adequate to provide accumulated dose assessments for CT procedures.
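The translate-and-integrate idea can be sketched numerically. This is an illustrative example, not the authors' software: voltage shifts from a translated detector are converted to dose using the reported RQT 9 sensitivity (7.691 mV/cGy), and the dose profile is integrated with the trapezoidal rule. The profile readings themselves are made up.

```python
SENSITIVITY_RQT9 = 7.691  # mV per cGy, as reported for a single MOSkin

def dose_cgy(delta_v_mv, sensitivity=SENSITIVITY_RQT9):
    """Convert a MOSFET threshold-voltage shift (mV) to dose (cGy)."""
    return delta_v_mv / sensitivity

# Hypothetical voltage shifts sampled every 0.5 cm along the scan axis,
# forming a single-slice dose profile with scatter tails.
shifts_mv = [1.2, 4.8, 19.5, 30.3, 19.9, 5.1, 1.1]
step_cm = 0.5
doses = [dose_cgy(v) for v in shifts_mv]

# Trapezoidal integral of the profile gives a dose-length product (cGy*cm),
# the quantity a 100 mm pencil chamber approximates over its fixed length.
dlp = sum((doses[i] + doses[i + 1]) / 2 * step_cm
          for i in range(len(doses) - 1))
print(f"Dose-length product: {dlp:.2f} cGy*cm")
```

Because the integration length is chosen freely, the scatter tails of wide beams are captured instead of being truncated at the chamber's 100 mm extent.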

Keywords: computed tomography dosimetry, MOSFET, MOSkin, semiconductor dosimetry

Procedia PDF Downloads 289
289 Information Pollution: Exploratory Analysis of Sub-Saharan African Media’s Capabilities to Combat Misinformation and Disinformation

Authors: Muhammed Jamiu Mustapha, Jamiu Folarin, Stephen Obiri Agyei, Rasheed Ademola Adebiyi, Mutiu Iyanda Lasisi

Abstract:

The role of information in societal development and growth cannot be over-emphasized. Managing the flow of information has long been a strategy for building an egalitarian society; the same flow can also throw a society into chaos and anarchy. Information has been adopted as a weapon of war and a veritable instrument of psychological warfare with a variety of uses. That is why some scholars posit that information can be deployed to wreak "mass destruction" or to promote "mass development". When used as a tool of destruction, its effect on society is like an atomic bomb which, once released, pollutes the air and suffocates the people. Technological advancement has further exposed the latent power of information, and many societies seem overwhelmed by its negative effects. While information remains one of the bedrocks of democracy, the information ecosystem across the world currently faces a more difficult battle than ever before due to information pluralism and technological advancement. The more the agents involved try to combat the menace, the more difficult and complex it proves to curb. In a region like Africa, with fragile democracies and the complexities of multiple religions, cultures, and tribes, alongside ongoing unresolved issues, it is important to pay critical attention to information disorder and to find appropriate ways to curb or mitigate its effects. The media, as the middleman in the distribution of information, need to build the capacity and capability to separate the chaff of misinformation and disinformation from the grains of truth. It has been observed that efforts aimed at fighting information pollution have not considered the resilience that media organisations have built against this disorder. 
Apparently, the efforts, resources and technologies deployed for the conception, production and spread of information pollution are far more sophisticated than the approaches used to suppress or even reduce its effects on society. Thus, this study interrogates the phenomenon of information pollution and the capabilities of selected media organisations in Sub-Saharan Africa. In doing so, the following questions are probed: What actions do the media take to curb the menace of information pollution? Which of these actions are working, and how effective are they? And which of the actions are not working, and why? Adopting quantitative and qualitative approaches and anchored on Dynamic Capability Theory, the study aims to further understand the complexities of information pollution, media capabilities and strategic resources for managing misinformation and disinformation in the region. The quantitative approach involves surveys and questionnaires to obtain data from journalists on their understanding of misinformation and disinformation and their capabilities to gate-keep. Case analyses of selected media and content analysis of their strategic resources for managing misinformation and disinformation are adopted, while the qualitative approach involves in-depth interviews to allow a more robust analysis. The study is critical to the fight against information pollution for a number of reasons. First, it is a novel attempt to document the level of media capabilities to fight the phenomenon of information disorder. Second, it will give the region a clear understanding of the capabilities of existing media organisations to combat misinformation and disinformation in the countries that make up the region. Recommendations emanating from the study could be used to initiate, intensify or review existing approaches to combat the menace of information pollution in the region.

Keywords: disinformation, information pollution, misinformation, media capabilities, sub-Saharan Africa

Procedia PDF Downloads 145
288 Predictability of Kiremt Rainfall Variability over the Northern Highlands of Ethiopia on Dekadal and Monthly Time Scales Using Global Sea Surface Temperature

Authors: Kibrom Hadush

Abstract:

Countries like Ethiopia, whose economies depend mainly on rain-fed agriculture, are highly vulnerable to climate variability and weather extremes. Sub-seasonal (monthly) and dekadal forecasts are hence critical for crop production and water resource management. This paper therefore studies the predictability and variability of Kiremt rainfall over the northern half of Ethiopia on monthly and dekadal time scales in association with global sea surface temperature (SST) at different lag times. Trends in rainfall were analyzed on annual, seasonal (Kiremt), monthly, and dekadal (June-September) time scales based on rainfall records of 36 meteorological stations distributed across four homogeneous zones of the northern half of Ethiopia for the period 1992-2017. The results from the progressive Mann-Kendall trend test and Sen's slope method show no significant trend in annual, Kiremt, monthly or dekadal rainfall totals at most of the stations studied. Moreover, rainfall in the study area varies spatially and temporally, with rainfall amounts increasing from the northeastern Rift Valley toward the northwestern highlands. Graphical correlation and a multiple linear regression model are employed to investigate the association between global SSTs and Kiremt rainfall over the homogeneous rainfall zones and to predict monthly and dekadal (June-September) rainfall using SST predictors. The results show that, in general, SST in the equatorial Pacific Ocean is the main source of predictive skill for Kiremt rainfall variability over the northern half of Ethiopia. Regional SSTs in the Atlantic and Indian Oceans also contribute to Kiremt rainfall variability over the study area. 
Moreover, the correlation analysis showed that the decline of monthly and dekadal Kiremt rainfall over most of the homogeneous zones of the study area is associated with the corresponding persistent warming of SST in the eastern and central equatorial Pacific Ocean during the period 1992-2017. It was also found that monthly and dekadal Kiremt rainfall over the northern and northwestern highlands and the northeastern lowlands of Ethiopia is positively correlated with SST in the western equatorial Pacific and the eastern and tropical northern Atlantic Ocean. Furthermore, SSTs in the western equatorial Pacific and Indian Oceans are positively correlated with Kiremt season rainfall in the northeastern highlands. Overall, the prediction models using combined SSTs from various ocean regions (equatorial and tropical) performed reasonably well in predicting monthly and dekadal rainfall (with R² ranging from 30% to 65%) and are recommended for efficient prediction of Kiremt rainfall over the study area, to aid systematic and informed decision-making within the agricultural sector.
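The regression step above has a standard shape: a rainfall series is regressed on a few SST predictor indices by ordinary least squares, and skill is summarized with R². The sketch below is a minimal stand-in, not the study's model; the two predictor indices and all series are synthetic.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 26                                   # e.g., 26 seasons (1992-2017)
sst_pacific = rng.normal(size=n)         # hypothetical equatorial Pacific index
sst_indian = rng.normal(size=n)          # hypothetical Indian Ocean index

# Synthetic rainfall: negatively tied to Pacific warming, as in the abstract.
rain = 120.0 - 15.0 * sst_pacific + 6.0 * sst_indian \
       + rng.normal(scale=5.0, size=n)

# Ordinary least squares with an intercept column.
X = np.column_stack([np.ones(n), sst_pacific, sst_indian])
beta, *_ = np.linalg.lstsq(X, rain, rcond=None)

# Coefficient of determination on the fitted data.
pred = X @ beta
r2 = 1.0 - np.sum((rain - pred) ** 2) / np.sum((rain - rain.mean()) ** 2)
print(f"coefficients: {beta.round(2)}, R^2 = {r2:.2f}")
```

In a forecasting setting the SST predictors would be taken at a lead (lag) time ahead of the rainfall season, and skill would be judged on held-out years rather than on the fitting sample.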

Keywords: dekadal, Kiremt rainfall, monthly, Northern Ethiopia, sea surface temperature

Procedia PDF Downloads 124
287 Baricitinib Lipid-based Nanosystems as a Topical Alternative for Atopic Dermatitis Treatment

Authors: N. Garrós, P. Bustos, N. Beirampour, R. Mohammadi, M. Mallandrich, A.C. Calpena, H. Colom

Abstract:

Atopic dermatitis (AD) is a persistent skin condition characterized by chronic inflammation driven by an autoimmune response. It is a prevalent clinical issue that requires continual treatment to enhance the patient's quality of life. Systemic therapy often involves glucocorticoids or immunosuppressants to manage symptoms. Our objective was to create and assess topical liposomal formulations containing baricitinib (BNB), a reversible inhibitor of Janus-associated kinase (JAK), which is involved in various immune responses. These formulations are intended to address flare-ups and improve treatment outcomes in AD. We prepared three distinct liposomal formulations by combining different amounts of 1-palmitoyl-2-oleoyl-glycero-3-phosphocholine (POPC), cholesterol (CHOL), and ceramide (CER): (i) pure POPC; (ii) POPC with CHOL (8:2, mol/mol); and (iii) POPC with CHOL and CER (3.6:2.4:4.0, mol/mol/mol). We conducted various tests to determine the formulations' skin tolerance, irritancy, and ability to cause erythema and edema on altered skin. We also assessed transepidermal water loss (TEWL) and skin hydration in rabbits to evaluate the tolerance of the formulations. Histological analysis, the HET-CAM test, and the modified Draize test were used in the evaluation. The histological analysis revealed that the POPC and POPC:CHOL liposomes caused no damage to tissue structures. The HET-CAM test showed no irritation by any of the three liposomes, and the modified Draize test gave good scores for erythema and edema. The POPC liposome effectively counteracted the impact of xylol on the skin, and no erythema or edema was observed during the study. TEWL values were constant for all the liposomes, with values similar to the negative control (within the range of 8-15 g/h·m², a healthy value for rabbits), whereas the positive control showed a significant increase. 
Skin hydration values were constant and followed the trend of the negative control, while the positive control showed a steady increase during the tolerance study. In conclusion, the developed formulations containing BNB exhibited no harmful or irritating effects: they showed no irritant potential in the HET-CAM test, and the POPC and POPC:CHOL liposomes caused no structural alteration according to the histological analysis. These positive findings suggest that further research should evaluate the efficacy of these liposomal formulations in animal models of the disease, including mutant animals. Furthermore, before proceeding to clinical trials, biochemical investigations should be conducted to better understand the mechanisms of action involved.

Keywords: baricitinib, HET-CAM test, histological study, JAK inhibitor, liposomes, modified draize test

Procedia PDF Downloads 71
286 Development and Total Error Concept Validation of Common Analytical Method for Quantification of All Residual Solvents Present in Amino Acids by Gas Chromatography-Head Space

Authors: A. Ramachandra Reddy, V. Murugan, Prema Kumari

Abstract:

Residual solvents in pharmaceutical samples are monitored using gas chromatography with headspace (GC-HS). Based on current regulatory and compendial requirements, measuring residual solvents is mandatory for all release testing of active pharmaceutical ingredients (API). Generally, isopropyl alcohol is used as the residual solvent in proline and tryptophan; methanol in cysteine monohydrate hydrochloride, glycine, methionine and serine; ethanol in glycine and lysine monohydrate; and acetic acid in methionine. In order to have a single method for determining these residual solvents (isopropyl alcohol, ethanol, methanol and acetic acid) in all these 7 amino acids, a sensitive and simple method was developed using the gas chromatography headspace technique with flame ionization detection. During development, poor reproducibility, retention time variation and poor peak shape of the acetic acid peak were identified, due to the reaction of acetic acid with the stationary phase (cyanopropyl dimethyl polysiloxane) of the column and dissociation of acetic acid in water (if used as diluent) during the temperature gradient. Therefore, dimethyl sulfoxide was used as diluent to avoid these issues, whereas most published methods for acetic acid quantification by GC-HS rely on a derivatisation technique to protect acetic acid. As per the compendia, a risk-based approach was selected as appropriate to determine the degree and extent of the validation process to assure the fitness of the procedure. Therefore, the total error concept was selected to validate the analytical procedure. An accuracy profile of ±40% was selected for the lower level (quantitation limit level) and ±30% for the other levels, with a 95% confidence interval (risk profile 5%). The method was developed using a DB-WAXetr column (Agilent; length 30 m, internal diameter 530 µm, film thickness 2.0 µm). Helium was used as carrier gas at a constant flow of 6.0 mL/min in constant make-up mode.
The present method is simple, rapid, and accurate, and is suitable for rapid analysis of isopropyl alcohol, ethanol, methanol and acetic acid in amino acids. The range of the method is 50 ppm to 200 ppm for isopropyl alcohol, 50 ppm to 3000 ppm for ethanol, 50 ppm to 400 ppm for methanol, and 100 ppm to 400 ppm for acetic acid, which covers the specification limits given in the European Pharmacopoeia. The accuracy profile and risk profile generated as part of validation were found to be satisfactory. Therefore, this method can be used for testing of residual solvents in amino acid drug substances.
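The total-error acceptance criterion described above can be illustrated with a small sketch. The recovery data here are hypothetical, and the check approximates the accuracy profile with |bias| + 2·SD rather than the β-expectation tolerance interval a full validation would compute.

```python
import statistics

# hypothetical recovery measurements (%) at one intermediate concentration level
recoveries = [98.2, 101.5, 99.8, 97.6, 102.1, 100.4]

bias = statistics.mean(recoveries) - 100.0  # relative bias, %
sd = statistics.stdev(recoveries)           # spread standing in for intermediate precision, %

# crude total-error check against the +/-30% acceptance limit used for
# intermediate levels (coverage factor 2, roughly 95% of future results)
total_error = abs(bias) + 2.0 * sd
print(total_error <= 30.0)  # -> True
```

In a real accuracy profile, this check would be repeated at each concentration level, with the ±40% limit applied only at the quantitation limit.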

Keywords: amino acid, head space, gas chromatography, total error

Procedia PDF Downloads 129
285 Intrigues of Brand Activism versus Brand Antagonism in Rival Online Football Brand Communities: The Case of the Top Two Premier Football Clubs in Ghana

Authors: Joshua Doe, George Amoako

Abstract:

Purpose: In an increasingly digital world, the realm of sports fandom has extended its borders, creating a vibrant ecosystem of online communities centered around football clubs. This study ventures into the intricate interplay of motivations that drive football fans to respond to brand activism and its profound implications for brand antagonism and engagement among two of Ghana's most revered premier football clubs. Methods: A sample of 459 fervent fans from these two rival clubs was engaged through self-administered questionnaires distributed via social media and online platforms. Data were analysed using partial least squares structural equation modelling (PLS-SEM). Findings: The tapestry of motivations that weave through these online football communities is as diverse as the fans themselves. Fans are propelled by a spectrum of incentives: they seek education, yearn for information, revel in entertainment, embrace socialization, and fortify their self-esteem through their interactions within these digital spaces. Yet, it is the nuanced distinction in these motivations that shapes the trajectory of brand antagonism and engagement. Surprisingly, the study reveals a remarkable pattern. Football fans, despite their fierce rivalries, do not engage in brand antagonism based on educational pursuits, information-seeking endeavors, or socialization. Instead, it is motivations rooted in entertainment and self-esteem that serve as the fertile grounds for brand antagonism. Paradoxically, it is these very motivations, coupled with the desire for socialization, that nurture brand engagement, manifesting as active support and advocacy for their chosen club brand. Originality: Our research charts new waters by extending the boundaries of existing theories in the field. The Technology Acceptance Model, the Uses and Gratifications Theory, and Social Identity Theory all find new dimensions within the context of online brand community engagement.
This not only deepens our understanding of the multifaceted world of online football fandom but also invites us to explore the implications these insights carry within the digital realm. Contribution to Practice: For marketers, our findings offer a treasure trove of actionable insights. They beckon the development of targeted content strategies that resonate with fan motivations. The implementation of brand advocacy programs, fostering opportunities for socialization, and the effective management of brand antagonism emerge as pivotal strategies. Furthermore, the utilization of data-driven insights is poised to refine consumer engagement strategies and strengthen brand affinity. Future Studies: For future studies, we advocate for longitudinal, cross-cultural, and qualitative studies that could shed further light on this topic. Comparative analyses across different types of online brand communities, an exploration of the role of brand community leaders, and inquiries into the factors that contribute to brand community dissolution all beckon the research community. Furthermore, understanding motivation-specific antagonistic behaviors and the intricate relationship between information-seeking and engagement present exciting avenues for further exploration. This study unfurls a vibrant tapestry of fan motivations, brand activism, and rivalry within online football communities. It extends a hand to scholars and marketers alike, inviting them to embark on a journey through this captivating digital realm, where passion, rivalry, and engagement harmonize to shape the world of sports fandom as we know it.

Keywords: online brand engagement, football fans, brand antagonism, motivations

Procedia PDF Downloads 44
284 Direct Current Grids in Urban Planning for More Sustainable Urban Energy and Mobility

Authors: B. Casper

Abstract:

The energy transition towards renewable energies and drastically reduced carbon dioxide emissions in Germany drives multiple sectors into a transformation process. Photovoltaic and onshore wind power feed predominantly into the low- and medium-voltage grids, which are not laid out to accommodate increasing feed-in at these voltage levels. Electric mobility is currently in the run-up phase in Germany and still lacks a significant number of charging stations. The additional power demand from e-mobility cannot be supplied by the existing electric grids in most cases. The future heating and cooling demand of commercial and residential buildings is increasingly met by heat pumps. Yet the most important part of the energy transition is the storage of surplus energy generated by photovoltaic and wind power sources. Water electrolysis is one way to store surplus energy, known as power-to-gas. With vehicle-to-grid technology, the upcoming fleet of electric cars could be used as energy storage to stabilize the grid. All these processes use direct current (DC). The demand for bi-directional flow and higher efficiency in future grids can be met by using DC. The Flexible Electrical Networks (FEN) research campus at RWTH Aachen investigates, in an interdisciplinary setting, the advantages, opportunities, and limitations of DC grids. This paper investigates the impact of DC grids as a technological innovation on the urban form and urban life. Applying explorative scenario development, analysis of mapped open data sources on grid networks, and research-by-design as a conceptual design method, possible starting points for a transformation to DC medium-voltage grids could be found. Several fields of action have emerged in which DC technology could become a catalyst for future urban development: the energy transition in urban areas, e-mobility, and the transformation of the network infrastructure.
The investigation shows a significant potential to increase renewable energy production within cities with DC grids. The charging infrastructure for electric vehicles will predominantly use DC in the future, because fast and ultra-fast charging can only be achieved with DC. Our research shows that e-mobility, combined with autonomous driving, has the potential to change urban space and urban logistics fundamentally. Furthermore, there are possible win-win-win solutions for the municipality, the grid operator and the inhabitants: replacing overhead transmission lines with underground DC cables to open up spaces in contested urban areas is a positive example of how the energy transition can contribute to a more sustainable urban structure. The outlook makes clear that target grid planning and urban planning will increasingly need to be synchronized.

Keywords: direct current, e-mobility, energy transition, grid planning, renewable energy, urban planning

Procedia PDF Downloads 103
283 High Capacity SnO₂/Graphene Composite Anode Materials for Li-Ion Batteries

Authors: Hilal Köse, Şeyma Dombaycıoğlu, Ali Osman Aydın, Hatem Akbulut

Abstract:

Rechargeable lithium-ion batteries (LIBs) have become promising power sources for a wide range of applications, such as mobile communication devices, portable electronic devices and electric/hybrid vehicles, due to their long cycle life, high voltage and high energy density. Graphite has been widely used as anode material owing to its good electronic transport properties, large surface area, and high electrocatalytic activity, although its limited specific capacity (372 mAh g⁻¹) cannot fulfil the increasing demand for lithium-ion batteries with higher energy density. To settle this problem, many studies have investigated new electrode materials, and metal oxide/graphene composites stand out as promising materials for lithium-ion batteries because their specific capacities are much higher than that of graphite. Among them, SnO₂, an n-type and wide band gap semiconductor, has attracted much attention as an anode material for new-generation lithium-ion batteries with its high theoretical capacity (790 mAh g⁻¹). However, it suffers from large volume changes and agglomeration associated with the Li-ion insertion and extraction processes, which bring about failure and loss of electrical contact of the anode. In addition, there is also a large irreversible capacity during the first cycle due to the formation of an amorphous Li₂O matrix. In this work, we studied the synthesis and characterization of SnO₂-graphene nanocomposites and investigated the capacity of this free-standing anode material. For this aim, graphite oxide was first obtained from graphite powder using the Hummers method. To prepare the nanocomposites as a free-standing anode, graphite oxide particles were ultrasonicated in distilled water with SnO₂ nanoparticles (1:1, w/w). After vacuum filtration, the GO-SnO₂ paper was peeled off from the PVDF membrane to obtain a flexible, free-standing paper.
Then, the GO structure was reduced in hydrazine solution. The produced SnO₂-graphene nanocomposites were characterized by scanning electron microscopy (SEM), energy-dispersive X-ray spectroscopy (EDS), and X-ray diffraction (XRD) analyses. CR2016 cells were assembled in a glove box (MBraun-Labstar). The cells were charged and discharged at 25°C between fixed voltage limits (2.5 V to 0.2 V) at a constant current density on a BST8-MA MTI model battery tester at a 0.2C charge-discharge rate. Cyclic voltammetry (CV) was performed at a scan rate of 0.1 mV s⁻¹, and electrochemical impedance spectroscopy (EIS) measurements were carried out on a Gamry instrument by applying a sine wave of 10 mV amplitude over a frequency range of 1000 kHz to 0.01 Hz.
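The theoretical capacities quoted above follow from Faraday's law; a quick back-of-the-envelope check (the 4.4 Li per Sn alloying limit and the molar masses are standard textbook values, not taken from this abstract):

```python
F = 96485.3                 # Faraday constant, C/mol
C_TO_MAH = 1000.0 / 3600.0  # coulombs-per-gram to mAh/g

def theoretical_capacity(n_li, molar_mass):
    """Theoretical specific capacity (mAh/g) for n_li lithium ions per formula unit."""
    return n_li * F / molar_mass * C_TO_MAH

graphite = theoretical_capacity(1.0, 72.06)  # LiC6: one Li per six carbons
sno2 = theoretical_capacity(4.4, 150.71)     # Li4.4Sn alloying limit for SnO2
print(round(graphite), round(sno2))  # -> 372 782
```

The alloying-only figure lands near the ~790 mAh g⁻¹ quoted for SnO₂; counting the initial conversion reaction as well would roughly double it, which is also why the first-cycle irreversible capacity from Li₂O formation is so large.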

Keywords: SnO₂-graphene, nanocomposite, anode, Li-ion battery

Procedia PDF Downloads 211
282 Ensemble Machine Learning Approach for Estimating Missing Data from CO₂ Time Series

Authors: Atbin Mahabbati, Jason Beringer, Matthias Leopold

Abstract:

To address the global challenges of climate and environmental change, there is a need to quantify and reduce uncertainties in environmental data, including observations of carbon, water, and energy. Global eddy covariance flux tower networks (FLUXNET) and their regional counterparts (i.e., OzFlux, AmeriFlux, ChinaFLUX, etc.) were established in the late 1990s and early 2000s to address this demand. Despite the capability of eddy covariance in validating process modelling analyses, field surveys and remote sensing assessments, there are some serious concerns regarding the challenges associated with the technique, e.g. data gaps and uncertainties. To address these concerns, this research has developed an ensemble model to fill the data gaps of CO₂ flux, avoiding the limitations of a single algorithm and therefore providing lower error and reduced uncertainty in the gap-filling process. In this study, the data of five towers in the OzFlux Network (Alice Springs Mulga, Calperum, Gingin, Howard Springs and Tumbarumba) during 2013 were used to develop an ensemble machine learning model, using five feedforward neural networks (FFNN) with different structures combined with an eXtreme Gradient Boosting (XGB) algorithm. The former, the FFNNs, provided the primary estimations in the first layer, while the latter, XGB, used the outputs of the first layer as its input to provide the final estimations of CO₂ flux. The introduced model showed slight superiority over each single FFNN and over XGB used individually, with overall RMSEs of 2.64, 2.91, and 3.54 g C m⁻² yr⁻¹, respectively (3.54 provided by the best FFNN). The most significant improvement occurred in the estimation of extreme diurnal values (during midday and sunrise), as well as nocturnal estimations, which are generally considered among the most challenging parts of CO₂ flux gap-filling.
The towers, as well as seasonality, showed different levels of sensitivity to the improvements provided by the ensemble model. For instance, Tumbarumba showed more sensitivity than Calperum, where the differences between the ensemble model on the one hand and the FFNNs and XGB on the other were the smallest of all five sites. Besides, the performance difference between the ensemble model and its components used individually was more pronounced during the warm season (Jan, Feb, Mar, Oct, Nov, and Dec) than during the cold season (Apr, May, Jun, Jul, Aug, and Sep), due to the higher amount of photosynthesis, which led to a larger range of CO₂ exchange. In conclusion, the introduced ensemble model slightly improved the accuracy of CO₂ flux gap-filling and the robustness of the model. Therefore, ensemble machine learning models can potentially improve data estimation and regression outcomes when a single algorithm appears to leave no more room for improvement.
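The two-layer architecture described above can be sketched as follows. This is a minimal stand-in with synthetic data: scikit-learn's MLPRegressor plays the FFNNs and GradientBoostingRegressor substitutes for XGBoost, and for brevity the meta-learner is fitted on in-sample first-layer predictions, whereas a careful implementation would use out-of-fold predictions to avoid leakage.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(0)
# synthetic meteorological drivers and a CO2-flux-like target
X = rng.normal(size=(400, 3))
y = 2.0 * X[:, 0] + np.sin(X[:, 1]) + rng.normal(scale=0.1, size=400)
X_train, X_test, y_train, y_test = X[:300], X[300:], y[:300], y[300:]

# first layer: five FFNNs with different hidden structures
structures = [(8,), (16,), (8, 8), (16, 8), (32,)]
ffnns = [MLPRegressor(hidden_layer_sizes=s, max_iter=3000, random_state=i)
         .fit(X_train, y_train) for i, s in enumerate(structures)]

# second layer: boosting model trained on the first-layer outputs
meta_train = np.column_stack([m.predict(X_train) for m in ffnns])
meta_test = np.column_stack([m.predict(X_test) for m in ffnns])
booster = GradientBoostingRegressor(random_state=0).fit(meta_train, y_train)

rmse = float(np.sqrt(np.mean((booster.predict(meta_test) - y_test) ** 2)))
print(f"ensemble RMSE: {rmse:.3f}")
```

The same wiring applies when XGBoost is available: the five FFNN prediction columns simply become the feature matrix of an `xgboost.XGBRegressor`.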

Keywords: carbon flux, Eddy covariance, extreme gradient boosting, gap-filling comparison, hybrid model, OzFlux network

Procedia PDF Downloads 115
281 From Talk to Action - Tackling Africa’s Pollution and Climate Change Problem

Authors: Ngabirano Levis

Abstract:

One of Africa’s major environmental challenges remains air pollution. In 2017, UNICEF estimated that over 400,000 children in Africa died as a result of indoor pollution, while 350 million children remain exposed to the risks of indoor pollution due to the use of biomass and the burning of wood for cooking. Indeed, over time the major causes of mortality across Africa have been shifting from unsafe water, poor sanitation, and malnutrition to ambient and household indoor pollution, and greenhouse gas (GHG) emissions remain a key factor in this. In addition, studies by the OECD estimated that the economic cost of premature deaths due to Ambient Particulate Matter Pollution (APMP) and Household Air Pollution across Africa in 2013 was about 215 billion and 232 billion US dollars, respectively. This is not only a huge cost for a continent where over 41% of the Sub-Saharan population lives on less than 1.90 US dollars a day, but it also makes the people extremely vulnerable to the negative effects of climate change and environmental degradation. Such impacts have led to extended droughts, flooding, health complications, and reduced crop yields, hence food insecurity. Climate change therefore poses a threat to global targets like poverty reduction, health, and famine prevention. Despite efforts towards mitigation, contributors to air pollution such as carbon dioxide emissions are on a generally upward trajectory across Africa. In Egypt, for instance, emission levels had increased by over 141% in 2010 from the 1990 baseline. Efforts such as climate change adaptation and mitigation financing have also hit obstacles on the continent. The international community and developed nations stress that Africa still faces challenges of limited human, institutional and financial systems capable of attracting climate funding from these developed economies.
By using the qualitative multi-case study method, supplemented by interviews of key actors and comprehensive textual analysis of relevant literature, this paper dissects the key emission and air pollutant sources and their impact on the well-being of the African people, and puts forward suggestions as well as remedial mechanisms for these challenges. The findings reveal that whereas climate change mitigation plans appear comprehensive and good on paper for many African countries like Uganda, lingering political interference, limited research-guided planning, lack of population engagement, irrational resource allocation, and limited system and personnel capacity have largely impeded the realization of the set targets. Recommendations are put forward to address the above climate change impacts that threaten the food security, health, and livelihoods of the people on the continent.

Keywords: Africa, air pollution, climate change, mitigation, emissions, effective planning, institutional strengthening

Procedia PDF Downloads 57
280 Pre-Cooling Strategies for the Refueling of Hydrogen Cylinders in Vehicular Transport

Authors: C. Hall, J. Ramos, V. Ramasamy

Abstract:

Hydrocarbon-based fuel vehicles are a major contributor to air pollution due to the harmful emissions they produce, leading to a demand for cleaner fuel types. A leader in this pursuit is hydrogen: its use in vehicles produces zero harmful emissions, with water as the only by-product. To compete with the performance of conventional vehicles, hydrogen gas must be stored on board in cylinders at high pressures (35–70 MPa) and refueled within a short duration (approximately 3 minutes). However, the fast filling of hydrogen cylinders causes a significant rise in temperature due to the combination of the negative Joule-Thomson effect and the compression of the gas. This can lead to structural failure, and therefore a maximum allowable internal temperature of 85°C has been imposed by the International Organization for Standardization. The technological solution to the rapid temperature rise during refueling is to decrease the temperature of the gas entering the cylinder. Pre-cooling of the gas uses a heat exchanger and requires energy for its operation. Thus, it is imperative to determine the least energy input required to lower the gas temperature, for cost savings. A validated universal thermodynamic model is used to identify an energy-efficient pre-cooling strategy. The model requires negligible computational time and is applied to previously validated experimental cases to optimize pre-cooling requirements. The pre-cooling characteristics include its location within the refueling timeline and its duration. A constant pressure-ramp rate is imposed to eliminate the effects of rapid changes in mass flow rate. A pre-cooled gas temperature of -40°C is applied, which is the lowest allowable temperature. The heat exchanger is assumed to be ideal, with no energy losses. The refueling of the cylinders is modeled with the pre-cooling split into ten percent time intervals.
Furthermore, varying burst durations are applied in both the early and late stages of the refueling procedure. The model shows that pre-cooling in the later stages of the refueling process is more energy-efficient than early pre-cooling. In addition, the efficiency of pre-cooling towards the end of the refueling process is independent of the pressure profile at the inlet. This leads to the hypothesis that pre-cooled gas should be applied as late as possible in the refueling timeline and at very low temperatures. The model showed a 31% reduction in energy demand, whilst achieving the same final gas temperature, for a refueling scenario in which pre-cooling was applied towards the end of the process. Identifying the most energy-efficient refueling approaches whilst adhering to the safety guidelines is imperative to reducing the operating cost of hydrogen refueling stations. Heat exchangers are energy-intensive, and thus reducing the energy requirement would lead to cost reduction. This investigation shows that pre-cooling should be applied as late as possible and for short durations.
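For intuition on why shorter, later pre-cooling saves energy, note that the ideal heat-exchanger duty scales with the mass of gas that is chilled. A minimal bookkeeping sketch (all numbers are assumptions for illustration, not values from the study, and the resulting cylinder temperature would still have to be checked with the thermodynamic model):

```python
# assumed illustrative values
CP_H2 = 14.3e3     # J/(kg K), approx. specific heat of hydrogen near ambient
M_FILL = 5.0       # kg of hydrogen delivered over the whole fill
T_SUPPLY = 25.0    # degC, supply gas temperature
T_PRECOOL = -40.0  # degC, lowest allowable pre-cooled temperature

def precool_duty(mass_fraction):
    """Ideal heat-exchanger duty (J) to chill the given fraction of the fill mass."""
    return mass_fraction * M_FILL * CP_H2 * (T_SUPPLY - T_PRECOOL)

full_fill = precool_duty(1.0)  # pre-cool the entire fill
late_only = precool_duty(0.3)  # pre-cool only the final 30% of the mass
print(f"energy saving: {1 - late_only / full_fill:.0%}")  # -> energy saving: 70%
```

The sketch only captures the pre-cooler side of the trade-off; the study's contribution is showing that such late, partial pre-cooling can still keep the cylinder below the 85°C limit.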

Keywords: cylinder, hydrogen, pre-cooling, refueling, thermodynamic model

Procedia PDF Downloads 79
279 Carboxyfullerene-Modified Titanium Dioxide Nanoparticles in Singlet Oxygen and Hydroxyl Radicals Scavenging Activity

Authors: Kai-Cheng Yang, Yen-Ling Chen, Er-Chieh Cho, Kuen-Chan Lee

Abstract:

Titanium dioxide nanomaterials offer superior protection for human skin against the full spectrum of ultraviolet light. However, some literature reviews indicate that they might be associated with adverse effects, such as cytotoxicity or the generation of reactive oxygen species (ROS), due to their nanoscale size. The surface of fullerene is covered with π electrons constituting aromatic structures, which can effectively scavenge large amounts of radicals. Unfortunately, poor solubility in water, severe aggregation, and toxicity in biological applications when dispersed in solvent have limited the use of fullerenes. Carboxyfullerene has served as a radical scavenger for several years. Some reports indicate that carboxyfullerene not only decreases the concentration of free radicals in the environment but also prevents cell loss or apoptosis under UV irradiation. The aim of this study is to decorate fullerene C₇₀-carboxylic acid (C70-COOH) on the surface of titanium dioxide nanoparticles (P25) for the purpose of scavenging ROS during irradiation. The modified material is prepared through the esterification of C70-COOH with P25 (P25/C70-COOH). The binding edge and structure are studied using transmission electron microscopy (TEM) and Fourier transform infrared (FTIR) spectroscopy. The diameter of P25 is about 30 nm, and C70-COOH is found to be conjugated on the edge of P25 in an aggregated morphology with a size of ca. 100 nm. In the next step, FTIR was used to confirm the binding structure between P25 and C70-COOH. Two new peaks appear at 1427 and 1720 cm⁻¹ for P25/C70-COOH, resulting from the C–C stretch and C=O stretch formed during esterification with dilute sulfuric acid. The IR results further confirm the chemically bonded interaction between C70-COOH and P25.
To provide evidence of the radical-scavenging ability of P25/C70-COOH, we chose pyridoxine (vitamin B6) and terephthalic acid (TA) to react with singlet oxygen and hydroxyl radicals, respectively. These probes allow the radical-scavenging behaviour to be observed by detecting the intensity of ultraviolet absorption or fluorescence emission. UV spectra were measured using different concentrations of C70-COOH-modified P25 with 1 mM pyridoxine under UV irradiation for various durations. The results revealed that the concentration of pyridoxine was higher after three hours when combined with P25/C70-COOH, as compared with the control (P25 only). This indicates that fewer radicals could react with pyridoxine because of their absorption by P25/C70-COOH. Fluorescence spectra were recorded by measuring P25/C70-COOH with 1 mM terephthalic acid under UV irradiation for various durations. The fluorescence intensity of TAOH decreased within ten minutes when combined with P25/C70-COOH. However, the fluorescence intensity increased after thirty minutes, which could be attributed to the saturation of C70-COOH in the absorption of radicals. Overall, the results showed that the modified P25/C70-COOH can reduce radicals in the environment. Therefore, we expect P25/C70-COOH to be a potential antioxidant material.

Keywords: titanium dioxide, fullerene, radical scavenging activity, antioxidant

Procedia PDF Downloads 386
278 Life Cycle Assessment of a Parabolic Solar Cooker

Authors: Bastien Sanglard, Lou Magnat, Ligia Barna, Julian Carrey, Sébastien Lachaize

Abstract:

Cooking is a primary need for humans, and several techniques are used around the globe based on different sources of energy: electricity, solid fuel (wood, coal...), fuel or liquefied petroleum gas. However, all of them lead to direct or indirect greenhouse gas emissions and sometimes to health damage in households. Concentrated solar power therefore represents a great option to lower these damages thanks to a cleaner use phase. Nevertheless, the construction phase of a solar cooker still requires primary energy and materials, which leads to environmental impacts. The aim of this work is to analyse the ecological impacts of a commercial aluminium parabola and to compare it with other means of cooking, taking the boiling of 2 litres of water three times a day during 40 years as the functional unit. Life cycle assessment was performed using the software Umberto and the EcoInvent database. Calculations were realized over more than 13 criteria using two methods: the Intergovernmental Panel on Climate Change (IPCC) method and the ReCiPe method. For the reflector itself, different aluminium provenances were compared, as well as the use of recycled aluminium. For the structure, aluminium was compared to iron (primary and recycled) and wood. Results show that the climate impact of the studied parabola was 0.0353 kgCO2eq/kWh when built with Chinese aluminium and can be reduced by a factor of 4 using aluminium from Canada. The assessment also showed that using 32% recycled aluminium would reduce the impact by factors of 1.33 and 1.43 compared to primary Canadian aluminium and primary Chinese aluminium, respectively. The exclusive use of recycled aluminium lowers the impact by a factor of 17. Besides, the use of iron (recycled or primary) or wood for the structure supporting the reflector significantly lowers the impact.
The impact categories of the ReCiPe method show that the parabola made from Chinese aluminium has the heaviest impact (except for metal resource depletion) compared to aluminium from Canada, recycled aluminium or iron. The impact of solar cooking was then compared to gas stove and induction cooking. The gas stove model was a cast iron tripod supporting the cooking pot, and the induction model was likewise a single-spot plate. Results show the parabolic solar cooker has the lowest ecological impact over the 13 criteria of the ReCiPe method and the lowest global warming potential compared to the two other technologies. The climate impact of gas cooking is 0.628 kgCO2eq/kWh when used with natural gas and 0.723 kgCO2eq/kWh when used with bottled gas; in each case, the main part of the emissions comes from gas burning. Induction cooking has a global warming potential of 0.12 kgCO2eq/kWh with the electricity mix of France, 96.3% of the impact being due to electricity production. Therefore, the electricity mix is a key factor for this impact: for instance, with the electricity mixes of Germany and Poland, the impacts are 0.81 kgCO2eq/kWh and 1.39 kgCO2eq/kWh, respectively. The parabolic solar cooker thus has a real ecological advantage over both the gas stove and the induction plate.
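Scaled to the functional unit, the per-kWh figures above translate into lifetime emissions. A rough sketch (the water properties, a 20°C starting temperature, and lossless heating are assumptions; the per-kWh factors are used as quoted in the abstract):

```python
CP_WATER = 4186.0  # J/(kg K)
# ideal energy to bring 2 L of water from 20 to 100 degC, converted to kWh
energy_per_boil = 2.0 * CP_WATER * (100.0 - 20.0) / 3.6e6
boils = 3 * 365 * 40  # functional unit: 3 boils per day for 40 years
lifetime_kwh = energy_per_boil * boils

impacts = {  # kgCO2eq/kWh, quoted in the abstract
    "parabola (Chinese Al)": 0.0353,
    "induction (France mix)": 0.12,
    "gas (natural gas)": 0.628,
    "induction (Poland mix)": 1.39,
}
for name, factor in impacts.items():
    print(f"{name}: {factor * lifetime_kwh:.0f} kgCO2eq over 40 years")
```

Even ignoring stove efficiencies, the ordering matches the abstract's conclusion: the parabola's embodied emissions are amortized over roughly 8,100 kWh of ideal cooking energy across the functional unit.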

Keywords: life cycle assessment, solar concentration, cooking, sustainability

Procedia PDF Downloads 153
277 Investigation of Residual Stress Relief by in-situ Rolling Deposited Bead in Directed Laser Deposition

Authors: Ravi Raj, Louis Chiu, Deepak Marla, Aijun Huang

Abstract:

Hybridization of the directed laser deposition (DLD) process using an in-situ micro-roller to impart a vertical compressive load on the deposited bead at elevated temperatures can relieve the tensile residual stresses incurred in the process. To investigate this stress relief mechanism and its relationship with the in-situ rolling parameters, a fully coupled dynamic thermo-mechanical model is presented in this study. A single-bead deposition of Ti-6Al-4V alloy with an in-situ mild steel roller, moving at a constant speed with a fixed nominal bead reduction, is simulated using the explicit solver of the finite element software Abaqus. The thermal model includes laser heating during the deposition process and the heat transfer between the roller and the deposited bead. The laser heating is modeled as a moving heat source with a Gaussian distribution, applied along the pre-formed bead's surface using the VDFLUX Fortran subroutine. The bead's cross-section is assumed to be semi-elliptical. The interfacial heat transfer between the roller and the bead is considered in the model. In addition, the roller is cooled internally by axial water flow, represented in the model through convective heat transfer. The mechanical model for the bead and substrate includes the effects of rolling along with the deposition process, and their elastoplastic material behavior is captured using J2 plasticity theory. The model accounts for strain, strain rate, and temperature effects on the yield stress based on Johnson-Cook's theory. These aspects of the material behavior are captured in the FE software using the subroutines VUMAT (elastoplastic behavior), VUHARD (yield stress), and VUEXPAN (thermal strain). The roller is assumed to be elastic and does not undergo any plastic deformation. Contact friction at the roller-bead interface is also considered in the model.
Based on the thermal results for the bead, the distance between the roller and the deposition nozzle (roller offset) can be determined to ensure rolling occurs around the beta-transus temperature of the Ti-6Al-4V alloy. The roller offset and the nominal bead height reduction are identified as crucial parameters that influence the residual stresses in the hybrid process. The results obtained from a simulation at a roller offset of 20 mm and a nominal bead height reduction of 7% reveal that the tensile residual stresses decrease by about 52% throughout the deposited bead due to in-situ rolling. This model can be used to optimize the rolling parameters to minimize the residual stresses in the hybrid DLD process with in-situ micro-rolling.
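The Johnson-Cook flow stress referred to above combines strain hardening, strain-rate hardening, and thermal softening. A standalone sketch with literature-typical Ti-6Al-4V constants (illustrative values, not the parameters used in this study):

```python
import math

def johnson_cook_stress(strain, strain_rate, T,
                        A=1098e6, B=1092e6, n=0.93,   # Pa; illustrative Ti-6Al-4V values
                        C=0.014, eps_dot0=1.0, m=1.1,
                        T_ref=298.0, T_melt=1878.0):  # K
    """Flow stress sigma = (A + B*eps^n) * (1 + C*ln(rate/rate0)) * (1 - T*^m)."""
    t_star = (T - T_ref) / (T_melt - T_ref)
    rate_term = 1.0 + C * math.log(max(strain_rate / eps_dot0, 1e-12))
    return (A + B * strain**n) * rate_term * (1.0 - t_star**m)

cold = johnson_cook_stress(0.05, 1.0, 298.0)  # room temperature
hot = johnson_cook_stress(0.05, 1.0, 1268.0)  # near the beta-transus
print(hot < cold)  # thermal softening -> True
```

The drop in flow stress near the beta-transus is precisely why the roller offset is tuned so that rolling happens around that temperature: the bead yields under a much smaller rolling load there.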

Keywords: directed laser deposition, finite element analysis, hybrid in-situ rolling, thermo-mechanical model

Procedia PDF Downloads 91
276 Assessing Prescribed Burn Severity in the Wetlands of the Paraná River, Argentina

Authors: Virginia Venturini, Elisabet Walker, Aylen Carrasco-Millan

Abstract:

Latin America stands at the forefront of climate change impacts, with forecasts projecting accelerated temperature and sea-level rise compared to the global average. These changes are expected to trigger a cascade of effects, including coastal retreat, intensified droughts in some nations, and heightened flood risks in others. In Argentina, wildfires have historically affected forests, but since 2004, wetland fires have emerged as a pressing concern. During 2021, a high-risk scenario developed naturally in the wetlands of the Paraná River in Argentina: very low river water levels and excessive standing dead plant material (fuel) triggered most of the fires recorded in this vast wetland region during 2020-2021. In 2008, fire events devastated nearly 15% of the Paraná Delta, and by late 2021 new fires had burned more than 300,000 ha of these same wetlands. Therefore, the goal of this work is to explore remote sensing tools for monitoring environmental conditions and the severity of prescribed burns in the Paraná River wetlands. Two prescribed burning experiments were carried out in the study area (31°40'05'' S, 60°34'40'' W) during September 2023. The first was conducted on September 13th in a 0.5 ha plot whose dominant vegetation was Echinochloa sp. and Thalia sp., while the second was conducted on September 29th in a 0.7 ha plot adjacent to the first burned parcel; here the dominant species were Echinochloa sp. and Solanum glaucophyllum. Field campaigns were conducted between September 8th and November 8th to assess the severity of the prescribed burns. Flight surveys were performed with a DJI® Inspire II drone equipped with a Sentera® NDVI camera. Burn severity was then quantified from the Sentera imagery together with data from the Sentinel-2 satellite mission.
This involved differencing the NDVI images obtained before and after the burn experiments. The results from both data sources reveal a highly heterogeneous fire impact within each patch. For the first experiment, mean severity values were about 0.16 from the drone NDVI imagery and 0.18 from the Sentinel-2 imagery; for the second, approximately 0.17 (drone) and 0.16 (Sentinel-2). Most pixels thus showed low fire severity, and only a few showed moderate burn severity on the wildfire scale. The undisturbed plots maintained consistent mean NDVI values throughout the experiments. Moreover, the severity assessment of each experiment revealed that the vegetation was not completely dry, despite the extreme drought conditions.
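The differenced-NDVI workflow described above can be sketched as follows. This is a minimal Python illustration: the reflectance values are made up, and the classification thresholds are hypothetical placeholders, not the thresholds used in the study:

```python
def ndvi(nir, red):
    """NDVI from near-infrared and red reflectance (both in [0, 1])."""
    return (nir - red) / (nir + red)

def dndvi(ndvi_pre, ndvi_post):
    """Differenced NDVI: pre-burn minus post-burn; larger means more severe."""
    return ndvi_pre - ndvi_post

def classify(d, low=0.10, moderate=0.27):
    """Coarse severity class from dNDVI (thresholds are illustrative only)."""
    if d < low:
        return "unburned/low"
    if d < moderate:
        return "low"
    return "moderate"

# Two example pixels, before and after the burn (reflectances are assumed)
pre = [ndvi(0.55, 0.11), ndvi(0.60, 0.12)]
post = [ndvi(0.50, 0.14), ndvi(0.45, 0.15)]
severity = [dndvi(a, b) for a, b in zip(pre, post)]
classes = [classify(d) for d in severity]
```

With these example values the per-pixel dNDVI lands near the ~0.16 means reported in the abstract, so both pixels fall in the low-severity class; in practice the same arithmetic is applied per pixel across the drone and Sentinel-2 rasters.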

Keywords: prescribed-burn, severity, NDVI, wetlands

Procedia PDF Downloads 36