Search results for: code blue drill
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 2095

115 The Two Question Challenge: Embedding the Serious Illness Conversation in Acute Care Workflows

Authors: D. M. Lewis, L. Frisby, U. Stead

Abstract:

Objective: Many patients are receiving invasive treatments in acute care or are dying in hospital without having had comprehensive goals of care conversations. Some of these treatments may not align with the patient’s wishes, may be futile, and may cause unnecessary suffering. While many staff may recognize the benefits of engaging patients and families in Serious Illness Conversations (a goals of care framework developed by Ariadne Labs in Boston), few staff feel confident and/or competent in having these conversations in acute care. Another barrier to having these conversations may be a lack of incorporation into the current workflow. An educational exercise, titled the Two Question Challenge, was initiated on four medical units across two Vancouver Coastal Health (VCH) hospitals in an attempt to engage the entire interdisciplinary team in asking patients and families questions about goals of care and to improve the documentation of these expressed wishes and preferences. Methods: Four acute care units across two separate hospitals participated in the Two Question Challenge. On each unit, over the course of two eight-hour shifts, all members of the interdisciplinary team were asked to select at least two questions from a selection of nine goals of care questions. They were asked to pose these questions to a patient or family member throughout their shift and then to document their conversations in a centralized Advance Care Planning/Goals of Care discussion record in the patient’s chart. A visual representation of conversation outcomes was created to demonstrate to staff and patients the breadth of conversations that took place throughout the challenge. Staff and patients were interviewed about their experiences throughout the challenge. Two palliative approach leads remained present on the units throughout the challenge to support, guide, or role model these conversations. Results: Across four acute care medical units, 47 interdisciplinary staff participated in the Two Question Challenge, including nursing, allied health, and a physician. A total of 88 goals of care questions were asked of patients or their families, and 50 newly documented goals of care conversations were charted. Two code statuses were changed as a result of the conversations. Patients voiced an appreciation for these conversations, and staff were able to successfully incorporate these questions into their daily care. Conclusion: The Two Question Challenge proved to be an effective way of having teams explore the goals of care of patients and families in an acute care setting. Staff felt that they gained confidence and competence. Both staff and patients found these conversations to be meaningful and impactful and felt they were notably different from their usual interactions. Documentation of these conversations in a centralized location that is easily accessible to all care providers increased significantly. Application of the Two Question Challenge in non-medical units or other care settings, such as long-term care facilities or community health units, should be explored in the future.

Keywords: advance care planning, goals of care, interdisciplinary, palliative approach, serious illness conversations

Procedia PDF Downloads 101
114 Disclosure on Adherence of the King Code's Audit Committee Guidance: Cluster Analyses to Determine Strengths and Weaknesses

Authors: Philna Coetzee, Clara Msiza

Abstract:

In modern society, audit committees are seen as the custodians of accountability and the conscience of management and the board. But who holds the audit committee accountable for their actions or non-actions, and how do we know what they are supposed to be doing and what they are doing? The purpose of this article is to provide greater insight into the latter part of this problem, namely, to determine what the best practices for audit committees are and what the disclosures reveal about the realities. In countries where governance is well established, the roles and responsibilities of the audit committee are mostly clearly guided by legislation and/or guidance documents, with countries increasingly providing guidance on this topic. With the high cost involved in adhering to governance guidelines, the public (for public organisations) and shareholders (for private organisations) expect to see the value of their ‘investment’. For audit committees, the dividends on the investment should reflect in fewer fraudulent activities, less corruption, higher efficiency and effectiveness, improved social and environmental impact, and increased profits, to name a few. If this is not the case (which is reflected in the number of fraudulent activities in both the private and the public sector), stakeholders have the right to ask: where was the audit committee? Therefore, the objective of this article is to contribute to the body of knowledge by comparing the adherence of audit committees to best practice guidelines, as stipulated in the King Report, across publicly listed companies, national and provincial government departments, state-owned enterprises and local municipalities. After constructs were formed based on the literature, factor analyses were conducted to reduce the number of variables in each construct. Thereafter, cluster analyses were conducted; cluster analysis is an exploratory technique that classifies a set of objects in such a way that more similar objects are grouped together. The SPSS TwoStep Clustering Component was used, being capable of handling both continuous and categorical variables. In the first step, a pre-clustering procedure clusters the objects into small sub-clusters, after which it clusters these sub-clusters into the desired number of clusters. The cluster analyses were conducted for each construct, and the outcome measure, namely the audit opinion as listed in the external audit report, was included. Analysing 228 organisations' information, the results indicate that there is a clear distinction between the four spheres of business included in the analyses, indicating certain strengths and certain weaknesses within each sphere. The results may provide the overseers of audit committees with insight into where a specific sector’s strengths and weaknesses lie. Audit committee chairs will be able to improve the areas where their audit committee is lagging behind. The strengthening of audit committees should result in an improvement in the accountability of boards, leading to less fraud and corruption.
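The abstract describes an SPSS TwoStep workflow: factor reduction per construct, then clustering, with the external audit opinion attached as an outcome measure. As a loose, open-source sketch of that pipeline (not the authors' SPSS procedure), assuming a hypothetical data file whose non-opinion columns are the numeric construct variables, and substituting scikit-learn factor analysis plus agglomerative clustering for the TwoStep component:

```python
# Analogous open-source sketch of the workflow described above (variable reduction,
# then clustering). The SPSS TwoStep procedure itself is not reproduced here, and the
# data file and column names are assumptions.
import pandas as pd
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import FactorAnalysis
from sklearn.cluster import AgglomerativeClustering

df = pd.read_csv("audit_committee_disclosure.csv")   # assumed: 228 organisations, numeric construct variables
X = StandardScaler().fit_transform(df.drop(columns=["audit_opinion"]))

# Step 1: reduce the number of variables in each construct (factor analysis in the abstract).
factors = FactorAnalysis(n_components=5, random_state=0).fit_transform(X)

# Step 2: cluster organisations so that more similar ones fall into the same group.
labels = AgglomerativeClustering(n_clusters=4).fit_predict(factors)
df["cluster"] = labels

# Cross-tabulate clusters against the outcome measure (the external audit opinion).
print(df.groupby("cluster")["audit_opinion"].value_counts())
```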

Keywords: audit committee disclosure, cluster analyses, governance best practices, strengths and weaknesses

Procedia PDF Downloads 167
113 The Recognition of Exclusive Choice of Court Agreements: United Arab Emirates Perspective and the 2005 Hague Convention on Choice of Court Agreements

Authors: Hasan Alrashid

Abstract:

The 2005 Hague Convention seeks to ensure legal certainty and predictability between parties in international business transactions. It harmonises exclusive choice of court agreements at the international level between parties to commercial transactions and governs the recognition and enforcement of judgments resulting from proceedings based on such agreements, in order to promote international trade and investment. Although choice of court agreements are significant in international business transactions, the United Arab Emirates refuses to recognise them under Article 24 of Federal Law No. 11 of 1992 of the Civil Procedure Code. A review of judicial judgments in the United Arab Emirates up to the present day has revealed that several cases have appeared before the courts in different emirates of the United Arab Emirates regarding the recognition of exclusive choice of court agreements. In all the cases, the courts regarded exclusive choice of court agreements as a direct assault on state authority and sovereignty and refused categorically to recognise them, declining to stay proceedings in favour of the foreign chosen court. This has created uncertainty and unpredictability in international business transactions in the United Arab Emirates. In June 2011, the first Gulf Judicial Seminar on Cross-Frontier Legal Cooperation in Civil and Commercial Matters was held in Doha, Qatar. The Permanent Bureau of the Hague Conference attended the conference and invited the states of the Gulf Cooperation Council (GCC), namely the United Arab Emirates, Bahrain, Saudi Arabia, Oman, Qatar and Kuwait, to adopt some of the Hague Conventions, one of which was the Hague Convention on Choice of Court Agreements. One of the recommendations of the conference was that the GCC states should research ‘the benefits of predictability and legal certainty provided by the 2005 Convention on Choice of Court Agreements and its resulting advantages for cross-border trade and investment’ for possible adoption of the Hague Convention. To date, no further step has been taken by any of the GCC states to adopt the Hague Convention, nor have they conducted research on the benefits of predictability and legal certainty in international business transactions. This paper will argue that the approach regarding the recognition of choice of court agreements in the United Arab Emirates can be improved in order to help the parties in international business transactions avoid parallel litigation and ensure legal certainty and predictability. The focus will be on the uncertainty and gaps regarding choice of court agreements in the United Arab Emirates. The Hague Convention on Choice of Court Agreements and the importance of harmonisation of the rules on choice of court agreements at the international level will also be discussed. Finally, the feasibility and desirability of recognising choice of court agreements in the United Arab Emirates legal system by becoming a party to the Hague Convention will be evaluated.

Keywords: choice of court agreements, party autonomy, public authority, sovereignty

Procedia PDF Downloads 246
112 Reliability of Clinical Coding in Accurately Estimating the Actual Prevalence of Adverse Drug Event Admissions

Authors: Nisa Mohan

Abstract:

Adverse drug event (ADE) related hospital admissions are common among older people. The first step in prevention is accurately estimating the prevalence of ADE admissions. Clinical coding is an efficient method to estimate the prevalence of ADE admissions. The objective of the study is to estimate the rate of under-coding of ADE admissions in older people in New Zealand and to explore how clinical coders decide whether or not to code an admission as an ADE. There has not been any research in New Zealand to explore these areas. This study was done using a mixed-methods approach. Two common and serious ADEs in older people, namely bleeding and hypoglycaemia, were selected for the study. In study 1, eight hundred medical records of people aged 65 years and above who were admitted to hospital due to bleeding or hypoglycaemia during 2015–2016 were selected for a quantitative retrospective medical records review. This selection was made to estimate the proportion of ADE-related bleeding and hypoglycaemia admissions that are not coded as ADEs. These files were reviewed and recorded as to whether the admission was caused by an ADE. The hospital discharge data were reviewed to check whether all the ADE admissions identified in the records review were coded as ADEs, and the proportion of under-coding of ADE admissions was estimated. In study 2, thirteen clinical coders were selected for qualitative semi-structured interviews, analysed using a general inductive approach. Participants were selected purposively based on their experience in clinical coding. Interview questions were designed to investigate the reasons for the under-coding of ADE admissions. The records review study showed that 35% (CI 28%–44%) of the ADE-related bleeding admissions and 22% of the ADE-related hypoglycaemia admissions were not coded as ADEs. Although the quality of clinical coding is high across New Zealand, a substantial proportion of ADE admissions were under-coded. This shows that clinical coding might underestimate the actual prevalence of ADE-related hospital admissions in New Zealand. The interviews with the clinical coders suggested that a lack of time to search for information confirming an ADE admission, inadequate communication with clinicians, and coders’ belief that an ADE is a minor matter might be the reasons for the under-coding of ADE admissions. This study urges coding policymakers, auditors, and trainers to engage with the unconscious cognitive biases and short-cuts of clinical coders. These results highlight that further work is needed on interventions to improve the clinical coding of ADE admissions, such as providing education to coders about the importance of ADEs, providing education to clinicians about the importance of clear and confirmed medical record entries, making pharmacist services available to improve the detection and clear documentation of ADE admissions, and including a mandatory field in the discharge summary about external causes of diseases.
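For readers wanting to reproduce the style of estimate quoted above (e.g. 35%, CI 28%–44%), a Wilson score interval for an under-coding proportion can be computed as in this sketch; the counts used are illustrative assumptions, not the study's data:

```python
# Worked sketch: Wilson 95% confidence interval for an under-coding proportion.
# The counts below are illustrative assumptions, not the study's actual data.
from math import sqrt

def wilson_ci(successes: int, n: int, z: float = 1.96) -> tuple[float, float]:
    """Wilson score interval for a binomial proportion successes/n."""
    p = successes / n
    centre = (p + z**2 / (2 * n)) / (1 + z**2 / n)
    half = z * sqrt(p * (1 - p) / n + z**2 / (4 * n**2)) / (1 + z**2 / n)
    return centre - half, centre + half

# e.g. 140 of 400 reviewed bleeding admissions not coded as ADEs (assumed numbers)
low, high = wilson_ci(140, 400)
print(f"under-coding = {140/400:.0%}, 95% CI {low:.0%} to {high:.0%}")
```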

Keywords: adverse drug events, bleeding, clinical coders, clinical coding, hypoglycemia

Procedia PDF Downloads 130
111 Geographic Origin Determination of Greek Rice (Oryza Sativa L.) Using Stable Isotopic Ratio Analysis

Authors: Anna-Akrivi Thomatou, Anastasios Zotos, Eleni C. Mazarakioti, Efthimios Kokkotos, Achilleas Kontogeorgos, Athanasios Ladavos, Angelos Patakas

Abstract:

It is well known that accurate determination of geographic origin to confront mislabeling and adulteration of foods is considered a critical issue worldwide, not only for consumers but also for producers and industries. Among agricultural products, rice (Oryza sativa L.) is the world’s third largest crop, providing food for more than half of the world’s population. Consequently, the quality and safety of rice products play an important role in people’s life and health. Despite the fact that rice is predominantly produced in Asian countries, rice cultivation in Greece is of significant importance, contributing to national agricultural sector income. More than 25,000 acres are cultivated in Greece, while rice exports to other countries constitute 0.5% of the global rice trade. Although several techniques are available to provide information about the geographical origin of rice, little data exist regarding the ability of these methodologies to discriminate rice produced in Greece. Thus, the aim of this study is the comparative evaluation of the stable isotope ratio methodology regarding its discriminative ability for geographical origin determination of rice samples produced in Greece compared to those from three Asian countries, namely Korea, China and the Philippines. In total, eighty (80) samples were collected from selected fields of Central Macedonia (Greece) during October 2021. The light element (C, N, S) isotope ratios were measured using Isotope Ratio Mass Spectrometry (IRMS), and the results obtained were analyzed using chemometric techniques, including principal components analysis (PCA). Results indicated that the δ15N and δ34S values of rice produced in Greece were more markedly influenced by geographical origin than the δ13C values. In particular, δ34S values in rice originating from Greece were -1.98 ± 1.71, compared to 2.10 ± 1.87, 4.41 ± 0.88 and 9.02 ± 0.75 for Korea, China and the Philippines, respectively. Among the stable isotope ratios studied, δ34S seems to be the most appropriate isotope marker to discriminate rice geographic origin between the studied areas. These results imply the significant capability of the stable isotope ratio methodology for effective geographical origin discrimination of rice, providing valuable insight into the control of improper or fraudulent labeling. Acknowledgement: This research has been financed by the Public Investment Programme/General Secretariat for Research and Innovation, under the call “YPOERGO 3, code 2018SE01300000”; project title: ‘Elaboration and implementation of methodology for authenticity and geographical origin assessment of agricultural products’.
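A minimal sketch of the chemometric step described above (PCA on the δ13C, δ15N and δ34S ratios) could look like the following in Python with scikit-learn; the data file and column names are assumptions:

```python
# Hypothetical sketch of the exploratory analysis described above: PCA on the light
# element isotope ratios; the data file and column names are assumed.
import pandas as pd
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA

df = pd.read_csv("rice_isotopes.csv")          # assumed columns: d13C, d15N, d34S, origin
X = StandardScaler().fit_transform(df[["d13C", "d15N", "d34S"]])

pca = PCA(n_components=2)
scores = pca.fit_transform(X)
print("explained variance ratio:", pca.explained_variance_ratio_)

# The scores can then be plotted and coloured by origin (Greece, Korea, China, Philippines)
# to inspect whether d34S separates the provenances, as reported in the abstract.
```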

Keywords: geographical origin, authenticity, rice, isotope ratio mass spectrometry

Procedia PDF Downloads 89
110 Regional Metamorphism of the Loki Crystalline Massif Allochthonous Complex of the Caucasus

Authors: David Shengelia, Giorgi Chichinadze, Tamara Tsutsunava, Giorgi Beridze, Irakli Javakhishvili

Abstract:

The Loki pre-Alpine crystalline massif crops out within the Caucasus region. The massif basement is represented by the Upper Devonian gneissose quartz-diorites, the Lower-Middle Paleozoic metamorphic allochthonous complex, and different magmatites. Earlier, the metamorphic complex was considered an indivisible set represented by a series of metamorphites of different temperature. The degree of metamorphism of separate parts of the complex is due to different formation conditions. This fact, according to the authors of the abstract, is explained by the allochthonous-flaky structure of the complex. It was stated that the complex thrust over the gneissose quartz diorites before the intrusion of the Sudetic granites. During detailed mapping, it turned out that the metamorphism issues needed to be reviewed and additional research carried out. Investigations were accomplished using the following methodologies: identification of key sections, sampling of rocks, microscopic description of the material, analytical determination of elements in the rocks, microprobe analysis of minerals, and new interpretation of the obtained data. According to the authors’ recent data, four tectonic plates have been mapped within the massif: the Lower Gorastskali, Sapharlo-Lok-Jandari, and Moshevani overthrust sheets and a “mélange” overthrust sheet. They differ from each other in composition, degree of metamorphism and internal structure. It is confirmed that the initial rocks of the tectonic plates formed in different geodynamic conditions and, during overthrusting due to tectonic compression, formed a thick tectonic sheet. Based on detailed laboratory investigations, additional mineral assemblages were established, temperature limits were specified, and a renewed trend of metamorphism facies and subfacies was elaborated. The results are the following: 1. The Lower Gorastskali overthrust sheet is a fragment of an ophiolitic association corresponding to the Paleotethys oceanic crust. The main rock-forming minerals are carbonate, chlorite, spinel, epidote, clinoptilolite, plagioclase, hornblende, actinolite, albite, serpentine, tremolite, talc, garnet, and prehnite. Regional metamorphism of the rocks corresponds to the lowest stage of the greenschist facies. 2. The Sapharlo-Lok-Jandari overthrust sheet metapelites are represented by chloritoid, chlorite, phengite, muscovite, biotite, garnet, ankerite, carbonate, and quartz. Metabasites containing actinolite, chlorite, plagioclase, calcite, epidote, albite, actinolitic hornblende and hornblende are also present. The degree of metamorphism corresponds to the greenschist high-temperature chlorite, biotite, and low-temperature garnet subfacies. Later, the rocks underwent the contact influence of Late Variscan granites. 3. The Moshevani overthrust sheet is represented mainly by metapelites and rarely by metabasites. The main rock-forming minerals of the metapelites are muscovite, biotite, chlorite, quartz, andalusite, plagioclase, garnet and cordierite, and of the metabasites plagioclase, green and blue-green hornblende, chlorite, epidote, actinolite, albite, and carbonate. The metamorphism level corresponds to the staurolite-andalusite subfacies of the staurolite facies and partially to the facies of biotite muscovite gneisses and the hornfels facies as well. 4. The “mélange” overthrust sheet is built of rock fragments and blocks of different sizes from the Moshevani and Lower Gorastskali overthrust sheets.
The degree of regional metamorphism of the first and second overthrust sheets of the Loki massif corresponds to the chlorite, biotite, and low-temperature garnet subfacies, while that of the third overthrust sheet corresponds to the staurolite-andalusite subfacies of the staurolite facies and partially to the facies of biotite muscovite gneisses and the hornfels facies.

Keywords: regional metamorphism, crystalline massif, mineral assemblages, the Caucasus

Procedia PDF Downloads 166
109 Examining Smallholder Farmers’ Perceptions of Climate Change and Barriers to Strategic Adaptation in Todee District, Liberia

Authors: Joe Dorbor Wuokolo

Abstract:

Thousands of smallholder farmers in Todee District, Montserrado County, are currently vulnerable to the negative impacts of climate change. The district, which is the agricultural hot spot for the county, is faced with unfavorable changes in daily temperature due to climate change. Farmers in the district have observed a dramatic change in the ratio of rainfall to sunshine, which has had a chilling effect on their crop yields. However, there is a lack of documentation regarding how farmers perceive and respond to these changes and challenges. A study was conducted in the region to examine the perceptions of smallholder farmers regarding the negative impact of climate change, the adaptation strategies practiced, and the barriers that hinder the process of advancing adaptation strategies. A purposive sample of 41 respondents from five towns was selected, including five town chiefs, five youth leaders, five women leaders, and sixteen community members. Women and youth leaders were specifically chosen to provide gender balance and enhance the quality of the investigation. Additionally, to validate the barriers farmers face in adapting to climate change, this study interviewed eight experts from local and international organizations and government ministries and agencies involved in climate change and agricultural programs on what they perceived as the major barriers at both local and national levels that impede farmers’ adaptation to climate change impacts. SPSS was used to code the data, and descriptive statistics were used to analyze the data. The weighted average index (WAI) was used to rank adaptation strategies and the perceived importance of adaptation practices among farmers, on a scale from 0 to 3, where 0 indicates the least important technique and 3 indicates the most effective technique. In addition, the Problem Confrontation Index (PCI) was used to rank the barriers that prevented farmers from implementing adaptation measures. According to the findings, approximately 60% of all respondents considered the use of irrigation systems to be the most effective adaptation strategy, with drought-resistant varieties making up 30% of the total. Additionally, 80% of respondents placed a high value on drought-resistant varieties, while 63% placed it on irrigation practices. In addition, 78% of farmers indicated that the unpredictability of the weather is the most significant barrier to their adaptation strategies, followed by the high cost of farm inputs and lack of access to financing facilities. 80% of respondents believe that the long-term changes in precipitation (rainfall) and temperature (hotness) are accelerating. This suggests that decision-makers should adopt policies and increase the capacity of smallholder farmers to adapt to the negative impact of climate change in order to ensure sustainable food production.
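The Weighted Average Index used above can be illustrated with a small sketch; the 0–3 scale follows the abstract, while the response counts are invented for illustration:

```python
# Minimal sketch of the Weighted Average Index (WAI) used to rank adaptation strategies
# on the 0-3 scale described above; the response counts below are assumed, not the study's data.
def weighted_average_index(counts_by_score: dict[int, int]) -> float:
    """counts_by_score maps a score (0..3) to the number of respondents assigning it."""
    total = sum(counts_by_score.values())
    return sum(score * n for score, n in counts_by_score.items()) / total

# Assumed example: scores given to 'irrigation systems' by 41 respondents
irrigation = {0: 2, 1: 5, 2: 9, 3: 25}
print(f"WAI(irrigation) = {weighted_average_index(irrigation):.2f}")  # higher WAI = higher rank
```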

Keywords: adaptation strategies, climate change, farmers’ perception, smallholder farmers

Procedia PDF Downloads 82
108 Calibration of Residential Buildings Energy Simulations Using Real Data from an Extensive in situ Sensor Network – A Study of Energy Performance Gap

Authors: Mathieu Bourdeau, Philippe Basset, Julien Waeytens, Elyes Nefzaoui

Abstract:

As residential buildings account for a third of the overall energy consumption and greenhouse gas emissions in Europe, building energy modeling is an essential tool to reach energy efficiency goals. In the energy modeling process, calibration is a mandatory step to obtain accurate and reliable energy simulations. Nevertheless, the comparison between simulation results and the actual building energy behavior often highlights a significant performance gap. The literature discusses different origins of energy performance gaps, from building design to building operation. Building operation description in energy models, especially energy usages and users’ behavior, plays an important role in the reliability of simulations but is also the most accessible target for post-occupancy energy management and optimization. Therefore, the present study aims to discuss results on the calibration of residential building energy models using real operation data. Data are collected through a sensor network of more than 180 sensors and advanced energy meters deployed in three collective residential buildings undergoing major retrofit actions. The sensor network is implemented at building scale and in an eight-apartment sample. Data are collected for over one and a half years and cover building energy behavior (thermal and electricity), indoor environment, inhabitants’ comfort, occupancy, occupants' behavior and energy uses, and local weather. Building energy simulations are performed using a physics-based building energy modeling software (Pleiades software), where the buildings’ features are implemented according to the buildings’ thermal regulation code compliance study and the retrofit project technical files. Sensitivity analyses are performed to highlight the most energy-driving building features regarding each end-use. These features are then compared with the collected post-occupancy data. Energy-driving features are progressively replaced with field data for a step-by-step calibration of the energy model. Results of this study provide an analysis of the energy performance gap on an existing residential case study under deep retrofit actions. It highlights the impact of the different building features on the energy behavior and the performance gap in this context, such as temperature setpoints, indoor occupancy, the building envelope properties, but also domestic hot water usage or heat gains from electric appliances. The benefits of inputting field data from an extensive instrumentation campaign instead of standardized scenarios are also described. Finally, the exhaustive instrumentation solution provides useful insights on the needs, advantages, and shortcomings of the implemented sensor network for its replicability on a larger scale and for different use cases.
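The step-by-step calibration logic (progressively replacing standardized inputs with field data and tracking the remaining gap) can be sketched with a toy surrogate model; the parameter names, values and the surrogate relation below are assumptions, not the study's Pleiades model:

```python
# Toy sketch of one-at-a-time replacement of standardized inputs by field data, as in the
# step-by-step calibration described above. The surrogate model and all values are assumed.
standard = {"setpoint_C": 21.0, "occupancy": 0.70, "dhw_l_per_day": 100.0, "appliance_gain_W": 300.0}
measured = {"setpoint_C": 22.5, "occupancy": 0.55, "dhw_l_per_day": 140.0, "appliance_gain_W": 220.0}
observed_kwh = 13500.0            # assumed metered annual consumption

def surrogate_energy_kwh(p: dict) -> float:
    """Crude linear stand-in for the building energy model (illustrative only)."""
    return (9000 + 400 * (p["setpoint_C"] - 19) + 3000 * p["occupancy"]
            + 12 * p["dhw_l_per_day"] - 2 * p["appliance_gain_W"])

params = dict(standard)
print(f"initial gap: {surrogate_energy_kwh(params) - observed_kwh:+.0f} kWh")
for name in standard:             # replace one energy-driving feature at a time with field data
    params[name] = measured[name]
    gap = surrogate_energy_kwh(params) - observed_kwh
    print(f"after using measured {name}: gap = {gap:+.0f} kWh")
```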

Keywords: calibration, building energy modeling, performance gap, sensor network

Procedia PDF Downloads 159
107 Personalized Climate Change Advertising: The Role of Augmented Reality (A.R.) Technology in Encouraging Users for Climate Change Action

Authors: Mokhlisur Rahman

Abstract:

The growing consensus among scientists and world leaders indicates that immediate action should be taken regarding the climate change phenomenon. However, climate change is no longer only a global issue but also a personal one. Thus, individual participation is necessary to address such a significant issue. Studies show that individuals who perceive climate change as a personal issue are more likely to act on it. This abstract presents augmented reality (A.R.) technology in video advertising on the social media platform Facebook. The idea involves creating a video advertisement that enables users to interact with the video by navigating its features and experiencing the result in a unique and engaging way. This advertisement uses A.R. to bring about changes, such as people making changes in real-life scenarios through simple clicks on the video and hearing an instant rewarding fact about their choices. The video shows three options: room, lawn, and driveway. Users select one option and engage in the corresponding interaction while holding the camera toward their personal spaces. Suppose users select the first option, room, and hold their camera toward spots such as the windows, balcony, corners, and even walls. In that case, the A.R. offers users different plants appropriate for those unoccupied spaces in the room. Users can change the plant options and see which space in their house deserves a plant that makes it more natural. When a user adds a natural element to the video, the video content explains a piece of beneficial information about how the user contributes to making the world more livable and why it is necessary. With the help of A.R., if users select the second option, lawn, and hold their camera toward their lawn, the options are various small trees for their lawn to make it more environmentally friendly and decorative. The video plays a beneficial explanation here too. Suppose users select the third option, driveway, and hold their camera toward their driveway. In that case, the A.R. video option offers unique recycle bin designs using A.I. measurement of spaces. The video plays audio information on the anthropogenic contribution to greenhouse gas emissions. IoT embeds a tracking code in the video ad on Facebook, which stores the exact number of views in the cloud for data analysis. An online survey at the end collects short qualitative answers. This study helps in understanding the number of users involved and willing to change their behavior; it makes advertising in social media personalized. Considering the current state of climate change, the urgency for action is increasing. This ad increases the chance to make direct connections with individuals and gives them a sense of personal responsibility to act on climate change.

Keywords: motivations, climate, iot, personalized-advertising, action

Procedia PDF Downloads 73
106 Harnessing Clinical Trial Capacity to Mitigate Zoonotic Diseases: The Role of Expert Scientists in Ethiopia

Authors: Senait Belay Adugna, Mirutse Giday, Tsegahun Manyazewal

Abstract:

Background: The emergence and resurgence of zoonotic diseases have continued to be a major threat to global health and the economy. Developing countries are particularly vulnerable due to agricultural expansion and the domestication of animals by humans. Scientifically sound clinical trials are important to find better ways to prevent, diagnose, and treat zoonotic diseases, while there is a lack of evidence to inform clinical trial capacity and practice in countries highly affected by these diseases. This study aimed to investigate researchers’ perceptions and experiences in conducting clinical trials on zoonotic diseases in Ethiopia. Methods: This study employed a descriptive, qualitative study design. It included major academic and research institutions in Ethiopia that had active engagements in veterinary and public health research: the National Veterinary Institute, the National Animal Health Diagnostic and Investigation Center, the College of Veterinary Medicine at Addis Ababa University, the Ethiopian Public Health Institute, the Armauer Hansen Research Institute, and the College of Health Sciences at Addis Ababa University. In-depth interviews were conducted with 14 senior research investigators in these institutions who have a proven record of leading research activities or research units. Data were collected from October 2019 to April 2020. Data analysis was undertaken using Open Code 4.03 for qualitative data analysis. Results: Five major themes, with 18 sub-themes, emerged from the in-depth interviews. These were: challenges in the prevention, control, and treatment of zoonotic diseases; the One Health approach to mitigate zoonotic diseases; personal and institutional experiences in conducting clinical trials on zoonotic diseases; barriers to conducting clinical trials on zoonotic diseases; and strategies that promote conducting clinical trials on zoonotic diseases. Conducting clinical trials on zoonotic diseases in Ethiopia is hampered by a lack of clearly articulated ethics and regulatory frameworks, trial experts, financial resources, and good governance. Conclusions: In Ethiopia, conducting clinical trials on zoonotic diseases deserves due attention. Strengthening institutional and human resources capacity is a precondition to harnessing effective implementation of clinical trials on zoonotic diseases in the country. In Ethiopia, where skilled human resources are scarce, the One Health approach has the potential to form multidisciplinary teams to systematically improve clinical trial capacity and outcomes in the country.

Keywords: Ethiopia, clinical trials, zoonoses, disease

Procedia PDF Downloads 93
105 Sustainable Antimicrobial Biopolymeric Food & Biomedical Film Engineering Using Bioactive AMP-Ag+ Formulations

Authors: Eduardo Lanzagorta Garcia, Chaitra Venkatesh, Romina Pezzoli, Laura Gabriela Rodriguez Barroso, Declan Devine, Margaret E. Brennan Fournet

Abstract:

New antimicrobial interventions are urgently required to combat rising global health and medical infection challenges. Here, an innovative antimicrobial technology is presented, providing price-competitive alternatives to antibiotics and readily integratable with current technological systems. Two cutting-edge antimicrobial materials, antimicrobial peptides (AMPs) and uncompromised sustained Ag+ action from triangular silver nanoplate (TSNP) reservoirs, are merged for versatile, effective antimicrobial action where current approaches fail. Antimicrobial peptides (AMPs) exist widely in nature and have recently been demonstrated to show a broad spectrum of activity against bacteria, viruses, and fungi. TSNPs are highly discrete, homogeneous and readily functionalisable Ag+ nanoreservoirs that have proven amenability for operation within a wide range of bio-based settings. In a design for advanced antimicrobial sustainable plastics, antimicrobial TSNPs are formulated for processing within biodegradable biopolymers. The histone H5 AMP was selected for its reported strong antimicrobial action and functionalized with the TSNP (AMP-TSNP) in a similar fashion to previously reported TSNP biofunctionalisation methods. A synergy between the propensity of biopolymers for degradation and Ag+ release, combined with AMP activity, provides a novel mechanism for the sustained antimicrobial action of biopolymeric thin films. Nanoplates are transferred from the aqueous phase to an organic solvent in order to facilitate integration within hydrophobic polymers. Extrusion is used in combination with calendering rolls to create thin polymeric films where the nanoplates are embedded onto the surface. The resultant antibacterial functional films are suitable to be adapted for food packaging and biomedical applications. TSNPs were synthesized by adapting a previously reported seed-mediated approach. TSNP synthesis was scaled up for litre-scale batch production and subsequently concentrated to 43 ppm using thermally controlled H2O removal. The phase transfer to chloroform was accomplished by functionalizing the TSNP with thiol-terminated polyethylene glycol and using centrifugal force. Polycaprolactone (PCL) and polylactic acid (PLA) were individually processed through extrusion, and TSNP and AMP-TSNP solutions were sprayed onto the polymer immediately after exiting the die. Calendering rolls were used to disperse and incorporate TSNP and AMP-TSNP onto the surface of the extruded films. Observation of the characteristic blue colour confirms the integrity of the TSNP within the films. Antimicrobial tests were performed by incubating Gram-positive and Gram-negative strains with treated and non-treated films to evaluate whether bacterial growth was reduced due to the presence of the TSNP. The resulting films successfully incorporated TSNP and AMP-TSNP. Reduced bacterial growth was observed for both Gram-positive and Gram-negative strains for both TSNP and AMP-TSNP films compared with untreated films, indicating antimicrobial action. The largest growth reduction was observed for AMP-TSNP treated films, demonstrating the additional antimicrobial activity due to the presence of the AMPs. The potential of this technology to impede bacterial activity on food industry and medical surfaces will forge new confidence in the battle against antibiotic-resistant bacteria, serving to greatly inhibit infections and facilitate patient recovery.

Keywords: antimicrobial, biodegradable, peptide, polymer, nanoparticle

Procedia PDF Downloads 116
104 White-Rot Fungi Phellinus as a Source of Antioxidant and Antitumor Agents

Authors: Yogesh Dalvi, Ruby Varghese, Nibu Varghese, C. K. Krishnan Nair

Abstract:

Introduction: The genus Phellinus, locally known as Phansomba, is a well-known traditional folk medicine. Especially in the Western Ghats of India, many tribes use several species of Phellinus for various ailments related to the teeth, throat, tongue, stomach and even wound healing. It is one of the few mushrooms which play a pivotal role in Ayurvedic Dravyaguna. Aim: The present study investigates the phytochemical profile and the antioxidant and antitumor (in vitro and in vivo) potential of Phellinus robinae from South India, Kerala. Material and Methods: The present study explores the following: 1. Phellinus samples were collected from Ranni, Pathanamthitta district of Kerala state, India, from Artocarpus heterophyllus Lam., and the species was identified using the rDNA region. 2. The fruiting body was shade dried, powdered and extracted with 50% alcohol using a water bath at 60°C; the extract was further concentrated using a rotary evaporator and lyophilized at -40°C. 3. Secondary metabolites were analyzed using various phytochemical screening assays (Hager’s test, Wagner’s test, sodium hydroxide test, lead acetate test, ferric chloride test, Folin-Ciocalteu test, foaming test, Benedict’s test, Fehling’s test and Lowry’s test). 4. Antioxidant and free radical scavenging activity were analyzed by DPPH, FRAP and iron chelating assays. 5. The antitumor potential of the water-alcohol extract of Phellinus (PAWE) was evaluated in vitro by the trypan blue dye exclusion method in the DLA cell line and in vivo in a murine model. Result and Discussion: Preliminary phytochemical screening by various biochemical tests revealed the presence of a variety of active secondary molecules like alkaloids, flavonoids, saponins, carbohydrates, proteins and phenols. In the DPPH and FRAP assays, PAWE showed significantly higher antioxidant activity compared to the ascorbic acid standard, while in the iron chelating assay, PAWE exhibited antioxidant activity similar to that of the butylated hydroxytoluene (BHT) standard. Further, in the in vitro study, PAWE showed significant inhibition of DLA cell proliferation in a dose-dependent manner and showed no toxicity to mouse splenocytes when compared to the standard chemotherapy drug doxorubicin. In the in vivo study, oral administration of PAWE showed dose-dependent tumor regression in mice and also raised immunogenicity by restoring the levels of antioxidant enzymes in liver and kidney tissue. In both the in vitro and in vivo gene expression studies, PAWE up-regulates pro-apoptotic genes (Bax, caspases 3, 8 and 9) and down-regulates anti-apoptotic genes (Bcl2). PAWE also down-regulates the inflammatory gene Cox-2 and the angiogenic gene VEGF. Conclusion: Preliminary phytochemical screening revealed that PAWE contains various secondary metabolites which contribute to its antioxidant and free radical scavenging properties, as evaluated by DPPH, FRAP and iron chelating assays. PAWE exhibits anti-proliferative activity by the induction of apoptosis through a signaling cascade involving the death receptor-mediated extrinsic pathway (caspase 8 and TNF-α), the mitochondria-mediated intrinsic pathway (caspase 9) and the caspase pathways (caspases 3, 8 and 9), and also by regressing the angiogenic factor VEGF, without any inflammation or adverse side effects. Hence, PAWE serves as a potential antioxidant and antitumor agent.

Keywords: antioxidant, antitumor, Dalton lymphoma ascites (DLA), fungi, Phellinus robinae

Procedia PDF Downloads 304
103 Numerical Simulation of the Heat Transfer Process in a Double Pipe Heat Exchanger

Authors: J. I. Corcoles, J. D. Moya-Rico, A. Molina, J. F. Belmonte, J. A. Almendros-Ibanez

Abstract:

One of the most common heat exchanger technologies in engineering processes is the double-pipe heat exchanger (DPHx), used mainly in the food industry. To improve heat transfer performance, several passive geometrical devices can be used, such as wall corrugation of the tubes, which increases the wetted perimeter while maintaining a constant cross-sectional area, consequently increasing the convective surface area. It contributes to enhanced heat transfer in forced convection by promoting secondary recirculating flows. One of the most widely used tools to analyse heat exchanger efficiency is computational fluid dynamics (CFD), a complementary activity to experimental studies as well as a previous step in the design of heat exchangers. In this study, the behaviour of a double pipe heat exchanger with two different inner tubes, a smooth and a spirally corrugated tube, has been analysed. Hence, experimental analysis and steady 3-D numerical simulations using the commercial code ANSYS Workbench v. 17.0 are carried out to analyse the influence of geometrical parameters for spirally corrugated tubes at turbulent flow. To validate the numerical results, an experimental setup has been used. To heat up or cool down the cold fluid as it passes through the heat exchanger, the installation includes heating and cooling loops served by an electric boiler with a heating capacity of 72 kW and a chiller with a cooling capacity of 48 kW. Two tests have been carried out for the smooth tube and for the corrugated one. In all the tests, the hot fluid has a constant flow rate of 50 l/min and an inlet temperature of 59.5°C. For the cold fluid, the flow rate is 25 l/min (Test 1) and 30 l/min (Test 2), with an inlet temperature of 22.1°C. The heat exchanger is made of stainless steel, with an external diameter of 35 mm and a wall thickness of 1.5 mm. Both inner tubes have an external diameter of 24 mm and 1 mm thickness of stainless steel, with a length of 2.8 m. The corrugated tube has a corrugation height (H) of 1.1 mm and a helical pitch (P) of 25 mm. It is characterized using three non-dimensional parameters: the ratio of the corrugation height to the diameter (H/D), the dimensionless helical pitch (P/D) and the severity index (SI = H²/(P·D)). The results showed good agreement between the numerical and the experimental results. The smallest differences were found for the fluid temperatures. In all the analysed tests and for both analysed tubes, the temperature obtained numerically was slightly higher than the experimental result, with differences ranging between 0.1% and 0.7%. Regarding the pressure drop, the maximum differences between the values obtained numerically and the experimental values were close to 16%. Based on the experimental and the numerical results, for the corrugated tube, it can be highlighted that the temperature difference between the inlet and the outlet of the cold fluid is 42% higher than for the smooth tube.
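As a quick check of the non-dimensional parameters defined above, using the stated corrugated-tube geometry (H = 1.1 mm, P = 25 mm, D = 24 mm):

```python
# Worked check of the corrugation parameters quoted above for the spirally corrugated tube.
H = 1.1e-3   # corrugation height (m)
P = 25e-3    # helical pitch (m)
D = 24e-3    # inner tube external diameter (m)

H_over_D = H / D
P_over_D = P / D
severity_index = H**2 / (P * D)

print(f"H/D = {H_over_D:.4f}, P/D = {P_over_D:.3f}, SI = {severity_index:.5f}")
# -> H/D ≈ 0.046, P/D ≈ 1.04, SI ≈ 0.00202
```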

Keywords: corrugated tube, heat exchanger, heat transfer, numerical simulation

Procedia PDF Downloads 147
102 Modelling of Air-Cooled Adiabatic Membrane-Based Absorber for Absorption Chillers Using Low Temperature Solar Heat

Authors: M. Venegas, M. De Vega, N. García-Hernando

Abstract:

Absorption cooling chillers have received growing attention over the past few decades as they allow the use of low-grade heat to produce the cooling effect. The combination of this technology with solar thermal energy in the summer period can reduce the electricity consumption peak due to air-conditioning. One of the main components, the absorber, is designed for simultaneous heat and mass transfer. Usually, shell-and-tube heat exchangers are used, which are large and heavy. Cooling water from a cooling tower is conventionally used to extract the heat released during the absorption and condensation processes. These are clear inconveniences for the generalization of the use of absorption technology, limiting its benefits in the contribution to the reduction in CO2 emissions, particularly for the H2O-LiBr solution, which can work with low-temperature heat sources such as those provided by solar panels. In the present work, a promising new technology is under study, consisting of the use of membrane contactors in adiabatic microchannel mass exchangers. The configuration proposed here consists of one or several modules (depending on the cooling capacity of the chiller) that contain two vapour channels, separated from the solution by adjacent microporous membranes. The solution is confined in rectangular microchannels. A plastic or synthetic wall separates the solution channels from each other. The solution entering the absorber is previously subcooled using ambient air. In this way, the need for a cooling tower is avoided. A model of the proposed configuration is developed based on mass and energy balances, and correlations were selected to predict the heat and mass transfer coefficients. The concentrations and temperatures along the channels cannot be explicitly determined from the set of equations obtained. For this reason, the equations were implemented in a computer code using the Engineering Equation Solver software, EES™. With the aim of minimizing the absorber volume to reduce the size of absorption cooling chillers, the ratio between the cooling power of the chiller and the absorber volume (R) is calculated. Its variation is shown along the solution channels, allowing its optimization for selected operating conditions. For the case considered, the solution channel length is recommended to be lower than 3 cm. The maximum values of R obtained in this work are higher than those found in optimized horizontal falling film absorbers using the same solution. The results obtained also show the variation of R and the chiller efficiency (COP) for different ambient temperatures and desorption temperatures typically obtained using flat plate solar collectors. The proposed configuration of an adiabatic membrane-based absorber using ambient air to subcool the solution is a good technology to reduce the size of absorption chillers, allowing the use of low-temperature solar heat and avoiding the need for cooling towers.
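A toy illustration of the trade-off behind the ratio R (cooling power over absorber volume) as a function of solution channel length is sketched below; the channel geometry, absorption profile and heat of absorption are invented for the sketch and are not the paper's EES model:

```python
# Toy sketch: R = cooling power / absorber volume versus channel length, assuming the
# locally absorbed vapour flux decays along the channel. All parameter values are invented.
import numpy as np

width, height = 5e-3, 0.15e-3          # assumed solution channel cross-section (m)
j0, decay = 5e-3, 40.0                 # assumed inlet vapour flux (kg/m2/s) and decay rate (1/m)
h_abs = 2.5e6                          # assumed heat released per kg of absorbed vapour (J/kg)

L = np.linspace(0.005, 0.10, 200)                          # candidate channel lengths (m)
absorbed = j0 * width * (1 - np.exp(-decay * L)) / decay   # kg/s absorbed over length L
Q = h_abs * absorbed                                       # W, proportional to cooling power
V = width * height * L                                     # m3, solution volume in the channel
R = Q / V                                                  # W/m3

for length_cm in (1, 3, 10):
    i = int(np.argmin(np.abs(L - length_cm / 100)))
    print(f"L = {length_cm} cm: R = {R[i]:.2e} W/m3")      # R falls as the channel lengthens
```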

Keywords: adiabatic absorption, air-cooled, membrane, solar thermal energy

Procedia PDF Downloads 285
101 Classification of Foliar Nitrogen in Common Bean (Phaseolus Vulgaris L.) Using Deep Learning Models and Images

Authors: Marcos Silva Tavares, Jamile Raquel Regazzo, Edson José de Souza Sardinha, Murilo Mesquita Baesso

Abstract:

Common beans are a widely cultivated and consumed legume globally, serving as a staple food for humans, especially in developing countries, due to their nutritional characteristics. Nitrogen (N) is the most limiting nutrient for productivity, and foliar analysis is crucial to ensure balanced nitrogen fertilization. Excessive N applications can cause, either in isolation or cumulatively, soil and water contamination and plant toxicity, and can increase susceptibility to diseases and pests. However, the quantification of N using conventional methods is time-consuming and costly, demanding new technologies to optimize the adequate supply of N to plants. Thus, it becomes necessary to establish constant monitoring of the foliar content of this macronutrient in plants, mainly at the V4 stage, aiming at precision management of nitrogen fertilization. In this work, the objective was to evaluate the performance of a deep learning model, ResNet-50, in the classification of foliar nitrogen in common beans using RGB images. The BRS Estilo cultivar was sown in a greenhouse in a completely randomized design with four nitrogen doses (T1 = 0 kg N ha-1, T2 = 25 kg N ha-1, T3 = 75 kg N ha-1, and T4 = 100 kg N ha-1) and 12 replications. Pots with 5 L capacity were used with a substrate composed of 43% soil (Neossolo Quartzarênico), 28.5% crushed sugarcane bagasse, and 28.5% cured bovine manure. The plants were supplied with 5 mm of water per day. The application of urea (45% N) and the acquisition of images occurred 14 and 32 days after sowing, respectively. A code developed in Matlab© R2022b was used to cut the original images into smaller blocks, generating an image bank composed of four folders representing the four classes, labeled T1, T2, T3, and T4, each containing 500 images of 224x224 pixels obtained from plants cultivated under different N doses. The Matlab© R2022b software was used for the implementation and performance analysis of the model. The evaluation of efficiency was done using a set of metrics, including accuracy (AC), F1-score (F1), specificity (SP), area under the curve (AUC), and precision (P). The ResNet-50 showed high performance in the classification of foliar N levels in common beans, with an AC value of 85.6%. The F1 for classes T1, T2, T3, and T4 was 76, 72, 74, and 77%, respectively. This study revealed that the use of RGB images combined with deep learning can be a promising alternative to slow laboratory analyses, capable of optimizing the estimation of foliar N. This can allow rapid intervention by the producer to achieve higher productivity and less fertilizer waste. Future approaches are encouraged to develop mobile devices capable of handling images using deep learning for the classification of the nutritional status of plants in situ.
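The authors implemented the model in Matlab R2022b; an analogous PyTorch sketch of a four-class ResNet-50 set-up (classes T1–T4, 224x224 inputs) is shown below, where the pretrained backbone, loss and optimizer settings are assumptions rather than the study's configuration:

```python
# Analogous PyTorch sketch of a 4-class ResNet-50 head for the N-dose classes T1..T4.
# The pretrained weights, loss and optimizer are assumptions; the study used Matlab R2022b.
import torch
import torch.nn as nn
from torchvision import models, transforms

model = models.resnet50(weights=models.ResNet50_Weights.DEFAULT)  # ImageNet-pretrained backbone
model.fc = nn.Linear(model.fc.in_features, 4)                     # replace 1000-class head with 4 classes

preprocess = transforms.Compose([
    transforms.Resize((224, 224)),                # abstract reports 224x224 image patches
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
```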

Keywords: convolutional neural network, residual network 50, nutritional status, artificial intelligence

Procedia PDF Downloads 19
100 Jurisdiction Conflicts in Contracts of International Maritime Transport: The Application of the Forum Selection Clause in Brazilian Courts

Authors: Renan Caseiro De Almeida, Mateus Mello Garrute

Abstract:

The world is becoming ever more globalised. This trend promotes an increase in the number of transnational commercial transactions. The main mode for the carriage of goods is by sea, and many countries have economies dependent on maritime freight, either because they engage heavily in this activity or because they follow the tendency to use maritime logistics widely. Among these countries, Brazil is included. It has sixteen ports with good capacities, which receive most of the country's international trade by sea. It is estimated that 85 per cent of the total inflow of goods into Brazil is by the maritime mode, leaving a mere 15 per cent for the other modes. This has made it necessary to develop maritime law on an international and national basis, to create a standard to be applied with the intention of harmonizing the transnational carriage of goods by sea. Maritime contracts are very specific and have interesting peculiarities, but within their scope, little research has been done on what causes the main divergences when it comes to international contracts: the jurisdiction conflict. Like any other international contract, it is common for the parties to set a forum selection clause to choose the forum which will be able to judge the litigation that could arise from a maritime transport contract and, consequently, also which law should be applied to the case. However, the choice of forum in Brazil has always been somewhat polemical, and not only in the maritime law sphere, for national tribunals sometimes overlook the parties’ choice and claim the competence for themselves. In this sense, it is interesting to mention that the Mexico Convention of 1994 on the law applicable to international contracts did not gain strength in Brazil, nor did it even reach the Congress to be considered for ratification. Furthermore, it is also noteworthy that Brazil has a new Civil Procedure Code, which came into force in 2016, bringing new legal provisions specifically about forum selection. This represented a milestone in the national legal system in this matter. Therefore, this paper intends to give an insight through Brazilian jurisprudence, making an analysis of how this issue has been treated in litigation about maritime contracts in the national tribunals, as well as the solutions found by the Brazilian legal system for jurisdiction conflicts in those cases. To achieve the expected results, the hypothetical-deductive method will be used in combination with research on doctrine and legislation. Also, jurisprudential research and case law study will have a special role, since the main point of this paper is to verify and study the position of the courts in Brazil on a specific matter. As a civil law country, Brazilian judges and tribunals are very attached to the rules displayed in codes. However, the jurisprudential understanding has been changing over the years, and with the advent of the new rules about applicable law and the forum selection clause, it is noticeable that new winds are blowing.

Keywords: applicable law, forum selection clause, international business, international maritime contracts, litigation in courts

Procedia PDF Downloads 274
99 Financial Modeling for Net Present Benefit Analysis of Electric Bus and Diesel Bus and Applications to NYC, LA, and Chicago

Authors: Jollen Dai, Truman You, Xinyun Du, Katrina Liu

Abstract:

Transportation is one of the leading sources of greenhouse gas (GHG) emissions. Thus, to meet the Paris Agreement of 2015, all countries must adopt a different and more sustainable transportation system. From bikes to Maglev, the world is slowly shifting to sustainable transportation. To develop a useful public transit system, a sustainable web of buses must be implemented. As of now, only a handful of cities have adopted a detailed plan to implement a full fleet of e-buses by the 2030s, with Shenzhen in the lead. Every change requires a detailed plan and a focused analysis of the impacts of the change. In this report, the economic and financial implications have been taken into consideration to develop a well-rounded 10-year plan for New York City. We also apply the same financial model to two other cities, LA and Chicago. We picked NYC, Chicago, and LA to conduct the comparative NPB analysis since they are all big metropolitan cities and have complex transportation systems. All three cities have started action plans to achieve a full fleet of e-buses in the coming decades. In addition, their energy carbon footprints and their energy prices are very different, which are the key factors in the benefits of electric buses. Using TCO (Total Cost of Ownership) financial analysis, we developed a model to calculate the NPB (Net Present Benefit) and compare EBS (electric buses) to DBS (diesel buses). We have considered all essential aspects in our model: initial investment, including the cost of a bus, charger, and installation; government funds (federal, state, local); labor cost; energy (electricity or diesel) cost; maintenance cost; insurance cost; health and environmental benefits; and V2G (vehicle to grid) benefits. We see about $1,400,000 in benefits over a 12-year lifetime of an EBS compared to a DBS, provided the government funds offset 50% of the EBS purchase cost. With the government subsidy, an EBS starts to make positive cash flow in the 5th year and can pay back its investment in 5 years. Note that in our model we consider environmental and health benefits, and every year $50,000 is counted as health benefits per bus. Besides health benefits, the most significant benefits come from the energy cost savings and maintenance savings, which are about $600,000 and $200,000 over the 12-year life cycle. Using linear regression, given certain budget limitations, we then designed an optimal three-phase process to replace all NYC diesel buses with electric buses in 10 years, i.e., by 2033. The linear regression process minimizes the total cost over the years while yielding the lowest environmental cost. The overall benefit of replacing all DBS with EBS for NYC is over $2.1 billion by the year 2033. For LA and Chicago, the benefits of electrifying the current bus fleets are $1.04 billion and $634 million by 2033, respectively. All NPB analyses and the algorithm to optimize the electrification phase process are implemented in Python code and can be shared.
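A minimal sketch of the per-bus NPB calculation described above is given below; the cost figures are illustrative assumptions loosely derived from the abstract's numbers (50% purchase subsidy, roughly $600,000 energy and $200,000 maintenance savings over 12 years, $50,000 per year in health benefits), not the authors' actual model inputs:

```python
# Hypothetical per-bus NPB sketch loosely based on the figures quoted in the abstract.
# All inputs are illustrative assumptions, not the authors' actual parameters.
def net_present_benefit(
    ebs_price=750_000,                 # assumed e-bus + charger + installation cost (USD)
    dbs_price=500_000,                 # assumed diesel bus cost (USD)
    subsidy_share=0.50,                # abstract: government offsets 50% of EBS purchase cost
    annual_energy_saving=50_000,       # ~$600,000 over 12 years (abstract)
    annual_maintenance_saving=16_667,  # ~$200,000 over 12 years (abstract)
    annual_health_benefit=50_000,      # abstract: $50,000 per year per bus
    discount_rate=0.05,
    lifetime=12,
):
    """Discounted benefit of one electric bus relative to one diesel bus."""
    upfront_extra = ebs_price * (1 - subsidy_share) - dbs_price
    annual_benefit = annual_energy_saving + annual_maintenance_saving + annual_health_benefit
    discounted = sum(annual_benefit / (1 + discount_rate) ** t for t in range(1, lifetime + 1))
    return discounted - upfront_extra

print(f"NPB per bus: ${net_present_benefit():,.0f}")
```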

Keywords: financial modeling, total cost ownership, net present benefits, electric bus, diesel bus, NYC, LA, Chicago

Procedia PDF Downloads 50
98 Applying Miniaturized near Infrared Technology for Commingled and Microplastic Waste Analysis

Authors: Monika Rani, Claudio Marchesi, Stefania Federici, Laura E. Depero

Abstract:

Degradation of the aquatic environment by plastic litter, especially microplastics (MPs), i.e., any water-insoluble solid plastic particle with its longest dimension between 1 µm and 1000 µm (1 mm), is an unfortunate indication of the advancement of the Anthropocene age on Earth. Microplastics formed by natural weathering processes are termed secondary microplastics, while those synthesised industrially are called primary microplastics. Their presence from the highest peaks to the deepest explored points of the oceans, and their resistance to biological and chemical decay, have adversely affected the environment, especially marine life. Even though the presence of MPs in the marine environment is well reported, a legitimate and validated analytical technique to sample, analyse, and quantify MPs is still at the development and testing stage. Among the characterisation techniques, vibrational spectroscopic methods are widely adopted in the field of polymers, and their ongoing miniaturisation is set to revolutionise the plastic recycling industry. In this scenario, the capability and feasibility of miniaturised near-infrared (MicroNIR) spectroscopy combined with chemometric tools for the qualitative and quantitative analysis of urban plastic waste collected from a recycling plant and of microplastic mixtures fragmented in the lab were investigated. Based on the Resin Identification Code, 250 plastic samples were used for macroplastic analysis and to set up a library of polymers. Subsequently, the MicroNIR spectra were analysed through multivariate modelling. Principal Components Analysis (PCA) was used as an unsupervised tool to find trends within the data. After the exploratory PCA, a supervised classification tool was applied in order to distinguish the different plastic classes, and a database containing the NIR spectra of the polymers was built. For the microplastic analysis, the three most abundant polymers in plastic litter, PE, PP, and PS, were mechanically fragmented in the laboratory to micron size. Blends of these three microplastics were prepared following a designed ternary composition plot. After the exploratory PCA, a quantitative Partial Least Squares Regression (PLSR) model allowed the percentage of microplastics in the mixtures to be predicted. With a complete dataset of 63 compositions, the PLS model was calibrated with 42 data points and then used to predict the composition of the 21 unknown mixtures of the test set. The advantage of the consolidated NIR-chemometric approach lies in the quick evaluation of whether a sample is macro- or microplastic, contaminated, and coloured or not, with no sample pre-treatment. The technique can be used with larger sample volumes, even allows on-site evaluation, and thereby satisfies the need for a high-throughput strategy.
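As a hedged illustration of the chemometric workflow described above (exploratory PCA followed by PLSR calibration on 42 mixtures and prediction of 21 test mixtures), the following Python sketch runs the same two steps on synthetic stand-in spectra using scikit-learn; the actual study used measured MicroNIR spectra of PE/PP/PS blends, not these simulated data.

    # Illustrative PCA + PLSR pipeline on synthetic "spectra" standing in for
    # MicroNIR measurements of PE/PP/PS mixtures from a ternary design.
    import numpy as np
    from sklearn.decomposition import PCA
    from sklearn.cross_decomposition import PLSRegression
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(0)
    n_samples, n_wavelengths = 63, 125                         # 63 compositions, as above
    compositions = rng.dirichlet(np.ones(3), size=n_samples)   # PE/PP/PS fractions
    pure_spectra = rng.random((3, n_wavelengths))              # stand-in pure-component spectra
    X = compositions @ pure_spectra + 0.01 * rng.standard_normal((n_samples, n_wavelengths))

    # Unsupervised exploration of trends in the data
    scores = PCA(n_components=2).fit_transform(X)

    # Calibrate PLSR on 42 samples and predict the remaining 21, as in the abstract
    X_cal, X_test, y_cal, y_test = train_test_split(
        X, compositions, train_size=42, random_state=0)
    pls = PLSRegression(n_components=3).fit(X_cal, y_cal)
    rmsep = np.sqrt(np.mean((pls.predict(X_test) - y_test) ** 2))
    print("first two PC scores of sample 0:", scores[0])
    print("RMSEP on the 21 test mixtures:", round(float(rmsep), 4))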

Keywords: chemometrics, microNIR, microplastics, urban plastic waste

Procedia PDF Downloads 165
97 Challenges of Blockchain Applications in the Supply Chain Industry: A Regulatory Perspective

Authors: Pardis Moslemzadeh Tehrani

Abstract:

Due to the emergence of blockchain technology and the benefits of cryptocurrencies, intelligent or smart contracts are gaining traction. Artificial intelligence (AI) is transforming our lives, and it is being embraced by a wide range of sectors. Smart contracts, which are at the heart of blockchains, incorporate AI characteristics. Such contracts are referred to as "smart" because of the underlying technology that allows contracting parties to agree on terms expressed in computer code defining machine-readable instructions for computers to follow in specific situations. Execution happens automatically if the conditions are met. Initially utilised for financial transactions, blockchain applications have since expanded to the financial, insurance, and medical sectors, as well as supply networks. Raw material acquisition by suppliers, design and fabrication by manufacturers, delivery of final products to consumers, and even post-sales logistics assistance are all part of supply chains. Many of the issues involved in managing supply chains, from the planning and coordination stages onwards, can, despite their complexity, be implemented in a smart contract on a blockchain. Manufacturing delays and limited third-party supplies of product components have raised concerns about the integrity and accountability of supply chains for food and pharmaceutical items. Other concerns include regulatory compliance in multiple jurisdictions and transportation conditions (for instance, many products must be kept in temperature-controlled environments to ensure their effectiveness). Products are handled by several providers before reaching customers in modern economic systems. Information is sent between suppliers, shippers, distributors, and retailers at every stage of the production and distribution process, and it travels more effectively when intermediaries are eliminated from the equation. The usage of blockchain technology could be a viable solution to these coordination issues. In blockchains, smart contracts allow for the rapid transmission of production data, logistical data, inventory levels, and sales data. This research investigates the legal and technical advantages and disadvantages of AI-blockchain technology in the supply chain business. It aims to uncover the applicable legal problems and barriers to the use of AI-blockchain technology in supply chains, particularly in the food industry. It also discusses the essential legal and technological issues and impediments to supply chain implementation for stakeholders, as well as methods for overcoming them before releasing the technology to clients. Because little research has been done on this topic, it is difficult for industrial stakeholders to grasp how blockchain technology could be used in their respective operations. As a result, the focus of this research will be on building advanced and complex contractual terms in supply chain smart contracts on blockchains to cover all unforeseen supply chain challenges.
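To make the "execution happens automatically if the conditions are met" idea concrete, here is a toy sketch in plain Python (not an actual blockchain or smart contract language) of a supply-chain clause that releases payment only if every logged temperature reading respected the agreed cold-chain condition. The class, threshold, and readings are hypothetical and purely illustrative.

    from dataclasses import dataclass, field

    @dataclass
    class ColdChainContract:
        """Toy stand-in for a supply-chain smart contract clause."""
        max_temp_c: float = 8.0              # agreed transport condition (assumed)
        payment_released: bool = False
        readings: list = field(default_factory=list)

        def record_temperature(self, value_c: float) -> None:
            # On a blockchain, each reading would be an immutable transaction.
            self.readings.append(value_c)

        def settle(self) -> bool:
            # Automatic settlement: release payment only if all conditions were met.
            self.payment_released = all(t <= self.max_temp_c for t in self.readings)
            return self.payment_released

    contract = ColdChainContract()
    for reading in (4.2, 5.1, 7.9):
        contract.record_temperature(reading)
    print("payment released:", contract.settle())   # True: all readings compliant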

Keywords: blockchain, supply chain, IoT, smart contract

Procedia PDF Downloads 126
96 The Influence of Thermal Radiation and Chemical Reaction on MHD Micropolar Fluid in The Presence of Heat Generation/Absorption

Authors: Binyam Teferi

Abstract:

A numerical and theoretical analysis of the mixed convection flow of a magneto-hydrodynamic micropolar fluid with a stretching capillary in the presence of thermal radiation, chemical reaction, viscous dissipation, and heat generation/absorption has been studied. The non-linear partial differential equations of momentum, angular velocity, energy, and concentration are converted into ordinary differential equations using similarity transformations, which can then be solved numerically. The dimensionless governing equations are solved using a fourth-fifth order Runge-Kutta scheme along with the shooting method. The effects of the physical parameters, viz. the micropolar parameter, unsteadiness parameter, thermal buoyancy parameter, concentration buoyancy parameter, Hartmann number, spin gradient viscosity parameter, microinertial density parameter, thermal radiation parameter, Prandtl number, Eckert number, heat generation or absorption parameter, Schmidt number, and chemical reaction parameter, on the flow variables, viz. the velocity of the micropolar fluid, microrotation, temperature, and concentration, have been analyzed and discussed graphically. MATLAB code is used for the numerical and theoretical analysis. From the simulation study, it can be concluded that an increment in the micropolar parameter, Hartmann number, unsteadiness parameter, and thermal and concentration buoyancy parameters results in a decrement of the velocity of the micropolar fluid; the microrotation of the micropolar fluid decreases with an increment in the micropolar parameter, unsteadiness parameter, microinertial density parameter, and spin gradient viscosity parameter; the temperature profile of the micropolar fluid decreases with an increment in the thermal radiation parameter, Prandtl number, micropolar parameter, unsteadiness parameter, heat absorption, and viscous dissipation parameter; and the concentration of the micropolar fluid decreases as the unsteadiness parameter, Schmidt number, and chemical reaction parameter increase. Furthermore, computational values of the local skin friction coefficient, local wall couple coefficient, local Nusselt number, and local Sherwood number for different values of the parameters have been investigated. In this paper, the following important results are obtained: an increment in the micropolar parameter and Hartmann number results in a decrement of the velocity of the micropolar fluid; microrotation decreases with an increment in the microinertial density parameter; temperature decreases with increasing values of the thermal radiation parameter and viscous dissipation parameter; and concentration decreases as the values of the Schmidt number and chemical reaction parameter increase. The coefficient of local skin friction is enhanced with an increase in the values of both the unsteadiness parameter and the micropolar parameter. Increasing values of the unsteadiness parameter and micropolar parameter result in an increment of the local couple stress. An increment in the values of the unsteadiness parameter and thermal radiation parameter results in an increment of the rate of heat transfer. As the values of the Schmidt number and unsteadiness parameter increase, the Sherwood number decreases.
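The coupled micropolar system itself is too long to reproduce here, but the numerical strategy named above (an adaptive fourth-fifth order Runge-Kutta integrator combined with the shooting method) can be sketched on a simplified boundary-layer equation. The Python snippet below is only a structural illustration of that strategy, applied to the classical Blasius problem rather than to the authors' equations, which were solved in MATLAB.

    # Shooting method with an adaptive Runge-Kutta 4(5) integrator applied to
    # f''' + f f'' = 0 with f(0) = f'(0) = 0 and f'(inf) = 1.
    import numpy as np
    from scipy.integrate import solve_ivp
    from scipy.optimize import brentq

    ETA_INF = 10.0   # numerical stand-in for "infinity"

    def rhs(eta, y):
        f, fp, fpp = y
        return [fp, fpp, -f * fpp]

    def shoot(guess_fpp0):
        """Integrate with a guessed f''(0) and return the far-field error f'(inf) - 1."""
        sol = solve_ivp(rhs, (0.0, ETA_INF), [0.0, 0.0, guess_fpp0],
                        method="RK45", rtol=1e-8, atol=1e-10)
        return sol.y[1, -1] - 1.0

    # Root-find the missing initial condition so the far-field condition is met.
    fpp0 = brentq(shoot, 0.1, 1.0)
    print("converged f''(0):", round(fpp0, 4))   # classical Blasius value, about 0.332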

Keywords: thermal radiation, chemical reaction, viscous dissipation, heat absorption/ generation, similarity transformation

Procedia PDF Downloads 127
95 Hygro-Thermal Modelling of Timber Decks

Authors: Stefania Fortino, Petr Hradil, Timo Avikainen

Abstract:

Timber bridges have an excellent environmental performance, are economical, relatively easy to build, and can have a long service life. However, the durability of these bridges is the main problem because of their exposure to outdoor climate conditions. Moisture accumulated in the wood over long periods, in combination with certain temperatures, may create conditions suitable for timber decay. In addition, moisture content variations affect the structural integrity, serviceability, and loading capacity of timber bridges. Therefore, monitoring the moisture content in wood is important for the durability of the material and also for the whole superstructure. The measurements obtained by the usual sensor-based techniques provide hygro-thermal data only at specific locations of the wood components. In this context, monitoring can be assisted by numerical modelling to obtain more information on the hygro-thermal response of the bridges. This work presents a hygro-thermal model based on a multi-phase moisture transport theory to predict the distribution of moisture content, relative humidity, and temperature in wood. Below the fibre saturation point, the multi-phase theory simulates three phenomena in cellular wood during moisture transfer, i.e., the diffusion of water vapour in the pores, the sorption of bound water, and the diffusion of bound water in the cell walls. In the multi-phase model, the two water phases are treated separately, and the coupling between them is defined through a sorption rate. Furthermore, an average between the temperature-dependent adsorption and desorption isotherms is used. In previous works by some of the authors, this approach was found to be very suitable for studying moisture transport in uncoated and coated stress-laminated timber decks. Compared to those previous works, the hygro-thermal fluxes on the external surfaces now include the influence of the absorbed solar radiation over time; consequently, the temperatures on the surfaces exposed to the sun are higher, which affects the whole hygro-thermal response of the timber component. The multi-phase model, implemented in a user subroutine of the Abaqus FEM code, provides the distribution of the moisture content, temperature, and relative humidity in a volume of the timber deck. As a case study, hygro-thermal data in wood are collected from the ongoing monitoring of the stress-laminated timber deck of the Tapiola Bridge in Finland, based on integrated humidity-temperature sensors, and the numerical results are found to be in good agreement with the measurements. The proposed model, used to assist monitoring, can contribute to reducing the maintenance costs of bridges as well as the cost of instrumentation, and to increasing safety.
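As a rough, one-dimensional illustration of the multi-phase idea described above (vapour diffusing in the pores, bound water diffusing in the cell walls, the two coupled through a sorption rate), the following explicit Python sketch uses invented coefficients and a toy linear sorption isotherm; the study itself implements the full model as an Abaqus user subroutine over a 3-D deck volume.

    # 1-D explicit sketch of two-phase moisture transport in wood: water vapour
    # (c_v) diffuses in the pores, bound water (c_b) diffuses in the cell walls,
    # and the phases exchange mass through a sorption rate. All coefficients,
    # the grid, and the linear "isotherm" are illustrative assumptions.
    import numpy as np

    nx, dx, dt, steps = 50, 2e-3, 1.0, 5000   # grid spacing (m), time step (s)
    D_v, D_b, k_sorp = 1e-6, 1e-10, 1e-5      # assumed diffusivities and sorption rate

    c_v = np.full(nx, 0.005)                  # vapour concentration (kg/m^3)
    c_b = np.full(nx, 60.0)                   # bound-water concentration (kg/m^3)
    c_v_surface = 0.010                       # wetter outdoor air at the exposed face
    c_b_eq_factor = 12000.0                   # toy isotherm: c_b_eq = factor * c_v

    def laplacian(u):
        lap = np.zeros_like(u)
        lap[1:-1] = (u[2:] - 2 * u[1:-1] + u[:-2]) / dx**2
        return lap

    for _ in range(steps):
        c_v[0] = c_v_surface                              # exposed-surface boundary condition
        sorption = k_sorp * (c_b_eq_factor * c_v - c_b)   # coupling between the phases
        c_v += dt * (D_v * laplacian(c_v) - sorption)
        c_b += dt * (D_b * laplacian(c_b) + sorption)

    print("bound water near surface vs. core:", round(c_b[1], 2), round(c_b[-2], 2))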

Keywords: moisture content, multi-phase models, solar radiation, timber decks, FEM

Procedia PDF Downloads 175
94 Evolving Credit Scoring Models using Genetic Programming and Language Integrated Query Expression Trees

Authors: Alexandru-Ion Marinescu

Abstract:

A plethora of methods exists in the scientific literature that tackle the well-established task of credit score evaluation. In its most abstract form, a credit scoring algorithm takes as input several credit applicant properties, such as age, marital status, employment status, loan duration, etc., and must output a binary response variable (i.e. “GOOD” or “BAD”) stating whether the client is susceptible to payment return delays. Data imbalance is a common occurrence among financial institution databases, with the majority being classified as “GOOD” clients (clients that respect the loan return calendar) alongside a small percentage of “BAD” clients. But it is the “BAD” clients we are interested in, since accurately predicting their behavior is crucial in preventing unwanted loss for loan providers. We add to this whole context the constraint that the algorithm must yield an actual, tractable mathematical formula, which is friendlier towards financial analysts. To this end, we have turned to genetic algorithms and genetic programming, aiming to evolve actual mathematical expressions using specially tailored mutation and crossover operators. As far as data representation is concerned, we employ a very flexible mechanism – LINQ expression trees, readily available in the C# programming language, enabling us to construct executable pieces of code at runtime. As the name implies, they model trees, with intermediate nodes being operators (addition, subtraction, multiplication, division) or mathematical functions (sin, cos, abs, round, etc.) and leaf nodes storing either constants or variables. There is a one-to-one correspondence between the client properties and the formula variables. The mutation and crossover operators work on a flattened version of the tree, obtained via a pre-order traversal. A consequence of our chosen technique is that we can identify and discard client properties which do not take part in the final score evaluation, effectively acting as a dimensionality reduction scheme. We compare ourselves with state-of-the-art approaches, such as support vector machines, Bayesian networks, and extreme learning machines, to name a few. The data sets we benchmark against amount to a total of 8, of which we mention the well-known Australian credit and German credit data sets, and the performance indicators are the following: percentage correctly classified, area under the curve, partial Gini index, H-measure, Brier score, and Kolmogorov-Smirnov statistic. Finally, we obtain encouraging results which, although placing us in the lower half of the hierarchy, drive us to further refine the algorithm.
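The authors build their expression trees as C# LINQ expression trees; as a language-neutral illustration of the same idea, the Python sketch below evolves small arithmetic trees over toy applicant data, with crossover and mutation acting on a pre-order flattening of the tree. Feature names, the toy data set, the operator set, and the fitness function are all invented for the example.

    # Toy genetic programming sketch: evolve arithmetic expression trees mapping
    # applicant features to a score; score > 0 is read as "GOOD".
    import random, operator

    OPS = {"+": operator.add, "-": operator.sub, "*": operator.mul}
    FEATURES = ["age", "loan_duration", "income"]

    def random_tree(depth=2):
        if depth == 0 or random.random() < 0.3:
            return random.choice(FEATURES + [round(random.uniform(-2, 2), 2)])
        return [random.choice(list(OPS)), random_tree(depth - 1), random_tree(depth - 1)]

    def evaluate(tree, row):
        if isinstance(tree, list):
            return OPS[tree[0]](evaluate(tree[1], row), evaluate(tree[2], row))
        return row[tree] if isinstance(tree, str) else tree

    def flatten(tree, path=()):
        # Pre-order traversal yielding (path, subtree) pairs.
        yield path, tree
        if isinstance(tree, list):
            yield from flatten(tree[1], path + (1,))
            yield from flatten(tree[2], path + (2,))

    def replace_subtree(tree, path, subtree):
        if not path:
            return subtree
        new = list(tree)
        new[path[0]] = replace_subtree(tree[path[0]], path[1:], subtree)
        return new

    def crossover(a, b):
        path_a, _ = random.choice(list(flatten(a)))
        _, subtree_b = random.choice(list(flatten(b)))
        return replace_subtree(a, path_a, subtree_b)

    def mutate(tree):
        path, _ = random.choice(list(flatten(tree)))
        return replace_subtree(tree, path, random_tree(depth=1))

    def fitness(tree, data):
        return sum((evaluate(tree, row) > 0) == label for row, label in data) / len(data)

    data = [({"age": 30, "loan_duration": 12, "income": 2.5}, True),
            ({"age": 22, "loan_duration": 48, "income": 0.8}, False),
            ({"age": 45, "loan_duration": 24, "income": 3.1}, True),
            ({"age": 19, "loan_duration": 60, "income": 0.5}, False)]

    population = [random_tree() for _ in range(30)]
    for _ in range(20):                                   # a few generations
        population.sort(key=lambda t: fitness(t, data), reverse=True)
        parents = population[:10]
        population = parents + [mutate(crossover(random.choice(parents), random.choice(parents)))
                                for _ in range(20)]
    best = max(population, key=lambda t: fitness(t, data))
    print("best accuracy:", fitness(best, data), "formula:", best)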

Keywords: expression trees, financial credit scoring, genetic algorithm, genetic programming, symbolic evolution

Procedia PDF Downloads 117
93 Evaluation of Rheological Properties, Anisotropic Shrinkage, and Heterogeneous Densification of Ceramic Materials during Liquid Phase Sintering by Numerical-Experimental Procedure

Authors: Hamed Yaghoubi, Esmaeil Salahi, Fateme Taati

Abstract:

The effective shear and bulk viscosities, as well as the dynamic viscosity, describe the rheological properties of a ceramic body during the liquid phase sintering process. The rheological parameters depend on the physical and thermomechanical characteristics of the material, such as relative density, temperature, grain size, diffusion coefficient, and activation energy. The main goal of this research is to acquire a comprehensive understanding of the response of an incompressible viscous ceramic material during the liquid phase sintering process, such as the stress-strain relations, the sintering and hydrostatic stresses, and the prediction of anisotropic shrinkage and heterogeneous densification as a function of sintering time, including the simultaneous influence of the gravity field and frictional forces. After analysis of the raw materials, a standard hard porcelain mixture was designed and prepared as the ceramic body. Three different experimental configurations were designed: midpoint deflection, sinter bending, and free sintering samples. The numerical method for the ceramic specimens during the liquid phase sintering process is implemented in a CREEP user subroutine in ABAQUS. The numerical-experimental procedure shows the anisotropic behavior, the clear difference in spatial displacement along the three directions, and the incompressibility of the ceramic samples during the sintering process. An anisotropic shrinkage factor has been proposed to investigate the shrinkage anisotropy. It has been shown that the shrinkage along the axis normal to the casting direction is about 1.5 times larger than that along the casting direction, and that the gravitational force acting during pyroplastic deformation intensifies the shrinkage anisotropy more than in the free sintering sample. The lowest and greatest equivalent creep strains occur at the intermediate zone and around the central line of the midpoint distorted sample, respectively. In the sinter bending test sample, the equivalent creep strain approaches its maximum near the contact area with the refractory support. The inhomogeneity in the von Mises, pressure, and principal stresses intensifies the relative density non-uniformity in all samples except the free sintering one. The symmetrical distribution of stress around the center of the free sintering sample hinders the pyroplastic deformations. The densification results confirmed that the effective bulk viscosity is well defined by the relative density values. The stress analysis confirmed that the sintering stress is greater than the hydrostatic stress from the start to the end of the sintering time, so from both the theoretical and the experimental points of view the sintering process proceeds to completion.
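Since the abstract reports shrinkage about 1.5 times larger along the axis normal to casting than along the casting direction, a very small sketch of how such an anisotropic shrinkage factor can be computed from before/after sample dimensions is given below; the dimensions are placeholders, not the measured porcelain data.

    # Anisotropic shrinkage factor from green and sintered dimensions (placeholders).
    initial = {"casting": 100.0, "transverse": 100.0, "normal": 100.0}   # mm, green body
    final   = {"casting":  92.0, "transverse":  91.5, "normal":  88.0}   # mm, sintered

    shrinkage = {axis: (initial[axis] - final[axis]) / initial[axis] for axis in initial}
    anisotropy_factor = shrinkage["normal"] / shrinkage["casting"]

    for axis, s in shrinkage.items():
        print(f"linear shrinkage along {axis}: {100 * s:.1f}%")
    print(f"anisotropic shrinkage factor (normal/casting): {anisotropy_factor:.2f}")
    # With these placeholder numbers the factor is 1.5, the same order as reported above.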

Keywords: anisotropic shrinkage, ceramic material, liquid phase sintering process, rheological properties, numerical-experimental procedure

Procedia PDF Downloads 341
92 Seismic Retrofits – A Catalyst for Minimizing the Building Sector’s Carbon Footprint

Authors: Juliane Spaak

Abstract:

A life-cycle assessment was performed, looking at seven retrofit projects in New Zealand using LCAQuickV3.5. The study found that retrofits save up to 80% of embodied carbon emissions for the structural elements compared to a new building. In other words, it is only a 20% carbon investment to transform and extend a building’s life. In addition, the systems were evaluated by looking at environmental impacts over the design life of these buildings and at resilience, using FEMA P58 and the PACT software. With the increasing interest in Zero Carbon targets, significant changes in the building and construction sector are required. Emissions from buildings arise from both embodied carbon and operations. Given the significant advancements in building energy technology, the focus is moving more toward embodied carbon, a large portion of which is associated with the structure. Since older buildings make up most of the real estate stock of our cities around the world, their reuse through structural retrofit and wider refurbishment plays an important role in extending the life of a building’s embodied carbon. New Zealand’s building owners and engineers have learned a lot about seismic issues following a decade of significant earthquakes. Recent earthquakes have brought to light the necessity to move away from constructing code-minimum structures that are designed for life safety but are frequently ‘disposable’ after a moderate earthquake event, especially in relation to a structure’s ability to minimize damage. This means weaker buildings sit as ‘carbon liabilities’, with considerably more carbon likely to be expended remediating damage after a shake. Renovating and retrofitting older assets plays a big part in reducing the carbon profile of the buildings sector, as breathing new life into a building’s structure is vastly more sustainable than the highest quality ‘green’ new builds, which are inherently more carbon-intensive. The demolition of viable older buildings (often including heritage buildings) is increasingly at odds with society’s desire for a lower carbon economy. Bringing seismic resilience and carbon best practice together in decision-making can open the door to commercially attractive outcomes, with retrofits that include structural and sustainability upgrades transforming the asset’s revenue generation. Across the global real estate market, tenants are increasingly demanding that the buildings they occupy be resilient and aligned with their own climate targets. The relationship between seismic performance and ‘sustainable design’ has yet to fully mature, yet in a wider context it is of profound consequence. A whole-of-life carbon perspective on a building means designing for the likely natural hazards within the asset’s expected lifespan, be that earthquake, storm damage, bushfires, fires, and so on, with financial mitigation (e.g., insurance) being part, but not all, of the picture.

Keywords: retrofit, sustainability, earthquake, reuse, carbon, resilient

Procedia PDF Downloads 73
91 A Demonstration of How to Employ and Interpret Binary IRT Models Using the New IRT Procedure in SAS 9.4

Authors: Ryan A. Black, Stacey A. McCaffrey

Abstract:

Over the past few decades, great strides have been made towards improving the science of measuring psychological constructs. Item Response Theory (IRT) has been the foundation upon which statistical models have been derived to increase both precision and accuracy in psychological measurement. These models are now being used widely to develop and refine tests intended to measure an individual's level of academic achievement, aptitude, and intelligence. Recently, the field of clinical psychology has adopted IRT models to measure psychopathological phenomena such as depression, anxiety, and addiction. Because advances in IRT measurement models are being made so rapidly across various fields, it has become quite challenging for psychologists and other behavioral scientists to keep abreast of the most recent developments, much less learn how to employ these models and decide which are the most appropriate to use in their line of work. In the same vein, IRT measurement models vary greatly in complexity in several interrelated ways, including but not limited to the number of item-specific parameters estimated in a given model, the function which links the expected response and the predictor, the response option format, and dimensionality. As a result, inferior methods (a.k.a. Classical Test Theory methods) continue to be employed in efforts to measure psychological constructs, despite evidence showing that IRT methods yield more precise and accurate measurement. To increase the use of IRT methods, this study endeavors to provide a comprehensive overview of binary IRT models, that is, measurement models employed on test data consisting of binary response options (e.g., correct/incorrect, true/false, agree/disagree). Specifically, this study will cover binary IRT models from the most basic, the 1-parameter logistic (1-PL) model dating back over 50 years, up to the most recent and complex 4-parameter logistic (4-PL) model. Binary IRT models will be defined mathematically and the interpretation of each parameter will be provided. Next, all four binary IRT models will be employed on two sets of data: 1. simulated data of N=500,000 subjects who responded to four dichotomous items, and 2. a pilot analysis of real-world data collected from a sample of approximately 770 subjects who responded to four self-report dichotomous items pertaining to emotional consequences of alcohol use. The real-world data were based on responses collected on items administered to subjects as part of a scale-development study (NIDA Grant No. R44 DA023322). The IRT analyses conducted on both the simulated data and the real-world pilot data will provide a clear demonstration of how to construct, evaluate, and compare binary IRT measurement models. All analyses will be performed using the new IRT procedure in SAS 9.4. SAS code to generate the simulated data and analyses will be available upon request to allow for replication of results.
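For readers who want the item response function in front of them, the following Python sketch (the study itself uses the IRT procedure in SAS 9.4) writes out the 4-PL model, recovers the 1-PL as the special case a = 1, c = 0, d = 1, and simulates binary responses for four items with made-up parameters.

    # 4-PL item response function:
    #   P(X = 1 | theta) = c + (d - c) / (1 + exp(-a * (theta - b)))
    # The 1-PL (Rasch) model is the special case a = 1, c = 0, d = 1.
    import numpy as np

    rng = np.random.default_rng(42)

    def p_correct(theta, a=1.0, b=0.0, c=0.0, d=1.0):
        """4-PL probability of a positive (correct/endorsed) response."""
        return c + (d - c) / (1.0 + np.exp(-a * (theta - b)))

    # Simulate binary responses of 500 subjects to four items with hypothetical
    # parameters (the abstract's simulated data set used N = 500,000).
    theta = rng.standard_normal(500)                       # latent trait levels
    items = [dict(a=1.2, b=-1.0), dict(a=0.8, b=0.0),
             dict(a=1.5, b=0.5),  dict(a=1.0, b=1.2)]
    responses = np.column_stack([rng.random(500) < p_correct(theta, **it) for it in items])

    print("observed proportion positive per item:", responses.mean(axis=0).round(2))
    print("1-PL probability at theta = 0 for b = 0.5:", round(float(p_correct(0.0, b=0.5)), 3))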

Keywords: instrument development, item response theory, latent trait theory, psychometrics

Procedia PDF Downloads 356
90 Hyperelastic Constitutive Modelling of the Male Pelvic System to Understand the Prostate Motion, Deformation and Neoplasms Location with the Influence of MRI-TRUS Fusion Biopsy

Authors: Muhammad Qasim, Dolors Puigjaner, Josep Maria López, Joan Herrero, Carme Olivé, Gerard Fortuny

Abstract:

Computational modeling of the human pelvis using the finite element (FE) method has become extremely important for understanding the mechanics of prostate motion and deformation when a transrectal ultrasound (TRUS) guided biopsy is performed. The number of reliable and validated hyperelastic constitutive FE models of the male pelvic region is limited, and the available models do not precisely describe the anatomical behavior of the pelvic organs, mainly of the prostate and the location of its neoplasms. The motion and deformation of the prostate during TRUS-guided biopsy make it difficult to know the location of potential lesions in advance. When using this procedure, practitioners can only provide rough estimates of the lesion locations. Consequently, multiple biopsy samples are required to target one single lesion. In this study, the whole pelvis model (comprising the rectum, bladder, pelvic muscles, prostate transitional zone (TZ), and peripheral zone (PZ)) is used for the simulations. An isotropic hyperelastic approach (the Signorini model) was used for all the soft tissues except the vesical muscles. The vesical muscles are assumed to have a linear elastic behavior due to the lack of experimental data to determine the constants involved in hyperelastic models. The tissue and organ geometry of the 3D meshes is taken from the existing literature. The biomechanical parameters were then obtained from different testing techniques described in the literature. The acquired parameter values for uniaxial stress/strain data are used in the Signorini model to study the anatomical behavior of the pelvis model. Five mesh nodes representing small prostate lesions are selected prior to biopsy, and each lesion's final position is determined when a TRUS probe force of 30 N is applied to the inner rectum wall. The open-source software Code_Aster is used for the numerical simulations. Moreover, the overall effects of pelvic organ deformation are demonstrated when a TRUS-guided biopsy is performed. The prostate deformation and neoplasm displacement showed that the material properties assigned to the organs parametrically altered the resulting lesion migration. As a result, the distance traveled by these lesions ranged between 3.77 and 9.42 mm. The lesion displacement and organ deformation are compared and analyzed with respect to our previous study, in which linear elastic properties were used for all pelvic organs. Furthermore, axial and sagittal slices taken from Magnetic Resonance Imaging (MRI) and TRUS images are also compared visually with our preliminary study.
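The lesion migration figures above come from comparing node positions before and after the simulated probe load; a minimal sketch of that post-processing step is shown below with invented coordinates, whereas in the study the deformed positions are taken from the Code_Aster solution.

    # Displacement of selected lesion nodes between the undeformed configuration
    # and the configuration under the 30 N TRUS probe load (placeholder values).
    import numpy as np

    before = np.array([[10.0, 22.0, 5.0],     # (x, y, z) in mm, hypothetical nodes
                       [12.5, 18.0, 7.5],
                       [ 8.0, 25.0, 4.0],
                       [15.0, 20.0, 6.0],
                       [11.0, 24.0, 9.0]])
    after = before + np.array([[2.5, -2.0, 1.5],   # invented displacement field
                               [4.0, -3.5, 2.0],
                               [1.8, -2.2, 2.5],
                               [5.5, -4.5, 3.0],
                               [3.0, -6.0, 4.0]])

    displacement = np.linalg.norm(after - before, axis=1)
    print("lesion displacements (mm):", displacement.round(2))
    print("range: %.2f to %.2f mm" % (displacement.min(), displacement.max()))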

Keywords: code-aster, magnetic resonance imaging, neoplasms, transrectal ultrasound, TRUS-guided biopsy

Procedia PDF Downloads 87
89 Sentiment Analysis on University Students’ Evaluation of Teaching and Their Emotional Engagement

Authors: Elisa Santana-Monagas, Juan L. Núñez, Jaime León, Samuel Falcón, Celia Fernández, Rocío P. Solís

Abstract:

Teaching practices have been widely studied in relation to students' outcomes, positioning themselves as one of their strongest catalysts and influencing students' emotional experiences. In the higher education context, teachers become even more crucial, as many students base their decisions on which courses to enroll in on opinions and ratings of teachers from other students. Unfortunately, sometimes universities do not provide the personal, social, and academic stimulation students demand to be actively engaged. To evaluate their teachers, universities often rely on students' evaluations of teaching (SET) collected via Likert-scale surveys. Despite its usefulness, this method has been questioned in terms of validity and reliability. Alternatively, researchers can rely on qualitative answers to open-ended questions. However, the unstructured nature of the answers and the large amount of information obtained require an overwhelming amount of work. The present work presents an alternative approach to analysing such data: sentiment analysis (SA). To the best of our knowledge, no previous research has included results from SA in an explanatory model to test how students' sentiments affect their emotional engagement in class. The sample of the present study included a total of 225 university students (mean age = 26.16, SD = 7.4, 78.7% women) from the Educational Sciences faculty of a public university in Spain. Data collection took place during the academic year 2021-2022. Students accessed an online questionnaire using a QR code. They were asked to answer the following open-ended question: "If you had to explain to a peer who doesn't know your teacher how he or she communicates in class, what would you tell them?". Sentiment analysis was performed using Microsoft's pre-trained model. The reliability of the measure was estimated by comparing the tool's coding with that of one of the researchers, who coded all answers independently. Cohen's kappa and the average pairwise percent agreement were estimated with ReCal2. Cohen's kappa was .68, and the agreement reached 90.8%, both considered satisfactory. To test the hypothesised relations between SA and students' emotional engagement, a structural equation model (SEM) was estimated. The results demonstrated a good fit to the data: RMSEA = .04, SRMR = .03, TLI = .99, CFI = .99. Specifically, the results showed that students' sentiment regarding their teachers' teaching positively predicted their emotional engagement (β = .16 [.02, .30]). In other words, when students' opinion of their instructors' teaching practices is positive, students are more likely to engage emotionally in the subject. Altogether, the results show a promising future for sentiment analysis techniques in the field of education and suggest the usefulness of this tool when evaluating relations between teaching practices and student outcomes.
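As a small illustration of the reliability check described above, the Python sketch below computes Cohen's kappa and the percent agreement between an automatic sentiment tool and a human coder on a handful of invented labels; the study itself used Microsoft's pre-trained model and ReCal2, not this code.

    # Inter-rater reliability between tool-assigned and human-assigned sentiment
    # labels (labels below are invented for illustration).
    from sklearn.metrics import cohen_kappa_score

    tool_labels  = ["pos", "pos", "neg", "neu", "pos", "neg", "pos", "neu", "pos", "pos"]
    human_labels = ["pos", "pos", "neg", "pos", "pos", "neg", "pos", "neu", "pos", "neg"]

    kappa = cohen_kappa_score(tool_labels, human_labels)
    agreement = sum(t == h for t, h in zip(tool_labels, human_labels)) / len(tool_labels)

    print(f"Cohen's kappa: {kappa:.2f}")
    print(f"percent agreement: {100 * agreement:.1f}%")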

Keywords: sentiment analysis, students' evaluation of teaching, structural-equation modelling, emotional engagement

Procedia PDF Downloads 84
88 Evaluation of the Boiling Liquid Expanding Vapor Explosion Thermal Effects in Hassi R'Mel Gas Processing Plant Using Fire Dynamics Simulator

Authors: Brady Manescau, Ilyas Sellami, Khaled Chetehouna, Charles De Izarra, Rachid Nait-Said, Fati Zidani

Abstract:

During a fire in an oil and gas refinery, several thermal accidents can occur and cause serious damage to people and the environment. Among these accidents, the BLEVE (Boiling Liquid Expanding Vapor Explosion) is the most frequently observed and remains a major concern for risk decision-makers. It corresponds to a violent vaporization of explosive nature following the rupture of a vessel containing a liquid at a temperature significantly higher than its normal boiling point at atmospheric pressure. Its effects on the environment generally appear in three ways: blast overpressure, radiation from the fireball if the liquid involved is flammable, and fragment hazards. In order to estimate the potential damage that would be caused by such an explosion, risk decision-makers often use quantitative risk analysis (QRA). This analysis is a rigorous and advanced approach that requires reliable data in order to obtain a good estimate and control of risks. However, in most cases, the data used in QRA are obtained from empirical correlations. These empirical correlations generally overestimate BLEVE effects because they are based on simplifications and do not take into account real parameters such as the geometry effect. Considering that these risk analyses are based on an assessment of BLEVE effects on human life and plant equipment, more precise and reliable data should be provided. From this point of view, CFD modeling of BLEVE effects appears as a solution to the limitations of the empirical laws. In this context, the main objective is to develop a numerical tool to predict BLEVE thermal effects using the CFD code FDS version 6. Simulations are carried out with a mesh size of 1 m. The fireball source is modeled as a vertical release of hot fuel over a short time. The modeling of the fireball dynamics is based on single-step combustion using an EDC model coupled with the default LES turbulence model. Fireball characteristics (diameter, height, heat flux, and lifetime) taken from the large-scale BAM experiment are used to demonstrate the ability of FDS to simulate the various stages of the BLEVE phenomenon from ignition up to total burnout. The influence of release parameters such as the injection rate and the radiative fraction on the fireball heat flux is also presented. Predictions are very encouraging and show good agreement with the BAM experimental data. In addition, a numerical study is carried out on an operational propane accumulator in an Algerian gas processing plant of the SONATRACH company located in the Hassi R'Mel gas field (the largest gas field in Algeria).
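For context on the empirical correlations that QRA typically relies on (and that the CFD approach above is meant to improve on), the sketch below evaluates the commonly cited fireball scaling laws, in which the maximum diameter and duration grow with the released fuel mass (e.g., D_max around 5.8 M^(1/3)). The constants are the usual textbook values quoted for illustration only; they are not results from this work.

    # Classical empirical BLEVE fireball correlations (textbook constants,
    # quoted here only to illustrate what the CFD model is compared against).
    def fireball_diameter_m(mass_kg: float) -> float:
        """Approximate maximum fireball diameter (m) for a fuel mass in kg."""
        return 5.8 * mass_kg ** (1.0 / 3.0)

    def fireball_duration_s(mass_kg: float) -> float:
        """Approximate fireball duration (s); a different exponent is often used above ~30 t."""
        if mass_kg < 30_000:
            return 0.45 * mass_kg ** (1.0 / 3.0)
        return 2.6 * mass_kg ** (1.0 / 6.0)

    if __name__ == "__main__":
        for mass in (1_000, 10_000, 100_000):      # kg of propane, illustrative
            print(f"M = {mass:>7} kg -> D = {fireball_diameter_m(mass):6.1f} m, "
                  f"t = {fireball_duration_s(mass):5.1f} s")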

Keywords: BLEVE effects, CFD, FDS, fireball, LES, QRA

Procedia PDF Downloads 186
87 Effects of Soil Neutron Irradiation in Soil Carbon Neutron Gamma Analysis

Authors: Aleksandr Kavetskiy, Galina Yakubova, Nikolay Sargsyan, Stephen A. Prior, H. Allen Torbert

Abstract:

The carbon sequestration question of modern times requires the development of an in-situ method of measuring soil carbon over large landmasses. Traditional chemical analytical methods used to evaluate large land areas require extensive soil sampling prior to processing for laboratory analysis; collectively, this is labor-intensive and time-consuming. An alternative is to apply nuclear physics analysis, primarily in the form of pulsed fast-thermal neutron-gamma soil carbon analysis. This method is based on measuring the gamma-ray response that appears upon neutron irradiation of soil. The specific gamma line with an energy of 4.438 MeV appearing upon neutron irradiation can be attributed to soil carbon nuclei. Based on measuring the intensity of this gamma line, assessments of soil carbon concentration can be made. This can be done directly in the field using a specially developed pulsed fast-thermal neutron-gamma system (PFTNA system). This system conducts in-situ analysis in a scanning mode coupled with GPS, which provides the soil carbon concentration and its distribution over large fields. The system has radiation shielding to minimize the dose rate (within radiation safety guidelines) for safe operator usage. Questions concerning the effect of neutron irradiation on soil health are addressed here. Information regarding the absorbed neutron and gamma dose received by the soil and its distribution with depth is discussed in this study. This information was generated from Monte Carlo simulations (MCNP6.2 code) of neutron and gamma propagation in soil. The resulting data were used for the analysis of possible induced irradiation effects. The physical, chemical, and biological effects of neutron soil irradiation were considered. From a physical standpoint, we considered the induction of new isotopes by the neutrons produced by the PFTNA system and estimated the possibility of an increase in the post-irradiation gamma background by comparison with the natural background. An insignificant increase in gamma background appeared immediately after irradiation but returned to the original values after several minutes due to the decay of short-lived new isotopes. From a chemical standpoint, possible radiolysis of the water present in the soil was considered. Based on simulations of water radiolysis, we concluded that the gamma dose rate used cannot produce radiolysis products at notable rates. Possible effects of neutron irradiation (by the PFTNA system) on soil biota were also assessed experimentally. No notable changes were noted at the taxonomic level, nor was functional soil diversity affected. Our assessment suggests that the use of a PFTNA system with a neutron flux of 1e7 n/s for soil carbon analysis does not notably affect soil properties or soil health.
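Two of the steps described above lend themselves to a short numerical illustration: converting the net intensity of the 4.438 MeV carbon line into a carbon estimate through a calibration, and the decay of short-lived activation products back toward background within minutes. The Python sketch below does both with placeholder coefficients; the slope, intercept, half-life, and count rates are assumptions, not values from the study.

    import math

    # (1) Hypothetical linear calibration: carbon % = slope * net_counts + intercept
    slope, intercept = 0.004, 0.1
    net_peak_counts = 450.0            # net counts in the 4.438 MeV line (placeholder)
    carbon_percent = slope * net_peak_counts + intercept
    print(f"estimated soil carbon: {carbon_percent:.2f} weight %")

    # (2) Excess activity of a short-lived activation product (assumed 2 min half-life)
    half_life_s = 120.0
    initial_excess_activity = 50.0     # counts/s above natural background (placeholder)
    for minutes in (0, 2, 5, 10):
        activity = initial_excess_activity * math.exp(-math.log(2) * minutes * 60 / half_life_s)
        print(f"t = {minutes:2d} min: excess activity = {activity:5.1f} counts/s")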

Keywords: carbon sequestration, neutron gamma analysis, radiation effect on soil, Monte-Carlo simulation

Procedia PDF Downloads 142
86 Performance Validation of Model Predictive Control for Electrical Power Converters of a Grid Integrated Oscillating Water Column

Authors: G. Rajapakse, S. Jayasinghe, A. Fleming

Abstract:

This paper aims to experimentally validate the control strategy used for the electrical power converters of a grid-integrated oscillating water column (OWC) wave energy converter (WEC). The output power of this particular OWC's unidirectional air turbine-generator comes in discrete large power pulses. Therefore, the system requires power conditioning prior to integration with the grid. This is achieved by using a back-to-back power converter with an energy storage system. A Li-ion battery energy storage is connected to the dc-link of the back-to-back converter using a bidirectional dc-dc converter. This arrangement decouples the system dynamics and mitigates the mismatch between supply and demand powers. All three electrical power converters in the arrangement are controlled using a finite control set-model predictive control (FCS-MPC) strategy. The rectifier controller regulates the speed of the turbine at a set rotational speed to keep the air turbine within a desirable speed range under varying wave conditions. The inverter controller maintains the output power to the grid in adherence to grid codes. The bidirectional dc-dc converter controller keeps the dc-link voltage at its reference value. The software modeling of the OWC system and the FCS-MPC is carried out in MATLAB/Simulink using actual data and parameters obtained from a prototype unidirectional air-turbine OWC developed at the Australian Maritime College (AMC). The hardware development and experimental validations are being carried out at the AMC Electronics laboratory. The designed FCS-MPC algorithms for the power converters are separately coded in Code Composer Studio V8 and downloaded into separate Texas Instruments TIVA C Series EK-TM4C123GXL LaunchPad evaluation boards with TM4C123GH6PMI microcontrollers (real-time control processors). Each microcontroller is used to drive a 2 kW 3-phase STEVAL-IHM028V2 evaluation board with an intelligent power module (STGIPS20C60). The power module consists of a 3-phase inverter bridge with 600 V insulated gate bipolar transistors. A Delta standard (ASDA-B2 series) servo drive/motor coupled to a 2 kW permanent magnet synchronous generator serves as the turbine-generator. This lab-scale setup is used to obtain experimental results. The validation of the FCS-MPC is done by comparing these experimental results to those obtained from the MATLAB/Simulink model in similar scenarios. The results show that under the proposed control scheme, the regulated variables follow their references accurately. This research confirms that FCS-MPC fits well into the power converter control of the OWC-WEC system with a Li-ion battery energy storage.
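The core of FCS-MPC is easy to sketch: enumerate the converter's finite set of switching states, predict the next-step current for each with a discretised load model, and apply the state with the lowest cost. The Python snippet below does this for a generic two-level three-phase inverter feeding an RL load; the parameters, reference, and cost function are illustrative stand-ins, not the OWC rig values or the Code Composer Studio implementation.

    # One control step of finite control set MPC for a two-level three-phase
    # inverter driving an RL load (illustrative parameters).
    import itertools
    import numpy as np

    Vdc, R, L, Ts = 400.0, 0.5, 10e-3, 50e-6           # dc link (V), load, sampling time (s)
    states = list(itertools.product((0, 1), repeat=3)) # the 8 switching combinations

    def clarke(abc):
        """abc -> alpha-beta transformation."""
        a, b, c = abc
        return np.array([(2 * a - b - c) / 3.0, (b - c) / np.sqrt(3.0)])

    def predict(i_ab, v_ab):
        """One-step Euler prediction of the load current in the alpha-beta frame."""
        return i_ab + Ts / L * (v_ab - R * i_ab)

    i_ab = np.zeros(2)                                 # measured current (placeholder)
    i_ref = np.array([10.0, 0.0])                      # current reference (A)

    best_state, best_cost = None, np.inf
    for s in states:
        v_abc = Vdc * (np.array(s) - np.mean(s))       # phase voltages for this state
        i_pred = predict(i_ab, clarke(v_abc))
        cost = float(np.sum((i_ref - i_pred) ** 2))    # current tracking error
        if cost < best_cost:
            best_state, best_cost = s, cost

    print("switching state to apply:", best_state, "with predicted cost", round(best_cost, 3))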

Keywords: dc-dc bidirectional converter, finite control set-model predictive control, Li-ion battery energy storage, oscillating water column, wave energy converter

Procedia PDF Downloads 113