115 Nuclear Materials and Nuclear Security in India: A Brief Overview
Authors: Debalina Ghoshal
Abstract:
Nuclear security is the ‘prevention and detection of, and response to unauthorised removal, sabotage, unauthorised access, illegal transfer or other malicious acts involving nuclear or radiological material or their associated facilities.’ Ever since the end of the Cold War, nuclear materials security has remained a concern for global security, and with the increase in terrorist attacks, not just in India, the security of nuclear materials remains a priority. India has therefore made continued efforts to tighten security around its nuclear materials to prevent nuclear theft and radiological terrorism. Nuclear security is different from nuclear safety. Physical security is also a serious concern, and India has been careful about the physical security of its nuclear materials. This is all the more important since India is expanding its nuclear power capability to generate electricity for economic development. As India targets 60,000 MW of electricity production by 2030, it has a range of reactors to help it achieve this goal. These include indigenous Pressurised Heavy Water Reactors, now standardized at 700 MW per reactor, Light Water Reactors, and indigenous Fast Breeder Reactors that can generate more fuel for the future and enable the country to utilise its abundant thorium resources. Nuclear materials security can be enhanced in two important ways. One is through proliferation-resistant technologies and diplomatic non-proliferation initiatives. The other is by developing technical means to prevent any leakage of nuclear materials into the hands of asymmetric organisations. New Delhi has already implemented IAEA Safeguards on its civilian nuclear installations. Moreover, India has ratified the IAEA Additional Protocol in order to enhance the transparency of its nuclear materials and strengthen nuclear security. India is a party to the IAEA conventions on nuclear safety and security, in particular the 1980 Convention on the Physical Protection of Nuclear Material and its 2005 amendment, and the 2006 Code of Conduct on the Safety and Security of Radioactive Sources, which enable the country to meet the highest international standards of nuclear and radiological safety and security. India's nuclear security approach is driven by five key components: Governance, Nuclear Security Practice and Culture, Institutions, Technology, and International Cooperation. However, there is still scope for further improvements to strengthen nuclear materials and nuclear security. As the NTI Report notes, ‘India’s improvement reflects its first contribution to the IAEA Nuclear Security Fund etc. In the future, India’s nuclear materials security conditions could be further improved by strengthening its laws and regulations for security and control of materials, particularly for control and accounting of materials, mitigating the insider threat, and for the physical security of materials during transport. India’s nuclear materials security conditions also remain adversely affected due to its continued increase in its quantities of nuclear material, and high levels of corruption among public officials.’ This paper briefly studies the progress made by India in nuclear and nuclear materials security and the steps ahead for India to strengthen it further.
Keywords: India, nuclear security, nuclear materials, non-proliferation
Procedia PDF Downloads 351
114 Generation of Roof Design Spectra Directly from Uniform Hazard Spectra
Authors: Amin Asgarian, Ghyslaine McClure
Abstract:
Proper seismic evaluation of Non-Structural Components (NSCs) mandates an accurate estimation of floor seismic demands (i.e. acceleration and displacement demands). Most current international codes incorporate empirical equations to calculate the equivalent static seismic force for which NSCs and their anchorage systems must be designed. These equations, in general, are functions of the component mass and the peak seismic acceleration to which NSCs are subjected during the earthquake. However, recent studies have shown that these recommendations suffer from several shortcomings, such as neglecting the higher-mode effect, the tuning effect, and the NSC damping effect, which cause underestimation of the component seismic acceleration demand. This work aims to circumvent the aforementioned shortcomings of code provisions, as well as to improve on them, by proposing a simplified, practical, and yet accurate approach to generate acceleration Floor Design Spectra (FDS) directly from the corresponding Uniform Hazard Spectra (UHS) (i.e. design spectra for structural components). The study uses a database of 27 Reinforced Concrete (RC) buildings in which Ambient Vibration Measurements (AVM) have been conducted. The database comprises 12 low-rise, 10 medium-rise, and 5 high-rise buildings, all located in Montréal, Canada and designated as post-disaster buildings or emergency shelters. The buildings are subjected to a set of 20 compatible seismic records, and Floor Response Spectra (FRS) in terms of pseudo-acceleration are derived using the proposed approach for every floor of each building in both horizontal directions, considering 4 different damping ratios of NSCs (i.e. 2, 5, 10, and 20% viscous damping). Several parameters affecting NSC response are evaluated statistically. These parameters comprise the NSC damping ratio, the tuning of the NSC natural period with one of the natural periods of the supporting structure, the higher modes of the supporting structure, and the location of the NSC. The entire spectral range is divided into three distinct segments, namely the short-period, fundamental-period, and long-period regions. The derived roof floor response spectra for NSCs with 5% damping are compared with the 5% damped UHS, and procedures are proposed to generate roof FDS for NSCs with 5% damping directly from the 5% damped UHS in each spectral region. The generated FDS is a powerful, practical, and accurate tool for the seismic design and assessment of acceleration-sensitive NSCs, particularly in existing post-disaster buildings which have to remain functional even after the earthquake and cannot tolerate any damage to NSCs.
Keywords: earthquake engineering, operational and functional components (OFCs), operational modal analysis (OMA), seismic assessment and design
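The FRS referred to in this abstract are obtained by computing the peak response of damped single-degree-of-freedom oscillators to a floor motion. A minimal sketch of that standard computation is given below (this is not the authors' code; the Newmark average-acceleration scheme is one common choice, and the record `ag` and the period grid are illustrative):

```python
import numpy as np

def pseudo_accel_spectrum(ag, dt, periods, zeta=0.05):
    """Pseudo-acceleration spectrum of an acceleration record ag [m/s^2],
    via Newmark average-acceleration integration of a unit-mass SDOF
    oscillator for each period in `periods` [s]."""
    sa = np.empty(len(periods))
    for i, T in enumerate(periods):
        wn = 2.0 * np.pi / T
        c, k = 2.0 * zeta * wn, wn ** 2               # unit mass
        keff = k + 2.0 * c / dt + 4.0 / dt ** 2       # effective stiffness
        u = v = 0.0
        a = -ag[0]                                    # initial relative accel.
        umax = 0.0
        for ag1 in ag[1:]:
            peff = -ag1 + (4.0 / dt ** 2) * u + (4.0 / dt) * v + a \
                   + c * (2.0 / dt * u + v)
            u1 = peff / keff
            v1 = 2.0 * (u1 - u) / dt - v
            a = -ag1 - c * v1 - k * u1
            u, v = u1, v1
            umax = max(umax, abs(u))
        sa[i] = wn ** 2 * umax                        # PSa = wn^2 * Sd
    return sa

# illustrative use: spectrum of a synthetic floor motion
t = np.arange(0, 20, 0.01)
ag = np.sin(2 * np.pi * 2.0 * t) * np.exp(-0.1 * t)  # placeholder record
frs = pseudo_accel_spectrum(ag, 0.01, np.linspace(0.05, 3.0, 60))
```

Running this for every floor acceleration history and damping ratio yields the FRS families that the abstract compares against the UHS.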
Procedia PDF Downloads 235
113 Artificial Intelligence and Robotics in the Eye of Private Law with Special Regards to Intellectual Property and Liability Issues
Authors: Barna Arnold Keserű
Abstract:
In the last few years (what many scholars call the big data era), artificial intelligence (hereinafter AI) has been getting more and more attention from the public and from the different branches of science as well. What was previously mere science fiction is now starting to become reality. AI and robotics often walk hand in hand, which changes not only business and industrial life but also has a serious impact on the legal system. The author's main research focuses on these impacts in the field of private law, with special regard to liability and intellectual property issues. Many questions arise in these areas in connection with AI and robotics, where the boundaries are not sufficiently clear and different needs are articulated by the different stakeholders. Recognizing the urgent need for reflection, the Committee on Legal Affairs of the European Parliament adopted a Motion for a European Parliament Resolution, A8-0005/2017 (of January 27th, 2017), in order to make recommendations to the Commission on civil law rules on robotics and AI. This document identifies some crucial uses of AI and/or robotics, e.g. in the field of autonomous vehicles, the replacement of human jobs in industry, or smart applications and machines. It aims to give recommendations for the safe and beneficial use of AI and robotics. However – as the document says – there are no legal provisions that specifically apply to robotics or AI in IP law, but existing legal regimes and doctrines can be readily applied to robotics, although some aspects appear to call for specific consideration; the resolution therefore calls on the Commission to support a horizontal and technologically neutral approach to intellectual property applicable to the various sectors in which robotics could be employed. AI can generate content worth copyright protection, but the question arises: who is the author and the owner of the copyright? The AI itself cannot be deemed the author, because that would mean it is legally equal to human persons. But there is the programmer who created the basic code of the AI, or the undertaking that sells the AI as a product, or the user who gives the inputs to the AI in order to create something new. Or AI-generated content is so far removed from humans that there is no human author, so it belongs to the public domain. The same questions can be asked in connection with patents. The research aims to answer these questions within the current legal framework and tries to illuminate future possibilities for adapting these frameworks to socio-economic needs. In this respect, proper license agreements in the multilevel chain from the programmer to the end-user become very important, because AI is intellectual property in itself that creates further intellectual property. This can collide with data-protection and property rules as well. The problems are similar in the field of liability. Different existing forms of liability can be applied in cases where AI or AI-led robotics cause damage, but it is unclear whether the result complies with economic and developmental interests.
Keywords: artificial intelligence, intellectual property, liability, robotics
Procedia PDF Downloads 203
112 Nonlinear Response of Tall Reinforced Concrete Shear Wall Buildings under Wind Loads
Authors: Mahtab Abdollahi Sarvi, Siamak Epackachi, Ali Imanpour
Abstract:
Reinforced concrete shear walls are commonly used as the lateral load-resisting system of mid- to high-rise office or residential buildings around the world. The design of such systems is often governed by wind rather than seismic effects, in particular in low-to-moderate seismic regions. The current design philosophy in the majority of building codes requires elastic response of lateral load-resisting systems, including reinforced concrete shear walls, when subjected to the rare design wind load, resulting in significantly large wall sections needed to meet strength requirements and drift limits. The latter can strongly influence the design in upper stories due to the stringent drift limits specified by building codes, leading to substantial added costs in the construction of the wall. However, such walls may offer limited-to-moderate over-strength and ductility from their large reserve capacity, provided that they are designed and detailed to appropriately develop that over-strength and ductility under extreme wind loads. Exploiting this would significantly contribute to reducing construction time and costs while maintaining structural integrity under gravity loads and under frequently occurring and less frequent wind events. This paper aims to investigate the over-strength and ductility capacity of several hypothetical office buildings located in Edmonton, Canada, with a glance at earthquake design philosophy. The selected models are 10- to 25-story buildings with three types of reinforced concrete shear wall configurations: rectangular, barbell, and flanged. The buildings are designed according to the National Building Code of Canada. Fiber-based numerical models of the walls are then developed in Perform 3D, and the lateral nonlinear behavior of the walls is evaluated by conducting nonlinear static (pushover) analyses. Ductility and over-strength of the structures are obtained from the results of the pushover analyses. The results confirm a moderate nonlinear capacity of reinforced concrete shear walls under extreme wind loads, while the lateral displacements of the walls exceed the serviceability limit states defined in the ASCE Prestandard for Performance-Based Wind Design. The results indicate that this limited nonlinear response of reinforced concrete shear walls can be exploited to economize the design of such systems under wind loads.
Keywords: concrete shear wall, high-rise buildings, nonlinear static analysis, response modification factor, wind load
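Over-strength and ductility of the kind discussed here are typically extracted from a pushover curve through a bilinear idealization. The sketch below shows one common variant (equal-energy fit, with the ultimate point taken at peak strength); it illustrates the general procedure, not the study's implementation:

```python
import numpy as np

def overstrength_and_ductility(d, v, v_design):
    """Over-strength and displacement ductility from a pushover curve.

    d, v     : roof displacement and base shear arrays from the analysis
    v_design : design base shear (e.g., the code-specified wind demand)
    """
    v_max = v.max()
    omega = v_max / v_design                     # over-strength factor
    i_u = int(v.argmax())                        # ultimate point at peak
    d_u = d[i_u]                                 # (a 20% strength-drop
                                                 # criterion is also common)
    # equal-energy bilinear fit with plateau at v_max:
    # area under bilinear = v_max * d_u - v_max * d_y / 2
    area = np.trapz(v[: i_u + 1], d[: i_u + 1])  # energy under the curve
    d_y = 2.0 * (d_u - area / v_max)             # yield displacement
    mu = d_u / d_y                               # displacement ductility
    return omega, mu

# illustrative pushover data [m, kN]
d = np.array([0.0, 0.1, 0.2, 0.3, 0.4, 0.5])
v = np.array([0.0, 400.0, 650.0, 780.0, 820.0, 830.0])
print(overstrength_and_ductility(d, v, v_design=600.0))
```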
Procedia PDF Downloads 105
111 Estimation of the Dynamic Fragility of Padre Jacinto Zamora Bridge Due to Traffic Loads
Authors: Kimuel Suyat, Francis Aldrine Uy, John Paul Carreon
Abstract:
The Philippines, composed of many islands, is connected by approximately 8,030 bridges. Continuous evaluation of the structural condition of these bridges is needed to safeguard the safety of the general public. With most bridges reaching their design life, retrofitting and replacement may be needed. The concerned government agencies allocate huge costs to the periodic monitoring and maintenance of these structures. The rising volume of traffic and the aging of these infrastructures are challenging structural engineers to develop structural health monitoring techniques. Numerous techniques have already been proposed, and some are now employed in other countries; vibration analysis is one of them. The natural frequency and vibration of a bridge are design criteria for ensuring the stability, safety, and economy of the structure. Its natural frequency must be neither so low as to cause discomfort nor so high that the structure becomes overly stiff, and hence both costly and heavy. It is well known that the stiffer a member is, the more load it attracts. The frequency must also not match the vibration caused by the traffic loads; if it does, resonance occurs. Vibration that matches a system's frequency will amplify the excitation, and when this exceeds the member's limit, structural failure will occur. This study presents a method for calculating dynamic fragility through the use of a vibration-based monitoring system. Dynamic fragility is the probability that a structural system exceeds a limit state when subjected to dynamic loads. The bridge is modeled in SAP2000 based on the available construction drawings provided by the Department of Public Works and Highways and was verified and adjusted based on the actual condition of the bridge. The bridge design specifications are also checked using nondestructive tests. The approach used in this method properly accounts for the uncertainty of observed values and code-based structural assumptions. The vibration response of the structure due to actual loads is monitored using sensors installed on the bridge. From these dynamic characteristics of the system, threshold criteria can be established and fragility curves can be estimated. This study, conducted in relation to the research project between the Department of Science and Technology, Mapúa Institute of Technology, and the Department of Public Works and Highways known as the Mapúa-DOST Smart Bridge Project, deploys structural health monitoring sensors at Zamora Bridge. The bridge was selected in coordination with the Department of Public Works and Highways, and its structural plans are readily available.
Keywords: structural health monitoring, dynamic characteristic, threshold criteria, traffic loads
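Fragility curves of the kind described are commonly fitted as a lognormal CDF of the demand measure, with parameters estimated by maximum likelihood from binary exceedance observations. The following is a generic sketch of that fit, not the study's code; the data passed in are placeholders:

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

def fit_lognormal_fragility(im, exceeded):
    """Fit P(exceedance | IM) = Phi((ln IM - ln theta) / beta) by maximum
    likelihood. `im` holds the monitored demand levels (e.g., peak response
    under traffic); `exceeded` is 1 where the threshold criterion was passed."""
    im = np.asarray(im, float)
    exceeded = np.asarray(exceeded, int)

    def nll(params):
        ln_theta, beta = params
        p = norm.cdf((np.log(im) - ln_theta) / beta)
        p = np.clip(p, 1e-9, 1 - 1e-9)            # numerical safety
        return -np.sum(exceeded * np.log(p) + (1 - exceeded) * np.log(1 - p))

    res = minimize(nll, x0=[np.log(np.median(im)), 0.4],
                   bounds=[(None, None), (1e-3, None)])
    return np.exp(res.x[0]), res.x[1]             # median theta, dispersion beta

# illustrative data: demand levels and observed exceedances
im = [0.2, 0.3, 0.4, 0.5, 0.6, 0.8, 1.0, 1.2]
exceeded = [0, 0, 0, 1, 0, 1, 1, 1]
print(fit_lognormal_fragility(im, exceeded))
```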
Procedia PDF Downloads 270
110 The Two Question Challenge: Embedding the Serious Illness Conversation in Acute Care Workflows
Authors: D. M. Lewis, L. Frisby, U. Stead
Abstract:
Objective: Many patients are receiving invasive treatments in acute care or are dying in hospital without having had comprehensive goals-of-care conversations. Some of these treatments may not align with the patient's wishes, may be futile, and may cause unnecessary suffering. While many staff recognize the benefits of engaging patients and families in Serious Illness Conversations (a goals-of-care framework developed by Ariadne Labs in Boston), few staff feel confident and/or competent in having these conversations in acute care. Another barrier may be the lack of incorporation of these conversations into the current workflow. An educational exercise, titled the Two Question Challenge, was initiated on four medical units across two Vancouver Coastal Health (VCH) hospitals in an attempt to engage the entire interdisciplinary team in asking patients and families questions around goals of care and to improve the documentation of these expressed wishes and preferences. Methods: Four acute care units across two separate hospitals participated in the Two Question Challenge. On each unit, over the course of two eight-hour shifts, all members of the interdisciplinary team were asked to select at least two questions from a selection of nine goals-of-care questions. They were asked to pose these questions to a patient or family member throughout their shift and then to document their conversations in a centralized Advance Care Planning/Goals of Care discussion record in the patient's chart. A visual representation of conversation outcomes was created to demonstrate to staff and patients the breadth of conversations that took place throughout the challenge. Staff and patients were interviewed about their experiences during the challenge. Two palliative approach leads remained present on the units throughout the challenge to support, guide, or role-model these conversations. Results: Across the four acute care medical units, 47 interdisciplinary staff participated in the Two Question Challenge, including nursing and allied health staff and a physician. A total of 88 goals-of-care questions were asked of patients or their families, and 50 newly documented goals-of-care conversations were charted. Two code statuses were changed as a result of the conversations. Patients voiced an appreciation for these conversations, and staff were able to successfully incorporate the questions into their daily care. Conclusion: The Two Question Challenge proved to be an effective way of having teams explore the goals of care of patients and families in an acute care setting. Staff felt that they gained confidence and competence. Both staff and patients found these conversations meaningful and impactful and felt they were notably different from their usual interactions. Documentation of these conversations in a centralized location that is easily accessible to all care providers increased significantly. Application of the Two Question Challenge in non-medical units or other care settings, such as long-term care facilities or community health units, should be explored in the future.
Keywords: advance care planning, goals of care, interdisciplinary, palliative approach, serious illness conversations
Procedia PDF Downloads 100
109 Disclosure on Adherence of the King Code's Audit Committee Guidance: Cluster Analyses to Determine Strengths and Weaknesses
Authors: Philna Coetzee, Clara Msiza
Abstract:
In modern society, audit committees are seen as the custodians of accountability and the conscience of management and the board. But who holds the audit committee accountable for its actions or non-actions, and how do we know what it is supposed to be doing and what it is actually doing? The purpose of this article is to provide greater insight into the latter part of this problem, namely to determine what the best practices for audit committees are and what the disclosed realities are. In countries where governance is well established, the roles and responsibilities of the audit committee are mostly clearly guided by legislation and/or guidance documents, with countries increasingly providing guidance on this topic. With the high costs involved in adhering to governance guidelines, the public (for public organisations) and shareholders (for private organisations) expect to see the value of their ‘investment’. For audit committees, the dividends on the investment should be reflected in fewer fraudulent activities, less corruption, higher efficiency and effectiveness, improved social and environmental impact, and increased profits, to name a few. If this is not the case (which is reflected in the number of fraudulent activities in both the private and the public sector), stakeholders have the right to ask: where was the audit committee? Therefore, the objective of this article is to contribute to the body of knowledge by comparing the adherence of audit committees to the best-practice guidelines stipulated in the King Report across publicly listed companies, national and provincial government departments, state-owned enterprises, and local municipalities. After constructs were formed based on the literature, factor analyses were conducted to reduce the number of variables in each construct. Thereafter, cluster analyses were conducted; clustering is an exploratory analysis technique that classifies a set of objects in such a way that more similar objects are grouped together. The SPSS TwoStep Clustering Component was used, which is capable of handling both continuous and categorical variables. In the first step, a pre-clustering procedure clusters the objects into small sub-clusters, after which it clusters these sub-clusters into the desired number of clusters. The cluster analyses were conducted for each construct, and the outcome measure, namely the audit opinion as listed in the external audit report, was included. Analysing the information of 228 organisations, the results indicate that there is a clear distinction between the four spheres of business included in the analyses, revealing certain strengths and certain weaknesses within each sphere. The results may provide the overseers of audit committees with insight into where a specific sector's strengths and weaknesses lie. Audit committee chairs will be able to improve the areas where their audit committee is lagging behind. The strengthening of audit committees should result in improved accountability of boards, leading to less fraud and corruption.
Keywords: audit committee disclosure, cluster analyses, governance best practices, strengths and weaknesses
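The two-step procedure described (pre-cluster into many small sub-clusters, then merge them into the desired number of clusters) can be approximated outside SPSS. Below is a simplified Python analogue on mixed data; note that SPSS TwoStep uses a log-likelihood distance and information criteria to pick the cluster count, which this sketch does not reproduce, and all column names are hypothetical:

```python
import numpy as np
import pandas as pd
from sklearn.preprocessing import OneHotEncoder, StandardScaler
from sklearn.cluster import KMeans, AgglomerativeClustering

def two_step_cluster(df, continuous, categorical, n_clusters=4, n_pre=30):
    """Two-step clustering in the spirit of SPSS TwoStep: pre-cluster the
    objects into `n_pre` sub-clusters, then merge the sub-cluster centroids
    hierarchically into `n_clusters` final clusters."""
    X = np.hstack([
        StandardScaler().fit_transform(df[continuous]),
        OneHotEncoder(sparse_output=False).fit_transform(df[categorical]),
    ])
    pre = KMeans(n_clusters=n_pre, n_init=10, random_state=0).fit(X)
    merge = AgglomerativeClustering(n_clusters=n_clusters).fit(pre.cluster_centers_)
    return merge.labels_[pre.labels_]     # final cluster label per object

# hypothetical usage: factor scores per construct plus the audit opinion
# df = pd.read_csv("organisations.csv")
# labels = two_step_cluster(df, ["factor1", "factor2"], ["audit_opinion"])
```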
Procedia PDF Downloads 166
108 The Recognition of Exclusive Choice of Court Agreements: United Arab Emirates Perspective and the 2005 Hague Convention on Choice of Court Agreements
Authors: Hasan Alrashid
Abstract:
The 2005 Hague Convention seeks to ensure legal certainty and predictability between parties in international business transactions. It harmonises exclusive choice of court agreements at the international level between parties to commercial transactions and governs the recognition and enforcement of judgments resulting from proceedings based on such agreements, in order to promote international trade and investment. Although choice of court agreements are significant in international business transactions, the United Arab Emirates refuses to recognise them under Article 24 of Federal Law No. 11 of 1992, the Civil Procedure Code. A review of judicial judgments in the United Arab Emirates up to the present day reveals that several cases concerning the recognition of exclusive choice of court agreements have appeared before the courts in different emirates. In all of these cases, the courts regarded exclusive choice of court agreements as a direct assault on state authority and sovereignty and categorically refused to recognise them, declining to stay proceedings in favour of the chosen foreign court. This has created uncertainty and unpredictability in international business transactions in the United Arab Emirates. In June 2011, the first Gulf Judicial Seminar on Cross-Frontier Legal Cooperation in Civil and Commercial Matters was held in Doha, Qatar. The Permanent Bureau of the Hague Conference attended the seminar and invited the states of the Gulf Cooperation Council (GCC), namely the United Arab Emirates, Bahrain, Saudi Arabia, Oman, Qatar, and Kuwait, to adopt some of the Hague Conventions, one of which was the Hague Convention on Choice of Court Agreements. One of the recommendations of the seminar was that the GCC states should research ‘the benefits of predictability and legal certainty provided by the 2005 Convention on Choice of Court Agreements and its resulting advantages for cross-border trade and investment’ with a view to possible adoption of the Hague Convention. To date, no further step has been taken by any of the GCC states to adopt the Hague Convention, nor have they conducted research on the benefits of predictability and legal certainty in international business transactions. This paper will argue that the approach to the recognition of choice of court agreements in the United Arab Emirates can be improved in order to help parties in international business transactions avoid parallel litigation and to ensure legal certainty and predictability. The focus will be the uncertainty and gaps regarding choice of court agreements in the United Arab Emirates. The Hague Convention on Choice of Court Agreements and the importance of harmonising the rules on choice of court agreements at the international level will also be discussed. Finally, the feasibility and desirability of recognising choice of court agreements in the United Arab Emirates legal system by becoming a party to the Hague Convention will be evaluated.
Keywords: choice of court agreements, party autonomy, public authority, sovereignty
Procedia PDF Downloads 246
107 Reliability of Clinical Coding in Accurately Estimating the Actual Prevalence of Adverse Drug Event Admissions
Authors: Nisa Mohan
Abstract:
Adverse drug event (ADE) related hospital admissions are common among older people. The first step in prevention is accurately estimating the prevalence of ADE admissions, and clinical coding is an efficient method for doing so. The objectives of the study are to estimate the rate of under-coding of ADE admissions in older people in New Zealand and to explore how clinical coders decide whether or not to code an admission as an ADE. There has been no research in New Zealand exploring these areas. This study uses a mixed-methods approach. Two common and serious ADEs in older people, namely bleeding and hypoglycaemia, were selected for the study. In study 1, eight hundred medical records of people aged 65 years and above who were admitted to hospital due to bleeding or hypoglycaemia during the years 2015–2016 were selected for a quantitative retrospective medical records review. This selection was made to estimate the proportion of ADE-related bleeding and hypoglycaemia admissions that are not coded as ADEs. These files were reviewed, and it was recorded whether the admission was caused by an ADE. The hospital discharge data were then reviewed to check whether all the ADE admissions identified in the records review were coded as ADEs, and the proportion of under-coding of ADE admissions was estimated. In study 2, thirteen clinical coders were selected for qualitative semi-structured interviews using a general inductive approach. Participants were selected purposively based on their experience in clinical coding. Interview questions were designed to investigate the reasons for the under-coding of ADE admissions. The records review study showed that 35% (CI 28%–44%) of the ADE-related bleeding admissions and 22% of the ADE-related hypoglycaemia admissions were not coded as ADEs. Although the quality of clinical coding is high across New Zealand, a substantial proportion of ADE admissions were under-coded. This shows that clinical coding might underestimate the actual prevalence of ADE-related hospital admissions in New Zealand. The interviews with the clinical coders suggested that a lack of time to search for information confirming an ADE admission, inadequate communication with clinicians, and coders' belief that an ADE is a minor matter might be the potential reasons for the under-coding of ADE admissions. This study urges coding policymakers, auditors, and trainers to engage with the unconscious cognitive biases and shortcuts of clinical coders. The results highlight that further work is needed on interventions to improve the clinical coding of ADE admissions, such as providing education to coders about the importance of ADEs, education to clinicians about the importance of clear and confirmed medical record entries, making pharmacist services available to improve the detection and clear documentation of ADE admissions, and including a mandatory field in the discharge summary about external causes of diseases.
Keywords: adverse drug events, bleeding, clinical coders, clinical coding, hypoglycaemia
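The interval quoted for the under-coding proportion can be reproduced with a standard score interval for a binomial proportion. A small sketch follows; the counts below are hypothetical, chosen only to land near the reported 35% (CI 28%–44%):

```python
import math

def wilson_ci(x, n, z=1.96):
    """Wilson score interval for a proportion x/n (95% by default)."""
    p = x / n
    denom = 1 + z ** 2 / n
    centre = (p + z ** 2 / (2 * n)) / denom
    half = z * math.sqrt(p * (1 - p) / n + z ** 2 / (4 * n ** 2)) / denom
    return centre - half, centre + half

lo, hi = wilson_ci(60, 170)   # e.g., 60 under-coded of 170 reviewed ADE cases
print(f"{60 / 170:.0%} (CI {lo:.0%}-{hi:.0%})")   # -> 35% (CI 29%-43%)
```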
Procedia PDF Downloads 130
106 Geographic Origin Determination of Greek Rice (Oryza Sativa L.) Using Stable Isotopic Ratio Analysis
Authors: Anna-Akrivi Thomatou, Anastasios Zotos, Eleni C. Mazarakioti, Efthimios Kokkotos, Achilleas Kontogeorgos, Athanasios Ladavos, Angelos Patakas
Abstract:
It is well known that the accurate determination of geographic origin, to confront mislabeling and adulteration of foods, is considered a critical issue worldwide, not only for consumers but also for producers and industries. Among agricultural products, rice (Oryza sativa L.) is the world's third largest crop, providing food for more than half of the world's population. Consequently, the quality and safety of rice products play an important role in people's life and health. Despite the fact that rice is predominantly produced in Asian countries, rice cultivation in Greece is of significant importance, contributing to national agricultural sector income. More than 25,000 acres are cultivated in Greece, while rice exports to other countries constitute 0.5% of the global rice trade. Although several techniques are available for providing information about the geographical origin of rice, little data exist regarding the ability of these methodologies to discriminate rice produced in Greece. Thus, the aim of this study is the comparative evaluation of the stable isotope ratio methodology regarding its ability to discriminate the geographical origin of rice samples produced in Greece from those of three Asian countries, namely Korea, China, and the Philippines. In total, eighty (80) samples were collected from selected fields in Central Macedonia (Greece) during October 2021. The light element (C, N, S) isotope ratios were measured using Isotope Ratio Mass Spectrometry (IRMS), and the results obtained were analyzed using chemometric techniques, including principal components analysis (PCA). Results indicated that the δ15N and δ34S values of rice produced in Greece were more markedly influenced by geographical origin than the δ13C values. In particular, the δ34S value of rice originating from Greece was -1.98 ± 1.71, compared to 2.10 ± 1.87, 4.41 ± 0.88, and 9.02 ± 0.75 for Korea, China, and the Philippines, respectively. Among the stable isotope ratios studied, δ34S seems to be the most appropriate isotope marker to discriminate the geographic origin of rice between the studied areas. These results imply a significant capability of the stable isotope ratio methodology for effective geographical origin discrimination of rice, providing a valuable insight into the control of improper or fraudulent labeling. Acknowledgement: This research has been financed by the Public Investment Programme/General Secretariat for Research and Innovation, under the call “YPOERGO 3, code 2018SE01300000”; project title: ‘Elaboration and implementation of methodology for authenticity and geographical origin assessment of agricultural products’.
Keywords: geographical origin, authenticity, rice, isotope ratio mass spectrometry
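The chemometric step pairs the measured isotope ratios with an unsupervised projection. A minimal PCA sketch on data laid out like the study's (three δ values per sample) is shown below; the numbers are illustrative, loosely echoing the reported δ34S means:

```python
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA

# one row per rice sample: (d13C, d15N, d34S); values are illustrative
X = np.array([
    [-27.1, 3.2, -1.9],   # Greece
    [-26.8, 5.0,  2.1],   # Korea
    [-27.3, 4.1,  4.4],   # China
    [-26.5, 4.8,  9.0],   # Philippines
    # ... one row per sample in the full dataset
])
pca = PCA(n_components=2)
scores = pca.fit_transform(StandardScaler().fit_transform(X))
print(pca.explained_variance_ratio_)  # variance captured by PC1 and PC2
print(scores)                         # coordinates used to look for origin clusters
```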
Procedia PDF Downloads 86
105 Examining Smallholder Farmers’ Perceptions of Climate Change and Barriers to Strategic Adaptation in Todee District, Liberia
Authors: Joe Dorbor Wuokolo
Abstract:
Thousands of smallholder farmers in Todee District, Montserrado County, are currently vulnerable to the negative impacts of climate change. The district, which is the agricultural hot spot of the county, is faced with unfavorable changes in daily temperature due to climate change. Farmers in the district have observed a dramatic change in the ratio of rainfall to sunshine, which has had a chilling effect on their crop yields. However, there is a lack of documentation regarding how farmers perceive and respond to these changes and challenges. A study was conducted in the region to examine the perceptions of smallholder farmers regarding the negative impacts of climate change, the adaptation strategies practiced, and the barriers that hinder the advancement of adaptation strategies. A purposive sample of 41 respondents from five towns was selected, including five town chiefs, five youth leaders, five women leaders, and sixteen community members. Women and youth leaders were specifically chosen to provide gender balance and enhance the quality of the investigation. Additionally, to validate the barriers farmers face in adapting to climate change, this study interviewed eight experts from local and international organizations and from government ministries and agencies involved in climate change and agricultural programs, on what they perceived as the major barriers, at both the local and national levels, that impede farmers' adaptation to climate change impacts. SPSS was used to code the data, and descriptive statistics were used to analyze them. The Weighted Average Index (WAI) was used to rank adaptation strategies and the perceived importance of adaptation practices among farmers, on a scale from 0 to 3, where 0 indicates the least important technique and 3 the most effective one. In addition, the Problem Confrontation Index (PCI) was used to rank the barriers that prevented farmers from implementing adaptation measures. According to the findings, approximately 60% of all respondents considered the use of irrigation systems to be the most effective adaptation strategy, with drought-resistant varieties making up 30% of the total. Additionally, 80% of respondents placed a high value on drought-resistant varieties, while 63% placed a high value on irrigation practices. Furthermore, 78% of farmers indicated that the unpredictability of the weather is the most significant barrier to their adaptation strategies, followed by the high cost of farm inputs and the lack of access to financing facilities. 80% of respondents believe that the long-term changes in precipitation (rainfall) and temperature (heat) are accelerating. This suggests that decision-makers should adopt policies and increase the capacity of smallholder farmers to adapt to the negative impacts of climate change in order to ensure sustainable food production.
Keywords: adaptation strategies, climate change, farmers’ perception, smallholder farmers
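The WAI used here is a simple weighted mean over the response scale. A sketch of the computation, with hypothetical response counts:

```python
def weighted_average_index(counts, weights=(0, 1, 2, 3)):
    """Weighted Average Index for one adaptation strategy.

    counts : number of respondents choosing each scale point, ordered from
             'least important' (weight 0) to 'most effective' (weight 3).
    Returns a value in [0, 3]; strategies are ranked by this index.
    """
    return sum(w * c for w, c in zip(weights, counts)) / sum(counts)

# hypothetical responses for 'irrigation systems' from the 41 respondents
print(weighted_average_index((3, 6, 12, 20)))   # -> about 2.20
```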
Procedia PDF Downloads 81
104 Calibration of Residential Buildings Energy Simulations Using Real Data from an Extensive in situ Sensor Network – A Study of Energy Performance Gap
Authors: Mathieu Bourdeau, Philippe Basset, Julien Waeytens, Elyes Nefzaoui
Abstract:
As residential buildings account for a third of the overall energy consumption and greenhouse gas emissions in Europe, building energy modeling is an essential tool for reaching energy efficiency goals. In the energy modeling process, calibration is a mandatory step to obtain accurate and reliable energy simulations. Nevertheless, the comparison between simulation results and the actual building energy behavior often highlights a significant performance gap. The literature discusses different origins of energy performance gaps, from building design to building operation. The description of building operation in energy models, especially energy usages and users' behavior, plays an important role in the reliability of simulations, and it is also the most accessible target for post-occupancy energy management and optimization. Therefore, the present study aims to discuss results on the calibration of residential building energy models using real operation data. Data are collected through a network of more than 180 sensors and advanced energy meters deployed in three collective residential buildings undergoing major retrofit actions. The sensor network is implemented at the building scale and in an eight-apartment sample. Data are collected for over a year and a half and cover building energy behavior (thermal and electricity), indoor environment, inhabitants' comfort, occupancy, occupants' behavior and energy uses, and local weather. Building energy simulations are performed using physics-based building energy modeling software (the Pleiades software), in which the buildings' features are implemented according to the buildings' thermal regulation code compliance study and the technical files of the retrofit project. Sensitivity analyses are performed to highlight the building features that most drive energy use for each end-use. These features are then compared with the collected post-occupancy data, and the energy-driving features are progressively replaced with field data for a step-by-step calibration of the energy model. The results of this study provide an analysis of the energy performance gap in an existing residential case study under deep retrofit actions. They highlight the impact of the different building features on the energy behavior and the performance gap in this context, such as temperature setpoints, indoor occupancy, and the building envelope properties, but also domestic hot water usage and heat gains from electric appliances. The benefits of inputting field data from an extensive instrumentation campaign instead of standardized scenarios are also described. Finally, the exhaustive instrumentation solution provides useful insights into the needs, advantages, and shortcomings of the implemented sensor network for its replicability on a larger scale and for different use cases.
Keywords: calibration, building energy modeling, performance gap, sensor network
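Calibration quality in studies like this is usually judged with the NMBE and CV(RMSE) statistics between measured and simulated series. A small sketch of those metrics; the ASHRAE Guideline 14 thresholds quoted in the docstring are the commonly cited ones for hourly data, while the series themselves come from the sensor network and the model:

```python
import numpy as np

def calibration_metrics(measured, simulated):
    """NMBE and CV(RMSE) in percent between a measured series (e.g. hourly
    heating energy from the meters) and the simulation output. ASHRAE
    Guideline 14 typically treats an hourly model as calibrated when
    |NMBE| <= 10% and CV(RMSE) <= 30%."""
    m = np.asarray(measured, float)
    s = np.asarray(simulated, float)
    mean = m.mean()
    nmbe = 100.0 * (m - s).sum() / ((len(m) - 1) * mean)
    cvrmse = 100.0 * np.sqrt(((m - s) ** 2).mean()) / mean
    return nmbe, cvrmse

# after each step of replacing standardized inputs with field data,
# recompute and check that both metrics shrink:
# nmbe, cvrmse = calibration_metrics(meter_data, model_output)
```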
Procedia PDF Downloads 158
103 Personalized Climate Change Advertising: The Role of Augmented Reality (A.R.) Technology in Encouraging Users for Climate Change Action
Authors: Mokhlisur Rahman
Abstract:
The growing consensus among scientists and world leaders indicates that immediate action should be taken on the climate change phenomenon. Climate change, however, is no longer only a global issue but a personal one, and individual participation is necessary to address such a significant issue. Studies show that individuals who perceive climate change as a personal issue are more likely to act on it. This abstract presents augmented reality (A.R.) technology in video advertising on the social media platform Facebook. The idea involves creating a video advertisement that enables users to interact with the video by navigating its features and experiencing the result in a unique and engaging way. The advertisement uses A.R. to let people make changes in real-life scenarios with simple clicks on the video and hear an instant rewarding fact about their choices. The video shows three options: room, lawn, and driveway. Users select one option and interact with it while holding the camera toward the corresponding personal space. Suppose users select the first option, room, and hold their camera toward spots such as the windows, balcony, corners, and even walls. In that case, the A.R. offers users different plants appropriate for the unoccupied spaces in the room. Users can change the plant options and see which space in their house deserves a plant that makes it more natural. When a user adds a natural element to the video, the video content explains how the user is contributing to making the world more livable and why this is necessary. If users select the second option, lawn, and hold their camera toward their lawn, the A.R. offers various small trees to make the lawn more environmentally friendly and decorative; the video plays a beneficial explanation here too. Suppose users select the third option, driveway, and hold their camera toward their driveway. In that case, the A.R. video offers unique recycling bin designs using A.I. measurement of spaces, and the video plays audio information on the anthropogenic contribution to greenhouse gas emissions. IoT embeds tracking code in the video ad on Facebook, which stores the exact number of views in the cloud for data analysis. An online survey at the end collects short qualitative answers. This study helps to understand the number of users involved and willing to change their behavior, and it makes advertising in social media personalized. Considering the current state of climate change, the urgency for action is increasing. This ad increases the chance of making direct connections with individuals and gives them a sense of personal responsibility to act on climate change.
Keywords: motivations, climate, iot, personalized-advertising, action
Procedia PDF Downloads 72
102 Harnessing Clinical Trial Capacity to Mitigate Zoonotic Diseases: The Role of Expert Scientists in Ethiopia
Authors: Senait Belay Adugna, Mirutse Giday, Tsegahun Manyazewal
Abstract:
Background: The emergence and resurgence of zoonotic diseases have continued to be a major threat to global health and the economy. Developing countries are particularly vulnerable due to agricultural expansion and the domestication of animals by humans. Scientifically sound clinical trials are important for finding better ways to prevent, diagnose, and treat zoonotic diseases, yet there is a lack of evidence on clinical trial capacity and practice in countries highly affected by these diseases. This study aimed to investigate researchers' perceptions and experiences in conducting clinical trials on zoonotic diseases in Ethiopia. Methods: This study employed a descriptive, qualitative design. It included major academic and research institutions in Ethiopia with active engagement in veterinary and public health research: the National Veterinary Institute, the National Animal Health Diagnostic and Investigation Center, the College of Veterinary Medicine at Addis Ababa University, the Ethiopian Public Health Institute, the Armauer Hansen Research Institute, and the College of Health Sciences at Addis Ababa University. In-depth interviews were conducted with 14 senior research investigators in these institutions with a proven record of leading research activities or research units. Data were collected from October 2019 to April 2020, and data analysis was undertaken using OpenCode 4.03 for qualitative data analysis. Results: Five major themes, with 18 sub-themes, emerged from the in-depth interviews: challenges in the prevention, control, and treatment of zoonotic diseases; the One Health approach to mitigating zoonotic diseases; personal and institutional experiences in conducting clinical trials on zoonotic diseases; barriers to conducting clinical trials on zoonotic diseases; and strategies that promote the conduct of clinical trials on zoonotic diseases. Conducting clinical trials on zoonotic diseases in Ethiopia is hampered by a lack of clearly articulated ethics and regulatory frameworks, trial experts, financial resources, and good governance. Conclusions: In Ethiopia, conducting clinical trials on zoonotic diseases deserves due attention. Strengthening institutional and human resource capacity is a precondition for the effective implementation of clinical trials on zoonotic diseases in the country. In Ethiopia, where skilled human resources are scarce, the One Health approach has the potential to form multidisciplinary teams that systematically improve clinical trial capacity and outcomes in the country.
Keywords: Ethiopia, clinical trial, zoonoses, disease
Procedia PDF Downloads 91
101 Numerical Simulation of the Heat Transfer Process in a Double Pipe Heat Exchanger
Authors: J. I. Corcoles, J. D. Moya-Rico, A. Molina, J. F. Belmonte, J. A. Almendros-Ibanez
Abstract:
One of the most common heat exchanger technologies in engineering processes is the double-pipe heat exchanger (DPHx), used mainly in the food industry. To improve heat transfer performance, several passive geometrical devices can be used, such as wall corrugation of the tubes, which increases the wetted perimeter while maintaining a constant cross-sectional area, consequently increasing the convective surface area. It enhances heat transfer in forced convection by promoting secondary recirculating flows. One of the most widespread tools for analysing heat exchanger efficiency is computational fluid dynamics (CFD), a complementary activity to experimental studies as well as a preliminary step in the design of heat exchangers. In this study, the behaviour of a double-pipe heat exchanger with two different inner tubes, one smooth and one spirally corrugated, has been analysed. Experimental analysis and steady 3-D numerical simulations using the commercial code ANSYS Workbench v. 17.0 are carried out to analyse the influence of the geometrical parameters of spirally corrugated tubes in turbulent flow. To validate the numerical results, an experimental setup has been used. To heat up or cool down the cold fluid as it passes through the heat exchanger, the installation includes heating and cooling loops served by an electric boiler with a heating capacity of 72 kW and a chiller with a cooling capacity of 48 kW. Two tests have been carried out for the smooth tube and for the corrugated one. In all tests, the hot fluid has a constant flow rate of 50 l/min and an inlet temperature of 59.5°C. For the cold fluid, the flow rate is 25 l/min (Test 1) or 30 l/min (Test 2), with an inlet temperature of 22.1°C. The heat exchanger is made of stainless steel, with an external diameter of 35 mm and a wall thickness of 1.5 mm. Both inner tubes are stainless steel with an external diameter of 24 mm, a wall thickness of 1 mm, and a length of 2.8 m. The corrugated tube has a corrugation height (H) of 1.1 mm and a helical pitch (P) of 25 mm. It is characterized by three non-dimensional parameters: the ratio of corrugation height to diameter (H/D), the ratio of helical pitch to diameter (P/D), and the severity index (SI = H²/(P·D)). The results showed good agreement between the numerical and the experimental results; the lowest differences were found for the fluid temperatures. In all the analysed tests and for both tubes, the temperature obtained numerically was slightly higher than the experimental result, with differences ranging between 0.1% and 0.7%. Regarding the pressure drop, the maximum differences between the numerical and experimental values were close to 16%. Based on the experimental and numerical results, it can be highlighted that, for the corrugated tube, the temperature difference between the inlet and the outlet of the cold fluid is 42% higher than for the smooth tube.
Keywords: corrugated tube, heat exchanger, heat transfer, numerical simulation
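The three non-dimensional corrugation parameters follow directly from the geometry given above; a quick check (taking D as the inner tube's external diameter, which is an assumption about the authors' convention):

```python
# corrugation parameters of the spirally corrugated inner tube
H = 1.1e-3   # corrugation height [m]
P = 25e-3    # helical pitch [m]
D = 24e-3    # tube diameter [m] (external diameter of the inner tube)

h_over_d = H / D              # corrugation height ratio -> 0.046
p_over_d = P / D              # helical pitch ratio      -> 1.042
severity = H ** 2 / (P * D)   # severity index SI        -> 2.02e-3

print(f"H/D = {h_over_d:.3f}, P/D = {p_over_d:.3f}, SI = {severity:.2e}")
```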
Procedia PDF Downloads 144
100 Modelling of Air-Cooled Adiabatic Membrane-Based Absorber for Absorption Chillers Using Low Temperature Solar Heat
Authors: M. Venegas, M. De Vega, N. García-Hernando
Abstract:
Absorption cooling chillers have received growing attention over the past few decades, as they allow the use of low-grade heat to produce a cooling effect. Combining this technology with solar thermal energy in the summer period can reduce the electricity consumption peak due to air-conditioning. One of the main components, the absorber, is designed for simultaneous heat and mass transfer. Usually, shell-and-tube heat exchangers are used, which are large and heavy, and cooling water from a cooling tower is conventionally used to extract the heat released during the absorption and condensation processes. These are clear inconveniences for the widespread adoption of absorption technology, limiting its benefits in contributing to the reduction of CO2 emissions, particularly for the H2O-LiBr solution, which can work with low-temperature heat sources such as those provided by solar panels. In the present work, a promising new technology is under study, consisting of the use of membrane contactors in adiabatic microchannel mass exchangers. The configuration proposed here consists of one or several modules (depending on the cooling capacity of the chiller) that contain two vapour channels, separated from the solution by adjacent microporous membranes. The solution is confined in rectangular microchannels, and a plastic or synthetic wall separates the solution channels from each other. The solution entering the absorber is previously subcooled using ambient air, so that the need for a cooling tower is avoided. A model of the proposed configuration is developed based on mass and energy balances, and correlations were selected to predict the heat and mass transfer coefficients. The concentrations and temperatures along the channels cannot be determined explicitly from the set of equations obtained; for this reason, the equations were implemented in a computer code using the Engineering Equation Solver software, EES™. With the aim of minimizing the absorber volume to reduce the size of absorption cooling chillers, the ratio between the cooling power of the chiller and the absorber volume (R) is calculated. Its variation along the solution channels is shown, allowing its optimization for selected operating conditions. For the case considered, the solution channel length is recommended to be below 3 cm. The maximum values of R obtained in this work are higher than those found in optimized horizontal falling-film absorbers using the same solution. The results also show the variation of R and the chiller efficiency (COP) for different ambient temperatures and for desorption temperatures typically obtained using flat-plate solar collectors. The proposed configuration of an adiabatic membrane-based absorber using ambient air to subcool the solution is a good technology for reducing the size of absorption chillers, allowing the use of low-temperature solar heat and avoiding the need for cooling towers.
Keywords: adiabatic absorption, air-cooled, membrane, solar thermal energy
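The kind of model described, mass and energy balances marched along the solution channel, can be sketched generically. The following illustration is emphatically not the paper's EES model: the mass-transfer coefficient and the linearized LiBr-H2O equilibrium relation are placeholder assumptions, and a real model would use validated property correlations:

```python
# 1-D marching model of one adiabatic solution microchannel (illustrative).
K = 2e-6          # membrane mass-transfer coefficient [kg/(s m2 Pa)] (assumed)
p_v = 900.0       # water vapour pressure in the vapour channel [Pa] (assumed)
w, L, n = 5e-3, 0.03, 300            # channel width, length [m], grid cells
m, x, T = 2e-4, 0.60, 35.0           # solution flow [kg/s], LiBr fraction, T [C]
h_abs, cp = 2.6e6, 2000.0            # absorption heat [J/kg], cp [J/(kg K)]

def p_eq(T, x):
    """Placeholder linearized equilibrium pressure of H2O over LiBr-H2O."""
    return 600.0 + 40.0 * (T - 35.0) - 9000.0 * (x - 0.60)

q_abs = 0.0
for _ in range(n):
    # vapour absorbed through the membrane over one cell [kg/s]
    j = K * max(p_v - p_eq(T, x), 0.0) * w * (L / n)
    x = x * m / (m + j)              # LiBr is conserved; water is added
    T += j * h_abs / ((m + j) * cp)  # adiabatic: absorption heat stays in solution
    m += j
    q_abs += j * h_abs

print(f"outlet: x = {x:.4f}, T = {T:.2f} C, absorbed heat = {q_abs:.3f} W")
```

Dividing the chiller cooling power obtained from such balances by the absorber volume gives the ratio R that the abstract optimizes.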
Procedia PDF Downloads 283
99 Classification of Foliar Nitrogen in Common Bean (Phaseolus Vulgaris L.) Using Deep Learning Models and Images
Authors: Marcos Silva Tavares, Jamile Raquel Regazzo, Edson José de Souza Sardinha, Murilo Mesquita Baesso
Abstract:
Common beans are a widely cultivated and consumed legume globally, serving as a staple food for humans, especially in developing countries, due to their nutritional characteristics. Nitrogen (N) is the nutrient that most limits productivity, and foliar analysis is crucial to ensure balanced nitrogen fertilization. Excessive N applications can cause, in isolation or cumulatively, soil and water contamination and plant toxicity, and can increase plants' susceptibility to diseases and pests. However, the quantification of N using conventional methods is time-consuming and costly, demanding new technologies to optimize the adequate supply of N to plants. Thus, it becomes necessary to establish constant monitoring of the foliar content of this macronutrient, mainly at the V4 stage, aiming at precision management of nitrogen fertilization. In this work, the objective was to evaluate the performance of a deep learning model, ResNet-50, in the classification of foliar nitrogen in common beans using RGB images. The BRS Estilo cultivar was sown in a greenhouse in a completely randomized design with four nitrogen doses (T1 = 0 kg N ha-1, T2 = 25 kg N ha-1, T3 = 75 kg N ha-1, and T4 = 100 kg N ha-1) and 12 replications. Pots with 5 L capacity were used, with a substrate composed of 43% soil (Neossolo Quartzarênico), 28.5% crushed sugarcane bagasse, and 28.5% cured bovine manure. Plants were watered with 5 mm of water per day. The application of urea (45% N) and the acquisition of images occurred 14 and 32 days after sowing, respectively. A code developed in MATLAB R2022b was used to cut the original images into smaller blocks, producing an image bank composed of four folders representing the four classes, labeled T1, T2, T3, and T4, each containing 500 images of 224x224 pixels obtained from plants cultivated under the different N doses. MATLAB R2022b was also used for the implementation and performance analysis of the model. The efficiency was evaluated using a set of metrics, including accuracy (AC), F1-score (F1), specificity (SP), area under the curve (AUC), and precision (P). The ResNet-50 showed high performance in the classification of foliar N levels in common beans, with an AC value of 85.6%. The F1 for classes T1, T2, T3, and T4 was 76, 72, 74, and 77%, respectively. This study revealed that the use of RGB images combined with deep learning can be a promising alternative to slow laboratory analyses, capable of optimizing the estimation of foliar N. This can allow rapid intervention by the producer to achieve higher productivity and less fertilizer waste. Future work is encouraged to develop mobile devices capable of handling images using deep learning for the classification of the nutritional status of plants in situ.
Keywords: convolutional neural network, residual network 50, nutritional status, artificial intelligence
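The study's pipeline is in MATLAB, but the same transfer-learning setup can be sketched in PyTorch for readers without a MATLAB licence. The folder layout, hyperparameters, and paths below are illustrative assumptions, not the authors' settings:

```python
import torch
from torch import nn
from torchvision import datasets, models, transforms

# 224x224 RGB blocks organized as data/train/T1 ... data/train/T4
tfm = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]),
])
train = datasets.ImageFolder("data/train", transform=tfm)
loader = torch.utils.data.DataLoader(train, batch_size=32, shuffle=True)

# ImageNet-pretrained ResNet-50 with a new 4-class head (T1..T4)
model = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V2)
model.fc = nn.Linear(model.fc.in_features, 4)

opt = torch.optim.Adam(model.parameters(), lr=1e-4)
loss_fn = nn.CrossEntropyLoss()
model.train()
for epoch in range(10):                 # number of epochs is arbitrary here
    for x, y in loader:
        opt.zero_grad()
        loss = loss_fn(model(x), y)
        loss.backward()
        opt.step()
```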
Procedia PDF Downloads 16
98 Jurisdiction Conflicts in Contracts of International Maritime Transport: The Application of the Forum Selection Clause in Brazilian Courts
Authors: Renan Caseiro De Almeida, Mateus Mello Garrute
Abstract:
The world is becoming ever more globalised, a trend that increases the number of transnational commercial transactions. The main mode of carriage of goods is by sea, and many countries have economies dependent on maritime freight, either because they engage heavily in this activity or because they rely widely on maritime logistics. Brazil is among these nations. It has sixteen ports of good capacity, which receive most of the international inflow of goods by sea: it is estimated that 85 per cent of the total influx of goods into Brazil arrives by the maritime mode, leaving a mere 15 per cent for the others. This has made it necessary to develop maritime law on both an international and a national basis, creating a standard intended to harmonize the transnational carriage of goods by sea. Maritime contracts are very specific and have interesting peculiarities, but within their scope little research has been done on what causes the main divergences in international contracts: the jurisdiction conflict. As in any other international contract, it is common for the parties to set a forum selection clause choosing the forum that will judge the litigation that could arise from a maritime transport contract and, consequently, also which law should be applied to the case. However, the choice of forum in Brazil has always been somewhat controversial – not only in the maritime law sphere – for national tribunals sometimes overlook the parties' choice and claim jurisdiction for themselves. In this sense, it is worth mentioning that the Mexico Convention of 1994 on the law applicable to international contracts did not gain strength in Brazil and never even reached Congress to be considered for ratification. Furthermore, it is also noteworthy that Brazil has a new Civil Procedure Code, which came into force in 2016, bringing new legal provisions specifically about forum selection; this represented a milestone in the national legal system on this matter. Therefore, this paper intends to give an insight into Brazilian jurisprudence, analysing how this issue has been treated in litigation over maritime contracts in the national tribunals, as well as the solutions found by the Brazilian legal system for jurisdiction conflicts in those cases. To achieve the expected results, the hypothetical-deductive method will be used in combination with research on doctrine and legislation. Also, jurisprudential research and case law study will have a special role, since the main point of this paper is to verify and study the position of the courts in Brazil on a specific matter. As a civil law country, Brazil has judges and tribunals that are strongly attached to the rules laid down in codes. However, jurisprudential understanding has been changing over the years, and with the advent of the new rules on applicable law and forum selection clauses, it is noticeable that new winds are blowing.
Keywords: applicable law, forum selection clause, international business, international maritime contracts, litigation in courts
Procedia PDF Downloads 273
97 Financial Modeling for Net Present Benefit Analysis of Electric Bus and Diesel Bus and Applications to NYC, LA, and Chicago
Authors: Jollen Dai, Truman You, Xinyun Du, Katrina Liu
Abstract:
Transportation is one of the leading sources of greenhouse gas (GHG) emissions. Thus, to meet the Paris Agreement of 2015, all countries must adopt a different and more sustainable transportation system. From bikes to Maglev, the world is slowly shifting to sustainable transportation. To develop a useful public transit system, a sustainable network of buses must be implemented. As of now, only a handful of cities have adopted a detailed plan to implement a full fleet of e-buses by the 2030s, with Shenzhen in the lead. Every change requires a detailed plan and a focused analysis of its impacts. In this report, the economic and financial implications are taken into consideration to develop a well-rounded 10-year plan for New York City. We also apply the same financial model to two other cities, LA and Chicago. We picked NYC, Chicago, and LA for the comparative NPB analysis since they are all large metropolitan cities with complex transportation systems, and all three have started action plans to achieve a full e-bus fleet in the coming decades. Moreover, their energy carbon footprints and energy prices differ considerably, and these are key factors in the benefits of electric buses. Using TCO (Total Cost of Ownership) financial analysis, we developed a model to calculate the NPB (Net Present Benefit) and compare EBS (electric buses) to DBS (diesel buses). We considered all essential aspects in our model: the initial investment, including the cost of a bus, charger, and installation; government funds (federal, state, local); labor cost; energy (electricity or diesel) cost; maintenance cost; insurance cost; health and environmental benefits; and V2G (vehicle-to-grid) benefits. We find about $1,400,000 in benefits over a 12-year lifetime of an EBS compared to a DBS, provided government funds offset 50% of the EBS purchase cost. With the government subsidy, an EBS starts to generate positive cash flow in the fifth year and can pay back its investment in 5 years. Note that our model counts environmental and health benefits at $50,000 per bus per year. Besides the health benefits, the most significant gains come from energy cost savings and maintenance savings, which are about $600,000 and $200,000, respectively, over the 12-year life cycle. Using linear regression under given budget limitations, we then designed an optimal three-phase process to replace all NYC diesel buses with electric buses in 10 years, i.e., by 2033. The linear regression process minimizes the total cost over the years while keeping the environmental cost lowest. The overall benefit of replacing all DBS with EBS for NYC is over $2.1 billion by 2033. For LA and Chicago, the benefits of electrifying the current bus fleets are $1.04 billion and $634 million by 2033, respectively. All NPB analyses and the algorithm that optimizes the phased electrification process are implemented in Python, and the code can be shared.
Keywords: financial modeling, total cost of ownership, net present benefits, electric bus, diesel bus, NYC, LA, Chicago
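The abstract states the model is implemented in Python; below is a minimal sketch of the net-present-benefit comparison it describes. The purchase prices, operating costs, and discount rate are placeholders, not the study's actual inputs; only the 50% subsidy and the $50,000-per-year health benefit echo figures quoted above.

```python
# Hypothetical NPB sketch; all dollar figures except the 50% subsidy share and
# the $50,000/yr health benefit are made-up placeholders.
def npv(cashflows, rate=0.03):
    """Discount a list of yearly cash flows to present value."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cashflows))

YEARS = 12
ebs_capex = 750_000 * 0.5            # bus + charger + installation, 50% subsidy
dbs_capex = 450_000
ebs_opex = [60_000] * YEARS          # energy + maintenance + insurance (assumed)
dbs_opex = [110_000] * YEARS
health_benefit = [50_000] * YEARS    # per-bus health/environment benefit, as in the model

ebs_net_cost = ebs_capex + npv(ebs_opex) - npv(health_benefit)
dbs_net_cost = dbs_capex + npv(dbs_opex)
print(f"Net present benefit of EBS over DBS: ${dbs_net_cost - ebs_net_cost:,.0f}")
```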
Procedia PDF Downloads 48
96 Applying Miniaturized near Infrared Technology for Commingled and Microplastic Waste Analysis
Authors: Monika Rani, Claudio Marchesi, Stefania Federici, Laura E. Depero
Abstract:
Degradation of the aquatic environment by plastic litter, especially microplastics (MPs), i.e., any water-insoluble solid plastic particle with its longest dimension between 1 µm and 1000 µm (= 1 mm), is an unfortunate indication of the advancement of the Anthropocene age on Earth. Microplastics formed by natural weathering processes are termed secondary microplastics, while those synthesized industrially are called primary microplastics. Their presence from the highest peaks to the deepest explored points of the oceans, and their resistance to biological and chemical decay, have adversely affected the environment, especially marine life. Even though the presence of MPs in the marine environment is well reported, a validated and reliable analytical workflow to sample, analyze, and quantify MPs is still in the development and testing stages. Among the characterization techniques, vibrational spectroscopic techniques are widely adopted in the field of polymers, and their ongoing miniaturization is poised to transform the plastic recycling industry. In this scenario, the capability and feasibility of miniaturized near-infrared (MicroNIR) spectroscopy combined with chemometric tools were investigated for the qualitative and quantitative analysis of urban plastic waste collected from a recycling plant and of microplastic mixtures fragmented in the lab. Based on the Resin Identification Code, 250 plastic samples were used for the macroplastic analysis and to set up a library of polymers. Subsequently, the MicroNIR spectra were analysed through multivariate modelling. Principal Component Analysis (PCA) was used as an unsupervised tool to find trends within the data. After the exploratory PCA, a supervised classification tool was applied to distinguish the different plastic classes, and a database containing the NIR spectra of the polymers was built. For the microplastic analysis, the three most abundant polymers in plastic litter, PE, PP, and PS, were mechanically fragmented in the laboratory to micron size. Blends of these three microplastics were prepared according to a designed ternary composition plot. After exploratory PCA, a quantitative Partial Least Squares Regression (PLSR) model allowed prediction of the percentage of each microplastic in the mixtures. With a complete dataset of 63 compositions, the PLSR model was calibrated on 42 data points and used to predict the composition of the 21 unknown mixtures of the test set. The advantage of this consolidated NIR chemometric approach lies in the quick evaluation of whether a sample is macro or micro, contaminated or not, coloured or not, with no sample pre-treatment. The technique can be used with larger sample volumes and even allows on-site evaluation, thus satisfying the need for a high-throughput strategy.
Keywords: chemometrics, microNIR, microplastics, urban plastic waste
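A minimal scikit-learn sketch of the PCA-then-PLSR workflow described above follows. The spectra and compositions are synthetic placeholders (the channel count and component number are assumptions); only the 63/42/21 calibration-test split mirrors the study.

```python
# Sketch of the exploratory PCA + quantitative PLSR pipeline; synthetic data.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.random((63, 125))           # 63 mixtures x 125 spectral channels (assumed)
Y = rng.dirichlet(np.ones(3), 63)   # PE/PP/PS mass fractions summing to 1

# Exploratory PCA to look for trends within the spectra.
scores = PCA(n_components=2).fit_transform(X)

# Calibrate PLSR on 42 mixtures, predict the 21 held-out test mixtures.
X_cal, X_test, Y_cal, Y_test = train_test_split(X, Y, train_size=42, random_state=0)
pls = PLSRegression(n_components=5).fit(X_cal, Y_cal)   # component count assumed
Y_pred = pls.predict(X_test)        # predicted PE/PP/PS percentages
```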
Procedia PDF Downloads 163
95 Challenges of Blockchain Applications in the Supply Chain Industry: A Regulatory Perspective
Authors: Pardis Moslemzadeh Tehrani
Abstract:
With the emergence of blockchain technology and the benefits of cryptocurrencies, intelligent or smart contracts are gaining traction. Artificial intelligence (AI) is transforming our lives and is being embraced by a wide range of sectors, and smart contracts, which are at the heart of blockchains, incorporate AI characteristics. Such contracts are referred to as 'smart' because the underlying technology allows contracting parties to agree on terms expressed in computer code, defining machine-readable instructions for computers to follow under specific conditions. Execution happens automatically once the conditions are met. Initially utilised for financial transactions, blockchain applications have since expanded to the finance, insurance, and medical sectors, as well as supply networks. Raw material acquisition by suppliers, design and fabrication by manufacturers, delivery of final products to consumers, and even post-sales logistics assistance are all part of supply chains. Many of the issues involved in managing supply chains from the planning and coordination stages onwards can, despite their complexity, be implemented in a smart contract on a blockchain. Manufacturing delays and limited third-party supplies of product components have raised concerns about the integrity and accountability of supply chains for food and pharmaceutical items. Other concerns include regulatory compliance in multiple jurisdictions and transportation circumstances (for instance, many products must be kept in temperature-controlled environments to ensure their effectiveness). In modern economic systems, products are handled by several providers before reaching customers, and information is exchanged between suppliers, shippers, distributors, and retailers at every stage of production and distribution. Information travels more effectively when intermediaries are removed from the equation. The use of blockchain technology could be a viable solution to these coordination issues: smart contracts on blockchains allow for the rapid transmission of production data, logistics data, inventory levels, and sales data. This research investigates the legal and technical advantages and disadvantages of AI-blockchain technology in the supply chain business. It aims to uncover the applicable legal problems and barriers to applying AI-blockchain technology to supply chains, particularly in the food industry. It also discusses the essential legal and technological issues and impediments that stakeholders face in supply chain implementation, as well as methods for overcoming them before releasing the technology to clients. Because little research has been done on this topic, it is difficult for industrial stakeholders to grasp how blockchain technology could be used in their respective operations. As a result, the focus of this research is on building advanced and complex contractual terms in supply chain smart contracts on blockchains to cover unforeseen supply chain challenges.
Keywords: blockchain, supply chain, IoT, smart contract
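To make the "conditions met, execution automatic" idea concrete, here is a deliberately toy Python sketch of a cold-chain delivery clause. Real smart contracts run on a blockchain virtual machine (e.g., Solidity on Ethereum), not in Python, and every name and figure below is invented for illustration.

```python
# Toy illustration of self-executing contract logic; NOT an actual on-chain contract.
from dataclasses import dataclass

@dataclass
class ShipmentContract:
    payer: str
    payee: str
    amount: float
    max_temp_c: float        # cold-chain condition agreed by the parties
    settled: bool = False

    def report_delivery(self, recorded_max_temp: float) -> str:
        """Settlement executes automatically once the condition is evaluated."""
        if self.settled:
            return "already settled"
        self.settled = True
        if recorded_max_temp <= self.max_temp_c:
            return f"release {self.amount} from {self.payer} to {self.payee}"
        return f"refund {self.amount} to {self.payer}"

contract = ShipmentContract("retailer", "carrier", 10_000.0, max_temp_c=8.0)
print(contract.report_delivery(recorded_max_temp=6.5))  # condition met: pay carrier
```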
Procedia PDF Downloads 125
94 The Influence of Thermal Radiation and Chemical Reaction on MHD Micropolar Fluid in The Presence of Heat Generation/Absorption
Authors: Binyam Teferi
Abstract:
Numerical and theoretical analyses of the mixed convection flow of a magnetohydrodynamic micropolar fluid over a stretching capillary in the presence of thermal radiation, chemical reaction, viscous dissipation, and heat generation/absorption have been carried out. The non-linear partial differential equations of momentum, angular velocity, energy, and concentration are converted into ordinary differential equations using similarity transformations, which can then be solved numerically. The dimensionless governing equations are solved using a fourth and fifth order Runge-Kutta method along with the shooting technique. The effects of the physical parameters, viz. the micropolar parameter, unsteadiness parameter, thermal buoyancy parameter, concentration buoyancy parameter, Hartmann number, spin gradient viscosity parameter, microinertial density parameter, thermal radiation parameter, Prandtl number, Eckert number, heat generation or absorption parameter, Schmidt number, and chemical reaction parameter, on the flow variables, viz. the velocity of the micropolar fluid, microrotation, temperature, and concentration, are analyzed and discussed graphically. MATLAB code is used for the numerical analysis. From the simulation study, it can be concluded that increments in the micropolar parameter, Hartmann number, unsteadiness parameter, and thermal and concentration buoyancy parameters result in a decrement in the velocity of the micropolar fluid; microrotation decreases with increments in the micropolar parameter, unsteadiness parameter, microinertial density parameter, and spin gradient viscosity parameter; the temperature profile decreases with increments in the thermal radiation parameter, Prandtl number, micropolar parameter, unsteadiness parameter, heat absorption parameter, and viscous dissipation parameter; and the concentration decreases as the unsteadiness parameter, Schmidt number, and chemical reaction parameter increase. Furthermore, computational values of the local skin friction coefficient, local wall couple coefficient, local Nusselt number, and local Sherwood number are reported for different parameter values: the local skin friction coefficient and the local couple stress are enhanced with increasing values of both the unsteadiness and micropolar parameters; the rate of heat transfer increases with the unsteadiness and thermal radiation parameters; and the Sherwood number decreases as the Schmidt number and unsteadiness parameter increase.
Keywords: thermal radiation, chemical reaction, viscous dissipation, heat absorption/generation, similarity transformation
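The generic Runge-Kutta-plus-shooting recipe used above can be sketched compactly. The study's coupled momentum/microrotation/energy/concentration system is far more elaborate and was solved in MATLAB; the Python/SciPy sketch below applies the same recipe to the classical Blasius boundary-layer equation f''' + ½ f f'' = 0 as a stand-in.

```python
# Shooting method with an adaptive 4th/5th-order Runge-Kutta integrator (RK45),
# demonstrated on the Blasius equation; a stand-in for the study's system.
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import brentq

def rhs(eta, y):
    f, fp, fpp = y
    return [fp, fpp, -0.5 * f * fpp]

def shoot(fpp0, eta_max=10.0):
    """Integrate with a guessed f''(0); return the far-field residual f'(inf) - 1."""
    sol = solve_ivp(rhs, (0.0, eta_max), [0.0, 0.0, fpp0], method="RK45")
    return sol.y[1, -1] - 1.0

# Root-find the initial slope so the boundary condition at infinity is met.
fpp0 = brentq(shoot, 0.1, 1.0)
print(f"f''(0) = {fpp0:.4f}")   # ~0.3321 for the Blasius problem
```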
Procedia PDF Downloads 125
93 Hygro-Thermal Modelling of Timber Decks
Authors: Stefania Fortino, Petr Hradil, Timo Avikainen
Abstract:
Timber bridges have an excellent environmental performance, are economical and relatively easy to build, and can have a long service life. However, the durability of these bridges is the main problem, because of their exposure to outdoor climate conditions. Moisture content accumulated in the wood over long periods, in combination with certain temperatures, may create conditions suitable for timber decay. In addition, moisture content variations affect the structural integrity, serviceability, and loading capacity of timber bridges. Therefore, monitoring the moisture content in the wood is important for the durability of the material and of the whole superstructure. Measurements obtained by the usual sensor-based techniques provide hygro-thermal data only at specific locations of the wood components. In this context, monitoring can be assisted by numerical modelling to obtain more information on the hygro-thermal response of the bridges. This work presents a hygro-thermal model based on a multi-phase moisture transport theory to predict the distribution of moisture content, relative humidity, and temperature in wood. Below the fibre saturation point, the multi-phase theory simulates three phenomena in cellular wood during moisture transfer: the diffusion of water vapour in the pores, the sorption of bound water, and the diffusion of bound water in the cell walls. In the multi-phase model, the two water phases are treated separately, and the coupling between them is defined through a sorption rate. Furthermore, an average between the temperature-dependent adsorption and desorption isotherms is used. In previous works by some of the authors, this approach was found very suitable for studying moisture transport in uncoated and coated stress-laminated timber decks. Compared to those works, the hygro-thermal fluxes on the external surfaces here include the influence of the absorbed solar radiation over time; consequently, the temperatures on the surfaces exposed to the sun are higher, which affects the whole hygro-thermal response of the timber component. The multi-phase model, implemented in a user subroutine of the Abaqus FEM code, provides the distribution of moisture content, temperature, and relative humidity in a volume of the timber deck. As a case study, hygro-thermal data are collected from the ongoing monitoring of the stress-laminated timber deck of Tapiola Bridge in Finland, based on integrated humidity-temperature sensors, and the numerical results are in good agreement with the measurements. The proposed model, used to assist the monitoring, can contribute to reducing the maintenance costs of bridges as well as the cost of instrumentation, and to increasing safety.
Keywords: moisture content, multi-phase models, solar radiation, timber decks, FEM
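The two-phase coupling described above can be illustrated with a deliberately simplified one-dimensional finite-difference sketch: vapour and bound water diffuse separately and exchange mass through a sorption rate. The diffusivities, sorption coefficient, equilibrium value, and boundary condition below are placeholders, not calibrated wood properties; the actual model is a 3D Abaqus user subroutine.

```python
# Conceptual 1D sketch of the multi-phase idea: vapour (v) and bound water (b)
# diffuse independently and are coupled through a sorption rate. Explicit FTCS
# scheme; all coefficients are illustrative placeholders.
import numpy as np

nx, dx, dt, steps = 50, 2e-3, 0.5, 50_000
D_v, D_b = 2.0e-6, 1.0e-10   # vapour / bound-water diffusivities (assumed)
k_sorp = 1.0e-4              # sorption rate coupling the phases (assumed)
b_eq = 0.12                  # bound water in equilibrium with vapour (simplified constant)

v = np.full(nx, 0.008)       # vapour concentration field
b = np.full(nx, 0.10)        # bound-water (moisture content) field

for _ in range(steps):
    s = k_sorp * (b_eq - b)  # sorption: vapour -> bound water below equilibrium
    v[1:-1] += dt * (D_v * (v[2:] - 2 * v[1:-1] + v[:-2]) / dx**2 - s[1:-1])
    b[1:-1] += dt * (D_b * (b[2:] - 2 * b[1:-1] + b[:-2]) / dx**2 + s[1:-1])
    v[0] = v[-1] = 0.009     # surface vapour imposed by outdoor climate (placeholder)

print(f"mean moisture content: {b.mean():.4f}")
```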
Procedia PDF Downloads 174
92 Evolving Credit Scoring Models using Genetic Programming and Language Integrated Query Expression Trees
Authors: Alexandru-Ion Marinescu
Abstract:
A plethora of methods in the scientific literature tackle the well-established task of credit score evaluation. In its most abstract form, a credit scoring algorithm takes as input several credit applicant properties, such as age, marital status, employment status, loan duration, etc., and must output a binary response variable (i.e. "GOOD" or "BAD") stating whether the client is susceptible to payment return delays. Data imbalance is a common occurrence in financial institution databases, with the majority classified as "GOOD" clients (clients that respect the loan return calendar) alongside a small percentage of "BAD" clients. But it is the "BAD" clients we are interested in, since accurately predicting their behavior is crucial in preventing unwanted losses for loan providers. We add to this context the constraint that the algorithm must yield an actual, tractable mathematical formula, which is friendlier towards financial analysts. To this end, we have turned to genetic algorithms and genetic programming, aiming to evolve actual mathematical expressions using specially tailored mutation and crossover operators. As far as data representation is concerned, we employ a very flexible mechanism, LINQ expression trees, readily available in the C# programming language, enabling us to construct executable pieces of code at runtime. As the title implies, they model trees, with intermediate nodes being operators (addition, subtraction, multiplication, division) or mathematical functions (sin, cos, abs, round, etc.) and leaf nodes storing either constants or variables. There is a one-to-one correspondence between the client properties and the formula variables. The mutation and crossover operators work on a flattened version of the tree, obtained via a pre-order traversal. A consequence of our chosen technique is that we can identify and discard client properties which do not take part in the final score evaluation, effectively acting as a dimensionality reduction scheme. We compare ourselves with state-of-the-art approaches, such as support vector machines, Bayesian networks, and extreme learning machines, to name a few. The data sets we benchmark against amount to a total of 8, among them the well-known Australian credit and German credit data sets, and the performance indicators are the percentage correctly classified, area under curve, partial Gini index, H-measure, Brier score, and Kolmogorov-Smirnov statistic, respectively. Finally, we obtain encouraging results which, although placing us in the lower half of the hierarchy, drive us to further refine the algorithm.
Keywords: expression trees, financial credit scoring, genetic algorithm, genetic programming, symbolic evolution
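The core mechanics (formula trees, pre-order flattening, subtree crossover) can be sketched briefly. The study compiles C# LINQ expression trees at runtime; the Python sketch below only mimics that idea with nested tuples, and the operator set, variables, and depth limit are simplified placeholders.

```python
# Toy genetic-programming representation: formula trees as nested tuples,
# flattened via pre-order traversal for subtree selection, as in the paper's scheme.
import operator
import random

OPS = {"add": operator.add, "sub": operator.sub, "mul": operator.mul}
VARS = ["age", "loan_duration", "income"]   # stand-ins for applicant properties

def random_tree(depth=3):
    if depth == 0 or random.random() < 0.3:
        return random.choice(VARS + [round(random.uniform(-2, 2), 2)])
    return (random.choice(list(OPS)), random_tree(depth - 1), random_tree(depth - 1))

def evaluate(tree, client):
    if isinstance(tree, tuple):
        op, left, right = tree
        return OPS[op](evaluate(left, client), evaluate(right, client))
    return client[tree] if isinstance(tree, str) else tree

def flatten(tree):
    """Pre-order traversal returning every subtree (the 'flattened' view)."""
    nodes = [tree]
    if isinstance(tree, tuple):
        nodes += flatten(tree[1]) + flatten(tree[2])
    return nodes

def crossover(a, b):
    """Replace one randomly chosen subtree of `a` with one taken from `b`."""
    donor, chosen, done = random.choice(flatten(b)), None, [False]
    chosen = random.choice(flatten(a))
    def rebuild(t):
        if not done[0] and t is chosen:
            done[0] = True
            return donor
        if isinstance(t, tuple):
            return (t[0], rebuild(t[1]), rebuild(t[2]))
        return t
    return rebuild(a)

client = {"age": 35, "loan_duration": 24, "income": 1.8}
score = evaluate(crossover(random_tree(), random_tree()), client)
```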
Procedia PDF Downloads 116
91 Evaluation of Rheological Properties, Anisotropic Shrinkage, and Heterogeneous Densification of Ceramic Materials during Liquid Phase Sintering by Numerical-Experimental Procedure
Authors: Hamed Yaghoubi, Esmaeil Salahi, Fateme Taati
Abstract:
The effective shear and bulk viscosities, as well as the dynamic viscosity, describe the rheological properties of a ceramic body during the liquid phase sintering process. The rheological parameters depend on the physical and thermomechanical characteristics of the material, such as relative density, temperature, grain size, diffusion coefficient, and activation energy. The main goal of this research is to acquire a comprehensive understanding of the response of an incompressible viscous ceramic material during the liquid phase sintering process, including the stress-strain relations, the sintering and hydrostatic stresses, and the prediction of anisotropic shrinkage and heterogeneous densification as functions of sintering time, accounting for the simultaneous influence of the gravity field and frictional forces. After raw material analysis, a standard hard porcelain mixture was designed and prepared as the ceramic body. Three experimental configurations were designed: midpoint deflection, sinter bending, and free sintering samples. The numerical method for the ceramic specimens during the liquid phase sintering process is implemented in the CREEP user subroutine of the ABAQUS code. The numerical-experimental procedure shows the anisotropic behavior, the marked differences in spatial displacement along the three directions, and the incompressibility of the ceramic samples during the sintering process. An anisotropic shrinkage factor is proposed to quantify the shrinkage anisotropy. It is shown that the shrinkage along the normal axis of the cast sample is about 1.5 times larger than that along the casting direction, and that the gravitational force acting in pyroplastic deformation intensifies the shrinkage anisotropy more than in the free sintering sample. The lowest and greatest equivalent creep strains occur at the intermediate zone and around the central line of the midpoint distorted sample, respectively. In the sinter bending test sample, the equivalent creep strain approaches its maximum near the contact area with the refractory support. The inhomogeneity in von Mises, pressure, and principal stresses intensifies the non-uniformity of the relative density in all samples except the free sintering one; the symmetrical stress distribution around the center of the free sintering sample helps hinder pyroplastic deformation. The densification results confirm that the effective bulk viscosity is well defined by the relative density values. The stress analysis confirms that the sintering stress exceeds the hydrostatic stress from the start to the end of the sintering time, so that, from both theoretical and experimental points of view, the sintering process runs to completion.
Keywords: anisotropic shrinkage, ceramic material, liquid phase sintering process, rheological properties, numerical-experimental procedure
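A minimal sketch of the anisotropic shrinkage factor mentioned above: the ratio of linear shrinkage normal to the casting plane to that along the casting direction. The sample dimensions below are hypothetical; only the resulting ratio of about 1.5 reflects the value reported in the abstract.

```python
# Anisotropic shrinkage factor = (normal-direction shrinkage) / (casting-direction
# shrinkage); dimensions are made up so the ratio lands at the reported ~1.5.
def linear_shrinkage(initial_mm: float, sintered_mm: float) -> float:
    return (initial_mm - sintered_mm) / initial_mm

s_normal = linear_shrinkage(10.0, 8.8)      # through-thickness (normal) direction
s_casting = linear_shrinkage(100.0, 92.0)   # casting direction
print(f"anisotropic shrinkage factor: {s_normal / s_casting:.2f}")   # -> 1.50
```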
Procedia PDF Downloads 340
90 Seismic Retrofits – A Catalyst for Minimizing the Building Sector’s Carbon Footprint
Authors: Juliane Spaak
Abstract:
A life-cycle assessment was performed on seven retrofit projects in New Zealand using LCAQuickV3.5. The study found that retrofits save up to 80% of embodied carbon emissions for the structural elements compared to a new building. In other words, it takes only a 20% carbon investment to transform and extend a building's life. In addition, the systems were evaluated by looking at environmental impacts over the design life of these buildings, and at resilience using FEMA P-58 and the PACT software. With the increasing interest in zero carbon targets, significant changes in the building and construction sector are required. Emissions from buildings arise from both embodied carbon and operations. Given the significant advancements in building energy technology, the focus is moving more towards embodied carbon, a large portion of which is associated with the structure. Since older buildings make up most of the real estate stock of cities around the world, their reuse through structural retrofit and wider refurbishment plays an important role in extending the life of a building's embodied carbon. New Zealand's building owners and engineers have learned a lot about seismic issues following a decade of significant earthquakes. Recent earthquakes have brought to light the necessity of moving away from constructing code-minimum structures that are designed for life safety but are frequently 'disposable' after a moderate earthquake event, especially with regard to a structure's ability to minimize damage. This means weaker buildings sit as 'carbon liabilities', with considerably more carbon likely to be expended remediating damage after a shake. Renovating and retrofitting older assets plays a big part in reducing the carbon profile of the building sector, as breathing new life into a building's structure is vastly more sustainable than even the highest quality 'green' new builds, which are inherently more carbon-intensive. The demolition of viable older buildings (often including heritage buildings) is increasingly at odds with society's desire for a lower carbon economy. Bringing seismic resilience and carbon best practice together in decision-making can open the door to commercially attractive outcomes, with retrofits that include structural and sustainability upgrades transforming an asset's revenue generation. Across the global real estate market, tenants increasingly demand that the buildings they occupy be resilient and aligned with their own climate targets. The relationship between seismic performance and 'sustainable design' has yet to fully mature, yet in a wider context it is of profound consequence. A whole-of-life carbon perspective on a building means designing for the likely natural hazards within the asset's expected lifespan, be they earthquakes, storm damage, bushfires, and so on, with financial mitigation (e.g., insurance) part, but not all, of the picture.
Keywords: retrofit, sustainability, earthquake, reuse, carbon, resilient
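A back-of-envelope reading of the headline figure, with hypothetical numbers: the 1,000 tCO2e baseline below is invented purely to show how the 80% saving translates into a 20% carbon investment.

```python
# Hypothetical arithmetic behind the "20% carbon investment" framing above.
new_build_embodied = 1_000.0                      # tCO2e for new structure (assumed)
retrofit_saving = 0.80                            # up to 80% saving, per the study
retrofit_embodied = new_build_embodied * (1 - retrofit_saving)
print(f"retrofit embodied carbon: {retrofit_embodied:.0f} tCO2e "
      f"({retrofit_embodied / new_build_embodied:.0%} of a new build)")
```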
Procedia PDF Downloads 72
89 A Demonstration of How to Employ and Interpret Binary IRT Models Using the New IRT Procedure in SAS 9.4
Authors: Ryan A. Black, Stacey A. McCaffrey
Abstract:
Over the past few decades, great strides have been made towards improving the science of measuring psychological constructs. Item Response Theory (IRT) has been the foundation upon which statistical models have been derived to increase both precision and accuracy in psychological measurement. These models are now widely used to develop and refine tests intended to measure an individual's level of academic achievement, aptitude, and intelligence. Recently, the field of clinical psychology has adopted IRT models to measure psychopathological phenomena such as depression, anxiety, and addiction. Because advances in IRT measurement models are being made so rapidly across various fields, it has become quite challenging for psychologists and other behavioral scientists to keep abreast of the most recent developments, much less learn how to employ them and decide which models are most appropriate for their line of work. In the same vein, IRT measurement models vary greatly in complexity in several interrelated ways, including but not limited to the number of item-specific parameters estimated in a given model, the function which links the expected response and the predictor, the response option format, and dimensionality. As a result, inferior methods (i.e., Classical Test Theory methods) continue to be employed in efforts to measure psychological constructs, despite evidence showing that IRT methods yield more precise and accurate measurement. To increase the use of IRT methods, this study endeavors to provide a comprehensive overview of binary IRT models, that is, measurement models employed on test data consisting of binary response options (e.g., correct/incorrect, true/false, agree/disagree). Specifically, this study covers models from the most basic binary IRT model, the 1-parameter logistic (1-PL) model dating back over 50 years, up to the most recent and complex 4-parameter logistic (4-PL) model. The binary IRT models are defined mathematically and the interpretation of each parameter is provided. Next, all four binary IRT models are employed on two sets of data: (1) simulated data of N = 500,000 subjects who responded to four dichotomous items, and (2) a pilot analysis of real-world data collected from a sample of approximately 770 subjects who responded to four self-report dichotomous items pertaining to emotional consequences of alcohol use. The real-world data were based on responses collected on items administered to subjects as part of a scale-development study (NIDA Grant No. R44 DA023322). The IRT analyses conducted on both the simulated data and the real-world pilot data provide a clear demonstration of how to construct, evaluate, and compare binary IRT measurement models. All analyses were performed using the new IRT procedure in SAS 9.4. SAS code to generate the simulated data and analyses is available upon request to allow for replication of results.
Keywords: instrument development, item response theory, latent trait theory, psychometrics
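The 4-PL item response function the paper builds up to is standard and worth stating: P(θ) = c + (d − c)/(1 + exp(−a(θ − b))), where a is discrimination, b difficulty, c the lower asymptote (guessing), and d the upper asymptote. The paper's analyses use PROC IRT in SAS 9.4; the Python sketch below merely shows the function and how binary responses can be simulated from it, with illustrative parameter values.

```python
# 4-PL item response function and response simulation (illustrative; the paper
# performs the actual estimation with PROC IRT in SAS 9.4).
import numpy as np

def p_correct(theta, a, b, c, d):
    """4-PL model; c=0, d=1 recovers the 2-PL, and additionally a=1 gives the 1-PL."""
    return c + (d - c) / (1.0 + np.exp(-a * (theta - b)))

rng = np.random.default_rng(42)
theta = rng.standard_normal(500_000)     # latent trait, as in the simulated data
items = [(1.2, -0.5, 0.15, 0.98), (0.8, 0.3, 0.20, 1.00),
         (1.5, 1.0, 0.10, 0.95), (1.0, 0.0, 0.25, 0.99)]   # (a, b, c, d), assumed

responses = np.column_stack([
    rng.random(theta.size) < p_correct(theta, *prm) for prm in items
]).astype(int)                           # N x 4 binary response matrix
```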
Procedia PDF Downloads 356
88 Hyperelastic Constitutive Modelling of the Male Pelvic System to Understand the Prostate Motion, Deformation and Neoplasms Location with the Influence of MRI-TRUS Fusion Biopsy
Authors: Muhammad Qasim, Dolors Puigjaner, Josep Maria López, Joan Herrero, Carme Olivé, Gerard Fortuny
Abstract:
Computational modeling of the human pelvis using the finite element (FE) method has become extremely important for understanding the mechanics of prostate motion and deformation when a transrectal ultrasound (TRUS) guided biopsy is performed. The number of reliable and validated hyperelastic constitutive FE models of the male pelvic region is limited, and the available models do not precisely describe the anatomical behavior of the pelvic organs, mainly of the prostate and the location of its neoplasms. The motion and deformation of the prostate during a TRUS-guided biopsy make it difficult to know the location of potential lesions in advance; practitioners can only provide rough estimates of lesion locations, and consequently multiple biopsy samples are required to target a single lesion. In this study, a whole pelvis model (comprising the rectum, bladder, pelvic muscles, prostate transition zone (TZ), and peripheral zone (PZ)) is used for the simulations. An isotropic hyperelastic approach (the Signorini model) was used for all the soft tissues except the vesical muscles. The vesical muscles are assumed to have linear elastic behavior, owing to the lack of experimental data for determining the constants involved in hyperelastic models. The tissue and organ geometry of the 3D meshes is taken from the existing literature, and the biomechanical parameters were obtained from the various testing techniques described in the literature. The acquired parametric values for the uniaxial stress/strain data are used in the Signorini model to examine the anatomical behavior of the pelvis model. Five mesh nodes representing small prostate lesions were selected prior to biopsy, and each lesion's final position was tracked when a TRUS probe force of 30 N was applied to the inner rectum wall. The open-source software Code_Aster is used for the numerical simulations. Moreover, the overall deformation of the pelvic organs under TRUS-guided biopsy is demonstrated. The deformation of the prostate and the displacement of the neoplasms showed that assigning appropriate material properties to the organs parametrically altered the resulting lesion migration: the distance traveled by the lesions ranged between 3.77 and 9.42 mm. The lesion displacements and organ deformations are compared with and analyzed against our previous study, in which linear elastic properties were used for all pelvic organs. Furthermore, axial and sagittal slices from Magnetic Resonance Imaging (MRI) and TRUS images are visually compared with those of our preliminary study.
Keywords: code-aster, magnetic resonance imaging, neoplasms, transrectal ultrasound, TRUS-guided biopsy
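For reference, the Signorini strain-energy function used for the soft tissues is commonly written in the polynomial form below. This is a standard statement of the model rather than a formula quoted from the paper; the volumetric penalty term and the tissue-dependent constants C10, C01, C20, and K are stated under that assumption. Here Ī1 and Ī2 are the first and second deviatoric strain invariants and J is the volume ratio.

```latex
% Signorini hyperelastic strain-energy density (standard polynomial form)
W = C_{10}\left(\bar{I}_1 - 3\right) + C_{01}\left(\bar{I}_2 - 3\right)
  + C_{20}\left(\bar{I}_1 - 3\right)^{2} + \frac{K}{2}\left(J - 1\right)^{2}
```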
Procedia PDF Downloads 85
87 Sentiment Analysis on University Students’ Evaluation of Teaching and Their Emotional Engagement
Authors: Elisa Santana-Monagas, Juan L. Núñez, Jaime León, Samuel Falcón, Celia Fernández, Rocío P. Solís
Abstract:
Teaching practices have been widely studied in relation to students' outcomes, positioning themselves as one of their strongest catalysts and influencing students' emotional experiences. In the higher education context, teachers become even more crucial, as many students ground their decisions on which courses to enroll in on the opinions and ratings of teachers given by other students. Unfortunately, universities sometimes do not provide the personal, social, and academic stimulation students need to be actively engaged. To evaluate their teachers, universities often rely on students' evaluations of teaching (SET) collected via Likert-scale surveys. Despite its usefulness, this method has been questioned in terms of validity and reliability. Alternatively, researchers can rely on qualitative answers to open-ended questions; however, the unstructured nature of the answers and the large amount of information obtained require an overwhelming amount of work. The present work presents an alternative approach to analysing such data: sentiment analysis (SA). To the best of our knowledge, no research before has incorporated SA results into an explanatory model to test how students' sentiments affect their emotional engagement in class. The sample of the present study comprised 225 university students (mean age = 26.16, SD = 7.4, 78.7% women) from the Educational Sciences faculty of a public university in Spain. Data collection took place during the academic year 2021-2022. Students accessed an online questionnaire using a QR code and were asked to answer the following open-ended question: "If you had to explain to a peer who doesn't know your teacher how he or she communicates in class, what would you tell them?". Sentiment analysis was performed using Microsoft's pre-trained model. The reliability of the measure was estimated between the tool and one of the researchers, who coded all answers independently; Cohen's kappa and the average pairwise percent agreement were estimated with ReCal2. Cohen's kappa was .68 and the agreement reached was 90.8%, both considered satisfactory. To test the hypothesized relations between SA and students' emotional engagement, a structural equation model (SEM) was estimated. The results demonstrated a good fit to the data: RMSEA = .04, SRMR = .03, TLI = .99, CFI = .99. Specifically, students' sentiment regarding their teachers' teaching positively predicted their emotional engagement (β = .16 [.02, .30]). In other words, when students' opinions of their instructors' teaching practices are positive, students are more likely to engage emotionally in the subject. Altogether, the results show a promising future for sentiment analysis techniques in the field of education and suggest the usefulness of this tool for evaluating relations between teaching practices and student outcomes.
Keywords: sentiment analysis, students' evaluation of teaching, structural-equation modelling, emotional engagement
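The agreement check described above (Cohen's kappa plus pairwise percent agreement) is easy to sketch. The study computed these with ReCal2; the Python sketch below reproduces the same two statistics on made-up codings of the 225 answers, assuming a three-level negative/neutral/positive coding (0/1/2), which is an assumption rather than the study's scheme.

```python
# Illustrative inter-rater agreement between the SA tool and a human coder;
# the codings here are synthetic, not the study's data.
import numpy as np
from sklearn.metrics import cohen_kappa_score

rng = np.random.default_rng(7)
tool = rng.integers(0, 3, 225)                    # sentiment tool's codes
human = np.where(rng.random(225) < 0.9, tool,     # human agrees ~90% of the time
                 rng.integers(0, 3, 225))

kappa = cohen_kappa_score(tool, human)
percent_agreement = float(np.mean(tool == human))
print(f"kappa = {kappa:.2f}, agreement = {percent_agreement:.1%}")
```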
Procedia PDF Downloads 82
86 Evaluation of the Boiling Liquid Expanding Vapor Explosion Thermal Effects in Hassi R'Mel Gas Processing Plant Using Fire Dynamics Simulator
Authors: Brady Manescau, Ilyas Sellami, Khaled Chetehouna, Charles De Izarra, Rachid Nait-Said, Fati Zidani
Abstract:
During a fire in an oil and gas refinery, several thermal accidents can occur and cause serious damage to people and the environment. Among these accidents, the BLEVE (Boiling Liquid Expanding Vapor Explosion) is the most frequently observed and remains a major concern for risk decision-makers. It corresponds to a violent vaporization of explosive nature following the rupture of a vessel containing a liquid at a temperature significantly higher than its normal boiling point at atmospheric pressure. Its effects on the surroundings generally appear in three ways: blast overpressure, radiation from the fireball if the liquid involved is flammable, and fragment hazards. In order to estimate the potential damage that such an explosion would cause, risk decision-makers often use quantitative risk analysis (QRA). This analysis is a rigorous and advanced approach that requires reliable data in order to obtain a good estimate and control of the risks. However, in most cases, the data used in QRA are obtained from empirical correlations. These empirical correlations generally overestimate BLEVE effects because they are based on simplifications and do not take into account real parameters such as the geometry effect. Considering that these risk analyses are based on an assessment of BLEVE effects on human life and plant equipment, more precise and reliable data should be provided. From this point of view, CFD modeling of BLEVE effects appears to be a solution to the limitations of the empirical laws. In this context, the main objective is to develop a numerical tool to predict BLEVE thermal effects using the CFD code FDS version 6. Simulations are carried out with a mesh size of 1 m. The fireball source is modeled as a vertical release of hot fuel over a short time. The modeling of the fireball dynamics is based on single-step combustion using an EDC model coupled with the default LES turbulence model. Fireball characteristics (diameter, height, heat flux, and lifetime) taken from the large-scale BAM experiment are used to demonstrate the ability of FDS to simulate the various stages of the BLEVE phenomenon from ignition up to total burnout. The influence of release parameters, such as the injection rate and the radiative fraction, on the fireball heat flux is also presented. The predictions are very encouraging and show good agreement with the BAM experimental data. In addition, a numerical study is carried out on an operational propane accumulator in an Algerian gas processing plant of the SONATRACH company, located in the Hassi R'Mel Gas Field (the largest gas field in Algeria).
Keywords: BLEVE effects, CFD, FDS, fireball, LES, QRA
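For context on the empirical correlations the abstract contrasts with CFD, one widely quoted Roberts-type set (as given in the CCPS/TNO handbooks) relates the fireball's maximum diameter and duration to the fuel mass. The sketch below implements that set; the propane mass is a hypothetical example, not a figure from the study.

```python
# Classic empirical fireball correlations (Roberts-type, CCPS/TNO form):
#   D_max = 5.8 * m^(1/3)  [m],  t = 0.45 * m^(1/3) for m < 30,000 kg,
#   else t = 2.6 * m^(1/6) [s],  with m the fuel mass in kg.
def fireball_diameter_m(mass_kg: float) -> float:
    return 5.8 * mass_kg ** (1 / 3)

def fireball_duration_s(mass_kg: float) -> float:
    if mass_kg < 30_000:
        return 0.45 * mass_kg ** (1 / 3)
    return 2.6 * mass_kg ** (1 / 6)

m = 10_000.0   # hypothetical propane mass involved, kg
print(f"D_max ~ {fireball_diameter_m(m):.0f} m, t ~ {fireball_duration_s(m):.1f} s")
```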
Procedia PDF Downloads 184