Search results for: ecological modeling
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 5027

587 Ecosystem Engineering Strengthens Bottom-Up and Weakens Top-Down Effects via Trait-Mediated Indirect Interactions

Authors: Zhiwei Zhong, Xiaofei Li, Deli Wang

Abstract:

Ecosystem engineering is a powerful force shaping community structure and ecosystem function. Yet, very little is known about the mechanisms by which engineers affect vital ecosystem processes like trophic interactions. Here, we examine the potential for a herbivore ecosystem engineer, domestic sheep, to affect trophic interactions between the web-building spider Argiope bruennichi, its grasshopper prey Euchorthippus spp., and the grasshoppers’ host plant Leymus chinensis. By integrating small- and large-scale field experiments, we demonstrate that: 1) moderate sheep grazing changed the structure of plant communities by suppressing strongly interacting forbs within a grassland matrix; 2) this change in plant community structure drove interaction modifications between the grasshoppers and their grass host plants and between grasshoppers and their spider predators, and 3) these interaction modifications were entirely mediated by plasticity in grasshopper behavior. Overall, ecosystem engineering by sheep grazing strengthened bottom-up effects and weakened top-down effects via trait-mediated interactions, resulting in a nearly two-fold increase in grasshopper densities. Interestingly, the grasshopper behavioral shifts which reduced spider per capita predation rates in the microcosms did not translate to reduced spider predation rates at the larger system scale because increased grasshopper densities offset behavioral effects at larger scales. Our findings demonstrate that 1) ecosystem engineering can strongly alter trophic interactions, 2) such effects can be driven by cryptic trait-mediated interactions, and 3) the relative importance of trait- versus density effects as measured by microcosm experiments may not reflect the importance of these processes at realistic ecological scales due to scale-dependent interactions.

Keywords: bottom-up effects, ecosystem engineering, trait-mediated indirect effects, top-down effects

Procedia PDF Downloads 330
586 A Scoping Review of Psychosocial Interventions for the Survivors and/or Victims of Intimate Partner Violence in Low- and Middle-Income Countries

Authors: Mukondi Nethavhakone

Abstract:

The high prevalence of violence against women is a global public health problem. Our societies have become dangerous places for women. Women of child-bearing age are at a higher risk of experiencing emotional, physical, and sexual violence. What makes it more concerning is that these violent acts are perpetrated by family members, partners, or ex-partners. Intimate Partner Violence (IPV) is associated with long-lasting physical, reproductive, sexual, mental, and maternal health implications. Expectedly, women’s mental health diminishes as a result of experiencing IPV. The burden of violence against women is seen to be heavier in low- and middle-income countries (LMICs) compared to the rest of the world. Countries have committed to eliminating all forms of violence against women through the Sustainable Development Goals, aiming to see changes by the year 2030. As such, various countries have implemented psychosocial interventions of different levels of impact. However, little is known, especially in low- and middle-income countries, with regard to the potential of psychosocial interventions for IPV to improve the mental health outcomes of the survivors and/or victims of IPV. Analysing the risk for IPV through a social-ecological theoretical approach shows that low- and middle-income countries are still addressing gender inequality, which is a root cause of intimate partner violence. That is why it is taking time for these countries to shift psychosocial interventions to focus more on the improvement of the mental health of the survivors. It is, therefore, against this backdrop that the researcher intends to undertake a scoping review to understand the nature and characteristics of psychosocial interventions that have been implemented in low- and middle-income countries. With the findings from the scoping review, the researcher aims to develop a conceptual framework that may be a useful resource for healthcare practitioners and researchers in low- and middle-income countries. As this area of research has not been thoroughly reviewed, the results from this scoping review will determine whether a systematic review will be justifiable. Additionally, the researcher will identify gaps and opportunities for future research in this area.

Keywords: mental health improvement, psychosocial interventions, intimate partner violence, LMICs

Procedia PDF Downloads 113
585 Using the Structural Equation Model to Explain the Effect of Supervisory Practices on Regulatory Density

Authors: Jill Round

Abstract:

In the economic system, the financial sector plays a crucial role as an intermediary between market participants, other financial institutions, and customers. Financial institutions such as banks have to make decisions to satisfy the demands of all the participants by keeping abreast of regulatory change. In recent years, progress has been made regarding frameworks, development of rules, standards, and processes to manage risks in the banking sector. The increasing focus of regulators and policymakers placed on risk management, corporate governance, and the organization’s culture is of special interest as it requires a well-resourced risk controlling function, compliance function, and internal audit function. In the past years, the relevance of these functions that make up the so-called Three Lines of Defense has moved from the backroom to the boardroom. The approach of the model can vary based on the various organizational characteristics. Due to the intense regulatory requirements, organizations operating in the financial sector have more mature models. In less regulated industries there is more cloudiness about what tasks are allocated where. All parties strive to achieve their objectives through the effective management of risks and serve the same stakeholders. Today, the Three Lines of Defense model is used throughout the world. The research looks at trends and emerging issues in the professions of the Three Lines of Defense within the banking sector. The answers are believed to help explain the increasing regulatory requirements for the banking sector. As the number of supervisory practices increases, risk management requirements intensify and demand more regulatory compliance at the same time. Structural Equation Modeling (SEM) is applied, making use of surveys conducted in the research field. It aims to describe (i) the theoretical model regarding the applicable linear relationships, (ii) the causal relationships between multiple predictors (exogenous variables) and multiple dependent (endogenous) variables, (iii) the unobservable latent variables, and (iv) the measurement errors. The surveys conducted in the research field suggest that the observable variables are caused by various latent variables. The SEM consists of 1) the measurement model and 2) the structural model. There is a detectable correlation regarding the cause-effect relationship between the performed supervisory practices and the increasing scope of regulation. Supervisory practices reinforce the regulatory density. In the past, controls were put in place after supervisory practices were conducted or incidents occurred. In further research, it is of interest to examine whether risk management is proactive, reactive to incidents and supervisory practices, or both at the same time.
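
The abstract does not include model code; as a rough illustration of how a measurement model and a structural model can be specified and estimated together, the sketch below uses the semopy package with hypothetical constructs and indicator names (SupervisoryPractice, RegulatoryDensity, sp1..rd3), none of which are the study's actual survey variables.

```python
# Minimal SEM sketch (assumes the semopy package; construct and indicator names
# are hypothetical illustrations, not the study's actual survey items).
import pandas as pd
import semopy

# Measurement model (latent =~ indicators) plus structural model (latent ~ latent)
model_desc = """
SupervisoryPractice =~ sp1 + sp2 + sp3
RegulatoryDensity   =~ rd1 + rd2 + rd3
RegulatoryDensity   ~ SupervisoryPractice
"""

survey = pd.read_csv("survey_responses.csv")   # hypothetical survey data file
model = semopy.Model(model_desc)
model.fit(survey)                              # maximum-likelihood estimation by default
print(model.inspect())                         # loadings, path coefficient, p-values
```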

Keywords: risk management, structural equation model, supervisory practice, three lines of defense

Procedia PDF Downloads 204
584 Exploring Forest Biomass Changes in Romania in the Last Three Decades

Authors: Remus Pravalie, Georgeta Bandoc

Abstract:

Forests are crucial for humanity and biodiversity, through the various ecosystem services and functions they provide all over the world. Forest ecosystems are vital in Romania as well, through their various benefits, known as provisioning (food, wood, or fresh water), regulating (water purification, soil protection, carbon sequestration or control of climate change, floods, and other hazards), cultural (aesthetic, spiritual, inspirational, recreational or educational benefits) and supporting (primary production, nutrient cycling, and soil formation processes, with direct or indirect importance for human well-being) ecosystem services. These ecological benefits are of great importance in Romania, especially given the fact that forests cover extensive areas countrywide, i.e. ~6.5 million ha or ~27.5% of the national territory. However, the diversity and functionality of these ecosystem services fundamentally depend on certain key attributes of forests, such as biomass, which has so far not been studied nationally in terms of potential changes due to climate change and other driving forces. This study investigates, for the first time, changes in forest biomass in Romania in recent decades, based on a high volume of satellite data (Landsat images at high spatial resolutions), downloaded from the Google Earth Engine platform and processed (using specialized software and methods) across Romanian forestland boundaries from 1987 to 2018. A complex climate database was also investigated across Romanian forests over the same 32-year period, in order to detect potential similarities and statistical relationships between the dynamics of biomass and climate data. The results obtained indicated considerable changes in forest biomass in Romania in recent decades, largely triggered by the climate change that affected the country after 1987. Findings on the complex pattern of recent forest changes in Romania, which will be presented in detail in this study, can be useful to national policymakers in the fields of forestry, climate, and sustainable development.
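
The abstract does not detail the processing chain; the sketch below is a minimal illustration of extracting a Landsat-based vegetation signal over a national boundary with the Google Earth Engine Python API. The asset IDs, band names, and the use of summer NDVI as a biomass proxy are assumptions for illustration, not the authors' exact workflow.

```python
# Sketch: annual mean summer NDVI over Romania from Landsat 5 surface reflectance
# (assumes the earthengine-api package and Collection 2 Level-2 asset/band names).
import ee

ee.Initialize()

romania = (ee.FeatureCollection("FAO/GAUL/2015/level0")
           .filter(ee.Filter.eq("ADM0_NAME", "Romania")))

def annual_ndvi(year):
    col = (ee.ImageCollection("LANDSAT/LT05/C02/T1_L2")
           .filterBounds(romania.geometry())
           .filterDate(f"{year}-06-01", f"{year}-08-31"))
    # NDVI from the red (SR_B3) and near-infrared (SR_B4) surface-reflectance bands
    ndvi = col.map(lambda img: img.normalizedDifference(["SR_B4", "SR_B3"])).mean()
    stats = ndvi.reduceRegion(ee.Reducer.mean(), romania.geometry(), scale=300)
    return stats.get("nd").getInfo()

for year in range(1987, 1992):        # short demo range; the study covers 1987-2018
    print(year, annual_ndvi(year))
```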

Keywords: forests, biomass, climate change, trends, Romania

Procedia PDF Downloads 140
583 The Planning and Development of Green Public Places in Urban South Africa: A Child-Friendly Approach

Authors: E. J. Cilliers, Z. Goosen

Abstract:

The impact that urban green spaces have on sustainability and quality of life is phenomenal. This is also true for the local South African environment. However, in reality, green spaces in urban environments are decreasing due to growing populations, increasing urbanization and development pressure. This further impacts on the provision of child-friendly spaces, a concept that is already limited in the local context. Child-friendly spaces are described as environments to which people (children) feel intimately connected, and which influence the physical, social, emotional, and ecological health of individuals and communities. The benefits of providing such spaces for the youth are well documented in the literature. This research therefore aimed to investigate the concept of child-friendly spaces and its applicability to the South African planning context, in order to guide the planning of such spaces for future communities and use. Child-friendly spaces in the urban environment of the city of Durban were used as a local case study, along with two international case studies, namely the Mullerpier public playground in Rotterdam, the Netherlands, and Kadidjiny Park in Melville, Australia. The aim was to determine how these spaces were planned and developed and to identify tools that were used to accomplish the goal of providing successful child-friendly green spaces within urban areas. The need and significance of planning for such spaces were portrayed within the international case studies. It is confirmed that, when reflecting on the international examples, minimal provision is made for green space planning within the South African context. As a result, international examples and principles of providing child-friendly green spaces should direct planning guidelines within the local context. The research concluded that child-friendly green spaces have a positive impact on the urban environment and assist in a child’s development and interaction with the natural environment. Regrettably, the planning of these child-friendly spaces is not given priority within current spatial plans, despite the proven benefits of such spaces.

Keywords: built environment, child-friendly spaces, green spaces, public places, urban area

Procedia PDF Downloads 428
582 Bioecological Assessment of Cage Farming on the Soft Bottom Benthic Communities of the Vlora Gulf (Albania)

Authors: Ina Nasto, Denada Sota, Pudrila Haskoçelaj, Mariola Ismailaj, Hajdar Kicaj

Abstract:

Most of the fishing areas of the Mediterranean Sea are considered to be overfished; consequently, fishing has decreased or is static. Considering the continuous increase in demand for fish, aquaculture production has grown considerably in recent decades. The environmental impact of aquaculture on the marine ecosystem has been a subject of study for several years in the Mediterranean. Albanian waters, and in particular the Gulf of Vlora, have seen progressive growth in aquaculture activity over the last twenty years. Given its convenient and secluded location for tourist activities, the bay of Ragusa was considered the most suitable area in which to install the aquaculture cage system for the breeding of sea bass and sea bream. The impact of aquaculture on the soft-bottom benthic communities has been assessed at the biggest commercial fish farm (Alb-Adriatico Sh.P.K), established in coastal waters of Ragusa bay, 30–50 m deep, in the southern part of the Gulf of Vlora. In order to determine whether the aquaculture cages have an impact on benthic communities, a comparative analysis was undertaken between transects and samples located at different distances from one another and along a gradient of distance from the fish cages. A total of 275 taxa were identified (1 Foraminifera, 1 Porifera, 3 Cnidaria, 2 Platyhelminthes, 2 Nemertea, 1 Bryozoa, 171 Mollusca, 39 Annelida, 35 Crustacea, 14 Echinodermata, 1 Hemichordata, and 5 Tunicata). The analysis showed three main habitats in the area: biocoenosis of terrigenous mud, residual areas of Posidonia oceanica, and residual coralligenous algal assemblages. Four benthic biotic indices were calculated (Shannon H’, BENTIX, Simpson’s Diversity, and Pielou’s J’), together with benthic indicators such as total abundance, number of taxa, and species frequency, to evaluate the possible ecological impact of fish cages in Ragusa bay.
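
The diversity indices named above follow standard formulas; as a small illustration (not the authors' code), the sketch below computes Shannon H', Simpson's diversity, and Pielou's J' from a vector of species abundances. BENTIX is omitted because it additionally requires assigning each taxon to an ecological sensitivity group.

```python
# Sketch: standard diversity indices from species abundance counts (illustrative only).
import numpy as np

def diversity_indices(abundances):
    counts = np.asarray(abundances, dtype=float)
    counts = counts[counts > 0]
    p = counts / counts.sum()              # relative abundances
    shannon_h = -np.sum(p * np.log(p))     # Shannon H' (natural log)
    simpson_d = 1.0 - np.sum(p ** 2)       # Simpson's diversity (1 - dominance)
    pielou_j = shannon_h / np.log(len(p))  # Pielou's evenness J' = H'/ln(S)
    return shannon_h, simpson_d, pielou_j

# Example: abundances of taxa in one sample (hypothetical numbers)
h, d, j = diversity_indices([120, 35, 18, 7, 3, 1])
print(f"H'={h:.3f}  1-D={d:.3f}  J'={j:.3f}")
```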

Keywords: Bentix index, benthic community, invertebrates, aquaculture, Ragusa bay

Procedia PDF Downloads 84
581 Solid State Drive End to End Reliability Prediction, Characterization and Control

Authors: Mohd Azman Abdul Latif, Erwan Basiron

Abstract:

A flaw or drift from expected operational performance in one component (NAND, PMIC, controller, DRAM, etc.) may affect the reliability of the entire Solid State Drive (SSD) system. Therefore, it is important to ensure the required quality of each individual component through qualification testing specified using standards or user requirements. Qualification testing is time-consuming and comes at a substantial cost for product manufacturers. A highly technical team, drawn from all the key stakeholders, embarks on reliability prediction from the beginning of new product development, identifies critical-to-reliability parameters, performs full-blown characterization to embed margin into product reliability, and establishes controls to ensure that product reliability is sustainable in mass production. The paper discusses a comprehensive development framework covering the SSD end to end, from design to assembly, in-line inspection, and in-line testing, that is able to predict and validate product reliability at the early stage of new product development. During the design stage, the SSD will go through intense reliability margin investigation with a focus on assembly process attributes, process equipment control, in-process metrology, and the forward-looking product roadmap. Once these pillars are completed, the next step is to perform process characterization and build up reliability prediction modeling. Next, for the design validation process, a reliability prediction tool, specifically a solder joint simulator, will be established. The SSD will be stratified into non-operating and operating tests, with a focus on solder joint reliability and connectivity/component latent failures, addressed by prevention through design intervention and containment through the Temperature Cycle Test (TCT). Some of the SSDs will be subjected to physical solder joint analysis called Dye and Pry (DP) and Cross Section analysis. The result will be fed back to the simulation team for any corrective actions required to further improve the design. Once the SSD is validated and proven to work, the monitor phase will be implemented, whereby Design for Assembly (DFA) rules will be updated. At this stage, the design change, process and equipment parameters are in control. Predictable product reliability early in product development will enable on-time sample qualification delivery to the customer, optimize product development validation and development resources, and avoid forced late investment to patch end-of-life product failures. Understanding the critical-to-reliability parameters earlier will allow a focus on increasing the product margin, which will increase customer confidence in product reliability.
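
The abstract does not specify its prediction model; purely as a generic, hedged illustration of the kind of life-data analysis often paired with Temperature Cycle Test results, the sketch below fits a two-parameter Weibull distribution to hypothetical solder-joint cycles-to-failure.

```python
# Sketch: Weibull fit of hypothetical cycles-to-failure from a temperature cycle test.
# This is a generic illustration, not the reliability model described in the paper.
import numpy as np
from scipy import stats

cycles_to_failure = np.array([812, 945, 1020, 1100, 1180, 1295, 1410, 1530])  # hypothetical

# Fit shape (beta) and scale (eta) with the location parameter fixed at zero
beta, loc, eta = stats.weibull_min.fit(cycles_to_failure, floc=0)
print(f"shape beta = {beta:.2f}, characteristic life eta = {eta:.0f} cycles")

# Probability of surviving a 500-cycle qualification requirement under this fit
print("R(500 cycles) =", stats.weibull_min.sf(500, beta, loc=0, scale=eta))
```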

Keywords: e2e reliability prediction, SSD, TCT, solder joint reliability, NUDD, connectivity issues, qualifications, characterization and control

Procedia PDF Downloads 160
580 Analysis of Sustainability of Groundwater Resources in Rote Island, Indonesia under HADCM3 Global Model Climate Scenarios: Groundwater Flow Simulation and Proposed Adaptive Strategies

Authors: Dua K. S. Y. Klaas, Monzur A. Imteaz, Ika Sudiayem, Elkan M. E. Klaas, Eldav C. M. Klaas

Abstract:

Developing tailored management strategies to ensure the sustainability of groundwater resources under climate and demographic changes is critical for tropical karst islands, where relatively small watersheds and highly porous soils make this natural resource highly susceptible and thus very sensitive to those changes. In this study, long-term impacts of climate variability on groundwater recharge and discharge at the Oemau spring, Rote Island, Indonesia were investigated. Following calibration and validation of the groundwater model using the MODFLOW code, groundwater flow was simulated for the period 2020-2090 under HadCM3 global climate model (GCM) scenarios, using input weather variables downscaled by the Statistical DownScaling Model (SDSM). The reported analysis suggests that the sustainability of groundwater resources will be adversely affected by climate change during dry years. The area is projected to experience a variable decrease of 2.53-22.80% in spring discharge. A subsequent comprehensive set of management strategies, as palliative and adaptive efforts, was proposed for implementation by relevant stakeholders to assist the community in dealing with water deficits during the dry years. Three main adaptive strategies, namely socio-cultural, technical, and ecological measures, were proposed by incorporating the physical and socio-economic characteristics of the area. This study presents a blueprint for assessing groundwater sustainability under climate change scenarios and developing tailored management strategies to cope with adverse impacts of climate change, which may become fundamental necessities across other tropical karst islands in the future.
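
The abstract names the MODFLOW code but gives no model details; the sketch below shows, purely for illustration, how a very small steady-state MODFLOW-2005 model with a recharge input can be assembled with the FloPy package. The grid dimensions, hydraulic properties, and recharge value are invented placeholders, not the calibrated Oemau spring model.

```python
# Sketch: a toy steady-state MODFLOW-2005 model built with FloPy (illustrative only;
# all grid sizes and parameter values are placeholders, not the study's calibrated model).
import numpy as np
import flopy

mf = flopy.modflow.Modflow("toy_karst", exe_name="mf2005")      # needs MODFLOW-2005 installed
dis = flopy.modflow.ModflowDis(mf, nlay=1, nrow=20, ncol=20,
                               delr=100.0, delc=100.0, top=50.0, botm=0.0)
ibound = np.ones((1, 20, 20), dtype=int)
ibound[:, :, 0] = -1                                            # fixed-head column (e.g., coast)
bas = flopy.modflow.ModflowBas(mf, ibound=ibound, strt=45.0)
lpf = flopy.modflow.ModflowLpf(mf, hk=25.0)                     # hydraulic conductivity (m/d)
rch = flopy.modflow.ModflowRch(mf, rech=0.002)                  # recharge (m/d), climate-driven input
pcg = flopy.modflow.ModflowPcg(mf)
oc = flopy.modflow.ModflowOc(mf)

mf.write_input()
success, _ = mf.run_model(silent=True)
print("converged:", success)
```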

Keywords: climate change, groundwater, management strategies, tropical karst island, Rote Island, Indonesia

Procedia PDF Downloads 136
579 Exploring Tweeters’ Concerns and Opinions about FIFA Arab Cup 2021: An Investigation Study

Authors: Md. Rafiul Biswas, Uzair Shah, Mohammad Alkayal, Zubair Shah, Othman Althawadi, Kamila Swart

Abstract:

Background: Social media platforms play a significant role in the mediated consumption of sport, especially so for sport mega-events. The characteristics of Twitter data (e.g., user mentions, retweets, likes, #hashtags) bring users together in one arena and spread information widely and quickly. Analysis of Twitter data can reflect public attitudes, behavior, and sentiment toward a specific event on a larger scale than traditional surveys. Qatar is going to be the first Arab country to host the mega sports event FIFA World Cup 2022 (Q22). Qatar hosted the FIFA Arab Cup 2021 (FAC21) to serve as a preparation for the mega-event. Objectives: This study investigates public sentiments and experiences about FAC21 and provides insights to enhance the public experience for the upcoming Q22. Method: FAC21-related tweets were downloaded using the Twitter Academic Research API between 01 October 2021 and 18 February 2022. Tweets were divided into three different periods: before FAC21, T1 (01 Oct 2021 to 29 Nov 2021); during FAC21, T2 (30 Nov 2021 to 18 Dec 2021); and after FAC21, T3 (19 Dec 2021 to 18 Feb 2022). The collected tweets were preprocessed in several steps to prepare them for analysis: (1) duplicates and retweets were removed, (2) emojis, punctuation, and stop words were removed, and (3) tweets were normalized using word lemmatization. Then, rule-based classification was applied to remove irrelevant tweets. Next, the twitter-xlm-roberta-base model from Hugging Face was applied to identify the sentiment in the tweets. Further, state-of-the-art BERTopic modeling will be applied to identify trending topics over the different periods. Results: We downloaded 8,669,875 tweets posted by 2,728,220 unique users in different languages. Of those, 819,813 unique English tweets were selected for this study. After splitting into the three periods, 541,630, 138,876, and 139,307 tweets were from T1, T2, and T3, respectively. Most of the sentiments were neutral, around 60% in the different periods. However, the rate of negative sentiment (23%) was high compared to positive sentiment (18%). The analysis indicates negative concerns about FAC21. Therefore, we will apply BERTopic to identify public concerns. This study will permit the investigation of people’s expectations before FAC21 (e.g., stadium, transportation, accommodation, visa, tickets, travel, and other facilities) and ascertain whether these were met. Moreover, it will highlight public expectations and concerns. The findings of this study can assist the event organizers in enhancing implementation plans for Q22. Furthermore, this study can support policymakers with aligning strategies and plans to leverage outstanding outcomes.
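
A minimal sketch of the sentiment step is shown below, assuming the publicly available cardiffnlp/twitter-xlm-roberta-base-sentiment checkpoint on Hugging Face; the exact checkpoint and preprocessing used by the authors may differ.

```python
# Sketch: multilingual tweet sentiment with a Twitter-XLM-RoBERTa checkpoint
# (the model id below is an assumption about the checkpoint the abstract refers to).
from transformers import pipeline

sentiment = pipeline(
    "sentiment-analysis",
    model="cardiffnlp/twitter-xlm-roberta-base-sentiment",
)

tweets = [
    "Great atmosphere at the stadium tonight!",
    "Queues for the metro were far too long after the match.",
]
for tweet, result in zip(tweets, sentiment(tweets)):
    print(result["label"], f"{result['score']:.2f}", tweet)   # negative / neutral / positive
```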

Keywords: FIFA Arab Cup, FIFA, Twitter, machine learning

Procedia PDF Downloads 81
578 New Hardy Type Inequalities of Two-Dimensional on Time Scales via Steklov Operator

Authors: Wedad Albalawi

Abstract:

Mathematical inequalities have been at the core of mathematical study and are used in almost all branches of mathematics, as well as in various areas of science and engineering. The inequalities of Hardy, Littlewood, and Pólya were the first significant compilation of this kind. Their work presents fundamental ideas, results, and techniques, and it has had much influence on research in various branches of analysis. Since 1934, various inequalities have been produced and studied in the literature. Furthermore, some inequalities have been formulated for various operators; in 1989, weighted Hardy inequalities were obtained for integration operators. Weighted estimates were then obtained for Steklov operators that were used in the solution of the Cauchy problem for the wave equation. They were improved upon in 2011 to include the boundedness of integral operators from the weighted Sobolev space to the weighted Lebesgue space. Some inequalities have been demonstrated and improved using the Hardy–Steklov operator. Recently, many integral inequalities have been improved using differential operators. The Hardy inequality has been one of the tools used to study the integrability of solutions of differential equations. Dynamic inequalities of Hardy and Copson have then been extended and improved using various integral operators. These inequalities are interesting to apply in different fields of mathematics (function spaces, partial differential equations, mathematical modeling). Some results have appeared involving Copson and Hardy inequalities on time scales, yielding new special versions of them. A time scale is an arbitrary nonempty closed subset of the real numbers. Dynamic inequalities on time scales have received a lot of attention in the literature and have become a major field in pure and applied mathematics. There are many applications of dynamic equations on time scales to quantum mechanics, electrical engineering, neural networks, heat transfer, combinatorics, and population dynamics. This study focuses on Hardy and Copson inequalities, using the Steklov operator on time scales in double integrals to obtain special cases of time-scale inequalities of Hardy and Copson in higher dimensions. The advantage of this study is that it uses the one-dimensional classical Hardy inequality to obtain higher-dimensional time-scale versions that will be applied in the solution of the Cauchy problem for the wave equation. In addition, the obtained inequalities have various applications involving discontinuous domains, such as bug populations, phytoremediation of metals, wound healing, and maximization problems. The proofs can be carried out by introducing restrictions on the operator in several cases. Concepts of time-scale calculus will be used, which allow many problems from the theories of differential and difference equations to be unified and extended. In addition, the chain rule, some properties of multiple integrals on time scales, Fubini-type theorems, and Hölder's inequality will be used.
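
For reference, the one-dimensional classical Hardy inequality that the study builds on can be stated as follows (standard form for p > 1 and non-negative f); the time-scale and Steklov-operator versions discussed above generalize this statement.

```latex
\int_{0}^{\infty}\left(\frac{1}{x}\int_{0}^{x} f(t)\,dt\right)^{p} dx
\;\le\; \left(\frac{p}{p-1}\right)^{p}\int_{0}^{\infty} f(x)^{p}\,dx,
\qquad p>1,\ f\ge 0 .
```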

Keywords: time scales, inequality of Hardy, inequality of Copson, Steklov operator

Procedia PDF Downloads 77
577 The Evaluation of the Cognitive Training Program for Older Adults with Mild Cognitive Impairment: Protocol of a Randomized Controlled Study

Authors: Hui-Ling Yang, Kuei-Ru Chou

Abstract:

Background: Studies show that cognitive training can effectively delay cognitive decline. However, there are several gaps in the previous studies of cognitive training in mild cognitive impairment: 1) previous studies enrolled mostly healthy older adults, with few recruiting older adults with cognitive impairment; 2) they also had limited generalizability and lacked long-term follow-up data and measurements of the impact on activities of daily living; moreover, only 37% were randomized controlled trials (RCTs); 3) limited cognitive training has been specifically developed for mild cognitive impairment. Objective: This study sought to investigate the changes in cognitive function, activities of daily living and degree of depressive symptoms in older adults with mild cognitive impairment after cognitive training. Methods: This double-blind randomized controlled study has a 2-arm parallel group design. Study subjects are older adults diagnosed with mild cognitive impairment in residential care facilities. A total of 124 subjects will be randomized, by permuted block randomization, into an intervention group (cognitive training, CT) or an active control group (passive information activities, PIA). Therapeutic adherence, sample attrition rate, medication compliance and adverse events will be monitored during the study period, and missing data analyzed using intent-to-treat analysis (ITT). Results: Training sessions of the CT group are 45 minutes/day, 3 days/week, for 12 weeks (36 sessions in total). The schedule of the active control group is the same as that of the CT group (45 min/day, 3 days/week, for 12 weeks, for a total of 36 sessions). The primary outcome is cognitive function, using the Mini-Mental Status Examination (MMSE); the secondary outcome indicators are: 1) activities of daily living, using Lawton’s Instrumental Activities of Daily Living (IADL) scale, and 2) degree of depressive symptoms, using the Geriatric Depression Scale-Short Form (GDS-SF). Latent growth curve modeling will be used in the repeated measures statistical analysis to estimate the trajectory of improvement by examining the rate and pattern of change in cognitive function, activities of daily living and degree of depressive symptoms for intervention efficacy over time, and the effects will be evaluated at immediate post-test, 3 months, 6 months and one year after the last session. Conclusions: We constructed a rigorous CT program adhering to the Consolidated Standards of Reporting Trials (CONSORT) reporting guidelines. We expect to determine the improvement in cognitive function, activities of daily living and degree of depressive symptoms of older adults with mild cognitive impairment after using the CT.

Keywords: mild cognitive impairment, cognitive training, randomized controlled study

Procedia PDF Downloads 428
576 Effects of Temperature and the Use of Bacteriocins on Cross-Contamination from Animal Source Food Processing: A Mathematical Model

Authors: Benjamin Castillo, Luis Pastenes, Fernando Cerdova

Abstract:

The contamination of food by microbial agents is a common problem in the industry, especially regarding the elaboration of animal source products. Incorrect handling of the machinery or of the raw materials can cause a decrease in production or an epidemiological outbreak due to intoxication. In order to improve food product quality, different methods have been used to reduce or, at least, to slow down the growth of the pathogens, especially spoilage, infectious, or toxigenic bacteria. These methods are usually carried out under low temperatures and short processing times (abiotic agents), along with the application of antibacterial substances such as bacteriocins (biotic agents), in a controlled and efficient way that fulfills the purpose of bacterial control without damaging the final product. Therefore, the objective of the present study is to design a secondary mathematical model that allows the prediction of the impact of both biotic and abiotic factors associated with animal source food processing. In order to accomplish this objective, the authors propose a three-dimensional differential equation model, whose components are bacterial growth; release, production, and artificial incorporation of bacteriocins; and changes in the pH level of the medium. These three dimensions are constantly being influenced by the temperature of the medium. Secondly, the model is adapted to an idealized situation of cross-contamination in animal source food processing, with the study agents being both the animal product and the contact surface. Thirdly, stochastic simulations and a parametric sensitivity analysis are compared with reference data. The main result obtained from the analysis and simulations of the mathematical model is that, although bacterial growth can be stopped at lower temperatures, even lower ones are needed to eradicate it. However, this can be not only expensive but also counterproductive in terms of the quality of the raw materials; on the other hand, higher temperatures accelerate bacterial growth. In other respects, the use of bacteriocins is an effective alternative in the short and medium term. Moreover, a low pH level is an indicator of bacterial growth, since many spoilage bacteria are lactic acid bacteria. Lastly, the processing times are a secondary agent of concern when the rest of the aforementioned agents are under control. Our main conclusion is that adapting a mathematical model to the context of the industrial process can generate new tools that predict bacterial contamination, the impact of bacterial inhibition, and processing times. In addition, the proposed mathematical model provides a logistic framework of broad application, which can be replicated for non-meat food products, other pathogens, or even contamination by cross-contact with allergenic foods.
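
The abstract describes a three-dimensional, temperature-influenced differential equation model but does not give its equations; the sketch below is a hypothetical toy system in that spirit (logistic bacterial growth inhibited by bacteriocin, bacteriocin production, and acidification) integrated with SciPy. All functional forms and parameter values are invented for illustration and are not the authors' model.

```python
# Sketch: a toy 3-ODE system (bacteria N, bacteriocin B, pH) under a fixed temperature.
# The equations and parameters are hypothetical illustrations, not the authors' model.
import numpy as np
from scipy.integrate import solve_ivp

def rhs(t, y, temp):
    N, B, pH = y
    mu = 0.5 * np.exp(0.06 * (temp - 20.0))      # growth rate increases with temperature
    growth = mu * N * (1.0 - N / 1e9)            # logistic growth
    kill = 1e-9 * B * N                          # bacteriocin-mediated inhibition
    dN = growth - kill
    dB = 0.02 * N - 0.1 * B                      # release/production minus decay
    dpH = -1e-10 * N                             # acidification by lactic acid bacteria
    return [dN, dB, dpH]

sol = solve_ivp(rhs, (0.0, 48.0), [1e4, 0.0, 6.8], args=(10.0,), dense_output=True)
print("final bacterial load:", sol.y[0, -1], "final pH:", sol.y[2, -1])
```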

Keywords: bacteriocins, cross-contamination, mathematical model, temperature

Procedia PDF Downloads 125
575 Development of Scenarios for Sustainable Next Generation Nuclear System

Authors: Muhammad Minhaj Khan, Jaemin Lee, Suhong Lee, Jinyoung Chung, Johoo Whang

Abstract:

The Republic of Korea has been facing a serious storage crisis from nuclear waste generation, as At-Reactor (AR) temporary storage sites are about to reach saturation. Since the country is densely populated, with 491.78 persons per square kilometer, the construction of a high-level waste repository is not a feasible option. In order to tackle the waste storage problem, which is increasing at rates of 350 tHM/yr and 380 tHM/yr for the 20 PWRs and 4 PHWRs respectively, the study focuses on advancing current nuclear power plants to GEN-IV sustainable and ecological nuclear systems by burning TRUs (Pu, MAs). First, calculations were made to estimate the generation of SNF, including Pu and MA, from PWR and PHWR NPPs by using the IAEA code Nuclear Fuel Cycle Simulation System (NFCSS) for the years 2016, 2030 (including the saturation period of each site from 2024~2028), 2089, and 2109, as the number of NPPs will increase due to the high import cost of non-nuclear energy sources. Secondly, in order to produce environmentally sustainable nuclear energy systems, four scenarios to burn out the plutonium and MAs are analyzed, concentrating on burning MA only or MA and Pu together by utilizing SFR, LFR, and KALIMER-600 burner reactors after recycling the spent oxide fuel from PWRs through the pyroprocessing technology developed by the Korea Atomic Energy Research Institute (KAERI), which shows promising and sustainable future benefits by minimizing HLW generation with regard to waste amount, decay heat, and activity. Finally, concentrating on the front-end and back-end fuel cycles for the open and closed fuel cycles of PWR and Pyro-SFR respectively, an overall assessment has been made which evaluates the quantitative as well as economic competitiveness of SFR metallic fuel against the PWR once-through nuclear fuel cycle.

Keywords: GEN IV nuclear fuel cycle, nuclear waste, waste sustainability, transmutation

Procedia PDF Downloads 339
574 Developing a Sustainable Business Model for Platform-Based Applications in Small and Medium-Sized Enterprise Sawmills: A Systematic Approach

Authors: Franziska Mais, Till Gramberg

Abstract:

The paper presents the development of a sustainable business model for a platform-based application tailored for sawing companies in small and medium-sized enterprises (SMEs). The focus is on the integration of sustainability principles into the design of the business model to ensure a technologically advanced, legally sound, and economically efficient solution. Easy2IoT is a research project that aims to enable companies in the prefabrication sheet metal and sheet metal processing industry to enter the Industrial Internet of Things (IIoT) with a low-threshold and cost-effective approach. The methodological approach of Easy2IoT includes an in-depth requirements analysis and customer interviews with stakeholders along the value chain. Based on these insights, actions, requirements, and potential solutions for smart services are derived. The structuring of the business ecosystem within the application plays a central role, whereby the roles of the partners, the management of the IT infrastructure and services, as well as the design of a sustainable operator model are considered. The business model is developed using the value proposition canvas, whereby a detailed analysis of the requirements for the business model is carried out, taking sustainability into account. This includes coordination with the business model patterns, according to Gassmann, and integration into a business model canvas for the Easy2IoT product. Potential obstacles and problems are identified and evaluated in order to formulate a comprehensive and sustainable business model. In addition, sustainable payment models and distribution channels are developed. In summary, the article offers a well-founded insight into the systematic development of a sustainable business model for platform-based applications in SME sawmills, with a particular focus on the synergy of ecological responsibility and economic efficiency.

Keywords: business model, sustainable business model, IIoT, IIoT-platform, industrie 4.0, big data

Procedia PDF Downloads 50
573 A Comprehensive Finite Element Model for Incremental Launching of Bridges: Optimizing Construction and Design

Authors: Mohammad Bagher Anvari, Arman Shojaei

Abstract:

Incremental launching, a widely adopted bridge erection technique, offers numerous advantages for bridge designers. However, accurately simulating and modeling the dynamic behavior of the bridge during each step of the launching process proves to be tedious and time-consuming. The perpetual variation of internal forces within the deck during construction stages adds complexity, exacerbated further by considerations of other load cases, such as support settlements and temperature effects. As a result, there is an urgent need for a reliable, simple, economical, and fast algorithmic solution to model bridge construction stages effectively. This paper presents a novel Finite Element (FE) model that focuses on studying the static behavior of bridges during the launching process. Additionally, a simple method is introduced to normalize all quantities in the problem. The new FE model overcomes the limitations of previous models, enabling the simulation of all stages of launching, which conventional models fail to achieve due to underlying assumptions. By leveraging the results obtained from the new FE model, this study proposes solutions to improve the accuracy of conventional models, particularly for the initial stages of bridge construction that have been neglected in previous research. The research highlights the critical role played by the first span of the bridge during the initial stages, a factor often overlooked in existing studies. Furthermore, a new and simplified model termed the "semi-infinite beam" model, is developed to address this oversight. By utilizing this model alongside a simple optimization approach, optimal values for launching nose specifications are derived. The practical applications of this study extend to optimizing the nose-deck system of incrementally launched bridges, providing valuable insights for practical usage. In conclusion, this paper introduces a comprehensive Finite Element model for studying the static behavior of bridges during incremental launching. The proposed model addresses limitations found in previous approaches and offers practical solutions to enhance accuracy. The study emphasizes the importance of considering the initial stages and introduces the "semi-infinite beam" model. Through the developed model and optimization approach, optimal specifications for launching nose configurations are determined. This research holds significant practical implications and contributes to the optimization of incrementally launched bridges, benefiting both the construction industry and bridge designers.

Keywords: incremental launching, bridge construction, finite element model, optimization

Procedia PDF Downloads 76
572 A Mixed Method Approach for Modeling Entry Capacity at Rotary Intersections

Authors: Antonio Pratelli, Lorenzo Brocchini, Reginald Roy Souleyrette

Abstract:

A rotary is a traffic circle intersection where vehicles entering from branches give priority to circulating flow. Vehicles entering the intersection from converging roads move around the central island and weave out of the circle into their desired exiting branch. This creates merging and diverging conflicts between any entry and its successive exit, i.e., within a section. Therefore, rotary capacity models are usually based on the weaving of the different movements in any section of the circle, and the maximum rate of flow value is then related to each weaving section of the rotary. Nevertheless, the single-section capacity value does not lead to the typical performance characteristics of the intersection, such as the average entry delay, which is directly linked to its level of service. From another point of view, modern roundabout capacity models are based on the limitation of the flow entering from the single entrance due to the amount of flow circulating in front of the entrance itself. Modern roundabout capacity models also generally lead to a performance evaluation. This paper aims to incorporate a modern roundabout capacity model into an old rotary capacity method to obtain from the latter the single-entry capacity and ultimately achieve the related performance indicators. Put simply, the main objective is to calculate the average delay of each single roundabout entrance in order to apply the most common Highway Capacity Manual (HCM) criteria. The paper is organized as follows: firstly, the rotary and roundabout capacity models are sketched, and a brief introduction to the model combination technique is given with some practical instances. The following section is devoted to summarizing the old TRRL rotary capacity model and the most recent HCM 7th edition modern roundabout capacity model. Then, the two models are combined through an iteration-based algorithm, specially set up and linked to the concept of roundabout total capacity, i.e., the value reached under a traffic flow pattern leading to the simultaneous congestion of all roundabout entrances. The solution is the average delay for each entrance of the rotary, from which its respective level of service is estimated. In view of further experimental applications, at this research stage, a collection of existing rotary intersections operating under the priority-to-circle rule has already been started, both in the US and in Italy. The rotaries have been selected by direct inspection of aerial photos through a map viewer, namely Google Earth. Each instance has been recorded by location, general setting (urban or rural), and its main geometric patterns. Finally, concluding remarks are drawn, and a discussion on some further research developments is opened.
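
As a hedged illustration of the kind of entrance-level computation the combined procedure targets (not the paper's actual algorithm), the sketch below evaluates an HCM-style exponential entry capacity and the HCM control-delay expression for one entrance; the capacity coefficients are illustrative single-lane values and may differ from the HCM 7th edition figures.

```python
# Sketch: HCM-style entry capacity and control delay for a single roundabout entrance.
# Coefficients A and B are illustrative single-lane values (assumptions, not HCM-7th exact).
import math

def entry_capacity(v_circulating_pcph, A=1380.0, B=1.02e-3):
    """Exponential capacity model c = A * exp(-B * v_c), veh/h."""
    return A * math.exp(-B * v_circulating_pcph)

def control_delay(v_entry, capacity, T=0.25):
    """HCM-style average control delay (s/veh) for an analysis period of T hours."""
    x = v_entry / capacity                      # degree of saturation
    term = x - 1.0 + math.sqrt((x - 1.0) ** 2 + (3600.0 / capacity) * x / (450.0 * T))
    return 3600.0 / capacity + 900.0 * T * term + 5.0 * min(x, 1.0)

c = entry_capacity(v_circulating_pcph=600.0)
print(f"capacity = {c:.0f} veh/h, delay = {control_delay(450.0, c):.1f} s/veh")
```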

Keywords: mixed methods, old rotary and modern roundabout capacity models, total capacity algorithm, level of service estimation

Procedia PDF Downloads 64
571 Supply, Trade-offs, and Synergies Estimation for Regulating Ecosystem Services of a Local Forest

Authors: Jang-Hwan Jo

Abstract:

The supply management of ecosystem services of local forests is an essential issue as it is linked to the ecological welfare of local residents. This study aims to estimate the supply, trade-offs, and synergies of local forest regulating ecosystem services using a land cover classification map (LCCM) and a forest types map (FTM). Rigorous literature reviews and Expert Delphi analysis were conducted using the detailed variables of 1:5,000 LCCM and FTM. Land-use scoring method and Getis-Ord Gi* Analysis were utilized on detailed variables to propose a method for estimating supply, trade-offs, and synergies of the local forest regulating ecosystem services. The analysis revealed that the rank order (1st to 5th) of supply of regulating ecosystem services was Erosion prevention, Air quality regulation, Heat island mitigation, Water quality regulation, and Carbon storage. When analyzing the correlation between defined services of the entire city, almost all services showed a synergistic effect. However, when analyzing locally, trade-off effects (Heat island mitigation – Air quality regulation, Water quality regulation – Air quality regulation) appeared in the eastern and northwestern forest areas. This suggests the need to consider not only the synergy and trade-offs of the entire forest between specific ecosystem services but also the synergy and trade-offs of local areas in managing the regulating ecosystem services of local forests. The study result can provide primary data for the stakeholders to determine the initial conditions of the planning stage when discussing the establishment of policies related to the adjustment of the supply of regulating ecosystem services of the forests with limited access. Moreover, the study result can also help refine the estimation of the supply of the regulating ecosystem services with the availability of other forms of data.
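
As an illustration of the hot-spot statistic mentioned above (not the authors' implementation), the sketch below computes Getis-Ord Gi* z-scores for a toy set of ecosystem-service supply scores using the standard formulation in which the focal unit is included in its own neighborhood.

```python
# Sketch: Getis-Ord Gi* z-scores on a toy set of supply scores (illustrative only).
import numpy as np

def getis_ord_gi_star(values, weights):
    """values: (n,) attribute vector; weights: (n, n) binary spatial weights
    with w[i, i] = 1 so the focal unit is part of its own neighborhood (Gi*)."""
    x = np.asarray(values, dtype=float)
    w = np.asarray(weights, dtype=float)
    n = x.size
    xbar, s = x.mean(), x.std(ddof=0)
    wx = w @ x                                   # weighted sum around each unit
    wsum = w.sum(axis=1)
    wsq = (w ** 2).sum(axis=1)
    denom = s * np.sqrt((n * wsq - wsum ** 2) / (n - 1))
    return (wx - xbar * wsum) / denom            # z-scores; large positive = hot spot

# Toy example: 5 units in a row, rook-style neighbors plus self
w = np.eye(5)
for i in range(4):
    w[i, i + 1] = w[i + 1, i] = 1
print(getis_ord_gi_star([2.0, 2.5, 8.0, 7.5, 3.0], w))
```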

Keywords: ecosystem service, getis ord gi* analysis, land use scoring method, regional forest, regulating service, synergies, trade-offs

Procedia PDF Downloads 65
570 Relearning to Learn: Approaching Sustainability by Incorporating Inuit Vernacular and Biomimicry Architecture Principles

Authors: Hakim Herbane

Abstract:

Efforts to achieve sustainability in architecture must prove their effectiveness despite various methods attempted. Biomimicry, which looks to successful natural models to promote sustainability and innovation, faces obstacles in implementing sustainability despite its restorative approach to the relationship between humans and nature. In Nunavik, Inuit communities are exploring a sustainable production system that aligns with their aspirations and meets their demands of human, technological, technical, economic, and ecological factors. Biomimicry holds promise in line with Inuit philosophy, but its failure to implement sustainability requires further investigations to remedy its deficiencies. Our literature review underscores the importance of involving the community in defining sustainability and determining the best methods for its implementation. Additionally, vernacular architecture shows valuable orientations for achieving sustainability. Moreover, reintegrating Inuit communities and their traditional architectural practices, which have successfully balanced their built environment's diverse needs and constraints, could pave the way for a sustainable Inuit-built environment in Nunavik and advance architectural biomimicry principles simultaneously. This research aims at establishing a sustainability monitoring tool for Nordic architectural process by analyzing Inuit vernacular and biomimetic architecture, in addition to the input of stakeholders involved in Inuit architecture production in Nunavik, especially Inuit. The goal is to create a practical tool (an index) to aid in designing sustainable architecture, taking into account environmental, social, and economic perspectives. Furthermore, the study seeks to authenticate strong, sustainable design principles of vernacular and biomimetic architectures. The literature review uncovered challenges and identified new opportunities. The forthcoming discourse will focus on the careful and considerate incorporation of Inuit communities’ perceptions and indigenous building practices into our methodology and the latest findings of our research.

Keywords: sustainability, biomimicry, vernacular architecture, community involvement

Procedia PDF Downloads 37
569 Building an Opinion Dynamics Model from Experimental Data

Authors: Dino Carpentras, Paul J. Maher, Caoimhe O'Reilly, Michael Quayle

Abstract:

Opinion dynamics is a sub-field of agent-based modeling that focuses on people’s opinions and their evolution over time. Despite the rapid increase in the number of publications in this field, it is still not clear how to apply these models to real-world scenarios. Indeed, there is no agreement on how people update their opinion while interacting. Furthermore, it is not clear if different topics will show the same dynamics (e.g., more polarized topics may behave differently). These problems are mostly due to the lack of experimental validation of the models. Some previous studies started bridging this gap in the literature by directly measuring people’s opinions before and after the interaction. However, these experiments force people to express their opinion as a number instead of using natural language (and then, eventually, encoding it as numbers). This is not the way people normally interact, and it may strongly alter the measured dynamics. Another limitation of these studies is that they usually average all the topics together, without checking if different topics may show different dynamics. In our work, we collected data from 200 participants on 5 unpolarized topics. Participants expressed their opinions in natural language (“agree” or “disagree”). We also measured the certainty of their answer, expressed as a number between 1 and 10. However, this value was not shown to other participants to keep the interaction based on natural language. We then showed the opinion (and not the certainty) of another participant and, after a distraction task, we repeated the measurement. To make the data compatible with opinion dynamics models, we multiplied opinion and certainty to obtain a new parameter (here called “continuous opinion”) ranging from -10 to +10 (using agree=1 and disagree=-1). We first checked the 5 topics individually, finding that all of them behaved in a similar way despite having different initial opinion distributions. This suggested that the same model could be applied to different unpolarized topics. We also observed that people tend to maintain similar levels of certainty, even when they changed their opinion. This is a strong violation of what is suggested by common models, where people starting at, for example, +8, will first move towards 0 instead of directly jumping to -8. We also observed social influence, meaning that people exposed to “agree” were more likely to move to higher levels of continuous opinion, while people exposed to “disagree” were more likely to move to lower levels. However, we also observed that the effect of influence was smaller than the effect of random fluctuations. Also, this configuration is different from standard models, where noise, when present, is usually much smaller than the effect of social influence. Starting from this, we built an opinion dynamics model that explains more than 80% of the data variance. This model was also able to show the natural emergence of polarization from unpolarized states. This experimental approach offers a new way to build models grounded in experimental data. Furthermore, the model offers new insight into the fundamental terms of opinion dynamics models.
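
To make the reported dynamics concrete, the sketch below simulates a toy update rule consistent with the qualitative findings: continuous opinion is opinion times certainty on a -10 to +10 scale, social influence nudges opinions toward the partner's expressed side, and random fluctuations are larger than the influence term. The parameter values are illustrative assumptions, not the fitted model.

```python
# Sketch: toy agent-based update consistent with the described findings (illustrative only).
import numpy as np

rng = np.random.default_rng(42)
n_agents, n_steps = 200, 50
opinion = rng.choice([-1, 1], size=n_agents)          # "disagree" / "agree"
certainty = rng.integers(1, 11, size=n_agents)        # 1..10
continuous = opinion * certainty                      # -10 .. +10

influence, noise_sd = 0.3, 1.0                        # noise larger than influence
for _ in range(n_steps):
    partners = rng.permutation(n_agents)              # random pairwise exposure
    shown = np.sign(continuous[partners])             # partner's expressed opinion only
    continuous = continuous + influence * shown + rng.normal(0.0, noise_sd, n_agents)
    continuous = np.clip(continuous, -10, 10)

print("mean opinion:", continuous.mean(), "share positive:", (continuous > 0).mean())
```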

Keywords: experimental validation, micro-dynamics rule, opinion dynamics, update rule

Procedia PDF Downloads 96
568 Assessing the Material Determinants of Cavity Polariton Relaxation using Angle-Resolved Photoluminescence Excitation Spectroscopy

Authors: Elizabeth O. Odewale, Sachithra T. Wanasinghe, Aaron S. Rury

Abstract:

Cavity polaritons form when molecular excitons strongly couple to photons in carefully constructed optical cavities. These polaritons, which are hybrid light-matter states possessing a unique combination of photonic and excitonic properties, present the opportunity to manipulate the properties of various semiconductor materials. The systematic manipulation of materials through polariton formation could potentially improve the functionalities of many optoelectronic devices such as lasers, light-emitting diodes, photon-based quantum computers, and solar cells. However, the prospects of leveraging polariton formation for novel devices and device operation depend on more complete connections between the properties of molecular chromophores, and the hybrid light-matter states they form, which remains an outstanding scientific goal. Specifically, for most optoelectronic applications, it is paramount to understand how polariton formation affects the spectra of light absorbed by molecules coupled strongly to cavity photons. An essential feature of a polariton state is its dispersive energy, which occurs due to the enhanced spatial delocalization of the polaritons relative to bare molecules. To leverage the spatial delocalization of cavity polaritons, angle-resolved photoluminescence excitation spectroscopy was employed in characterizing light emission from the polaritonic states. Using lasers of appropriate energies, the polariton branches were resonantly excited to understand how molecular light absorption changes under different strong light-matter coupling conditions. Since an excited state has a finite lifetime, the photon absorbed by the polariton decays non-radiatively into lower-lying molecular states, from which radiative relaxation to the ground state occurs. The resulting fluorescence is collected across several angles of excitation incidence. By modeling the behavior of the light emission observed from the lower-lying molecular state and combining this result with the output of angle-resolved transmission measurements, inferences are drawn about how the behavior of molecules changes when they form polaritons. These results show how the intrinsic properties of molecules, such as the excitonic lifetime, affect the rate at which the polaritonic states relax. While it is true that the lifetime of the photon mediates the rate of relaxation in a cavity, the results from this study provide evidence that the lifetime of the molecular exciton also limits the rate of polariton relaxation.
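
For context, the dispersive polariton energies probed by angle-resolved measurements are commonly described by the standard two-coupled-oscillator model shown below; this is textbook background rather than necessarily the authors' analysis model. Here E_x is the exciton energy, E_c(θ) the cavity photon energy at incidence angle θ, n_eff the effective intracavity refractive index, and ħΩ_R the Rabi splitting.

```latex
E_{\mathrm{LP,UP}}(\theta) \;=\; \frac{E_x + E_c(\theta)}{2}
\;\mp\; \frac{1}{2}\sqrt{\bigl(E_c(\theta)-E_x\bigr)^{2} + (\hbar\Omega_R)^{2}},
\qquad
E_c(\theta) \;=\; \frac{E_c(0)}{\sqrt{1 - \sin^{2}\theta / n_{\mathrm{eff}}^{2}}}.
```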

Keywords: fluorescence, molecules in cavities, optical cavity, photoluminescence excitation spectroscopy, strong coupling

Procedia PDF Downloads 55
567 Numerical Investigation on Transient Heat Conduction through Brine-Spongy Ice

Authors: S. R. Dehghani, Y. S. Muzychka, G. F. Naterer

Abstract:

The ice accretion of salt water on cold substrates creates brine-spongy ice. This type of ice is a mixture of pure ice and liquid brine. A real case of the creation of this type of ice is superstructure icing, which occurs on marine vessels and offshore structures in cold and harsh conditions. Transient heat transfer through this medium causes phase changes between brine pockets and pure ice. Salt rejection during the process of transient heat conduction increases the salinity of brine pockets to reach a local equilibrium state. In this process, changing the sensible heat of the ice and brine pockets is not the only effect of passing heat through the medium; latent heat plays an important role and affects the mechanism of heat transfer. In this study, a new analytical model for evaluating heat transfer through brine-spongy ice is suggested. This model considers heat transfer together with partial solidification and melting. Properties of brine-spongy ice are obtained using the properties of liquid brine and pure ice. A numerical solution using the Method of Lines discretizes the medium to obtain a set of ordinary differential equations. Boundary conditions are chosen using one of the applicable cases of this type of ice; one side is treated as a thermally insulated surface, and the other side is assumed to be suddenly subjected to a constant-temperature boundary. All cases are evaluated at temperatures between -20 °C and the freezing point of brine-spongy ice. Solutions are conducted using different salinities from 5 to 60 ppt. Time steps and space intervals are chosen properly to maintain the most stable and fast solution. Variation of temperature, volume fraction of brine and brine salinity versus time are the most important outputs of this study. Results show that transient heat conduction through brine-spongy ice can create a wide range of brine pocket salinities, from the initial salinity up to 180 ppt. The rate of variation of temperature is found to be slower for high-salinity cases. The maximum rate of heat transfer occurs at the start of the simulation. This rate decreases as time passes. Brine pockets are smaller in portions closer to the colder side than in those closer to the warmer side. At the start of the solution, the numerical scheme tends to develop instabilities. This is because of the sharp variation of temperature at the start of the process. Changing the intervals improves the unstable situation. The analytical model, combined with the numerical scheme, is capable of predicting the thermal behavior of brine-spongy ice. This model and its numerical solutions are important for modeling the process of freezing of salt water and ice accretion on cold structures.
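
As a stripped-down illustration of the Method of Lines setup described above (constant properties, no latent-heat or salinity coupling, so not the authors' full brine-spongy ice model), the sketch below discretizes the 1D heat equation with an insulated face on one side and a suddenly imposed cold temperature on the other, and integrates the resulting ODE system with SciPy.

```python
# Sketch: Method of Lines for 1D transient conduction, insulated on one side and
# suddenly cooled on the other (constant properties; illustrative, not the full model).
import numpy as np
from scipy.integrate import solve_ivp

L, nx = 0.05, 51                      # slab thickness (m), number of nodes
dx = L / (nx - 1)
alpha = 1.1e-6                        # thermal diffusivity (m^2/s), placeholder value
T_init, T_cold = -2.0, -20.0          # initial and imposed boundary temperatures (deg C)

def rhs(t, T):
    dTdt = np.zeros_like(T)
    dTdt[1:-1] = alpha * (T[2:] - 2.0 * T[1:-1] + T[:-2]) / dx**2
    dTdt[0] = alpha * 2.0 * (T[1] - T[0]) / dx**2    # insulated (zero-flux) face
    dTdt[-1] = 0.0                                   # fixed cold-temperature face
    return dTdt

T0 = np.full(nx, T_init)
T0[-1] = T_cold
sol = solve_ivp(rhs, (0.0, 3600.0), T0, method="BDF", t_eval=[60, 600, 3600])
print(sol.y[:, -1].round(2))          # temperature profile after one hour
```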

Keywords: method of lines, brine-spongy ice, heat conduction, salt water

Procedia PDF Downloads 207
566 Experimental and Computational Fluid Dynamic Modeling of a Progressing Cavity Pump Handling Newtonian Fluids

Authors: Deisy Becerra, Edwar Perez, Nicolas Rios, Miguel Asuaje

Abstract:

The Progressing Cavity Pump (PCP) is a type of positive displacement pump that is gaining importance as artificial lift equipment in heavy oil fields. The most commonly used PCP is the single-lobe pump, which consists of a single external helical rotor turning eccentrically inside a double internal helical stator. This type of pump was analyzed with an experimental and Computational Fluid Dynamic (CFD) approach using the DCAB031 model installed in a closed-loop arrangement. Experimental measurements were taken to determine the pressure rise and flow rate with a flow control valve installed at the outlet of the pump. The flow rate handled was measured with a FLOMEC-OM025 oval gear flowmeter. For each flow rate considered, the pump’s rotational speed and power input were controlled using an Invertek Optidrive E3 frequency drive. Once steady-state operation was attained, pressure rise measurements were taken with a Sper Scientific wide-range digital pressure meter. In this study, water and three Newtonian oils of different viscosities were tested at different rotational speeds. The CFD model was implemented in Star-CCM+ using an overset mesh that includes the relative motion between rotor and stator, which is one of the main contributions of the present work. The simulations provide detailed information about the pressure and velocity fields inside the device in laminar and unsteady regimes. The simulations agree well with the experimental data, with a Mean Squared Error (MSE) under 21%, and the Grid Convergence Index (GCI) calculated to validate the mesh was 2.5%. Three rotational speeds were evaluated (200, 300, 400 rpm), showing a directly proportional relationship between the rotational speed of the rotor and the calculated flow rate. The maximum production rates at these speeds were 3.8, 4.3, and 6.1 GPM for water, and 1.8, 2.5, and 3.8 GPM for the oil tested, respectively. Likewise, an inversely proportional relationship between fluid viscosity and pump performance was observed, since the viscous oils showed the lowest pressure rise and the lowest volumetric flow pumped, with a degradation of around 30% in pressure rise between performance curves. Finally, the Productivity Index (PI) remained approximately constant for the different speeds evaluated; however, it decreased from fluid to fluid as viscosity increased.
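The two validation metrics mentioned above can be illustrated with the short sketch below, which computes a normalized mean squared error between measured and simulated pressure rise and a Grid Convergence Index from three systematically refined meshes; all numerical inputs are assumed values, not data from this study.

```python
# Sketch of the two validation metrics: mean squared error between measured
# and simulated pressure rise, and a Grid Convergence Index from three
# systematically refined meshes (Richardson-extrapolation style).
# All input numbers are illustrative assumptions.
import numpy as np

# experimental vs CFD pressure rise (psi) at matching operating points, assumed
dp_exp = np.array([10.2, 18.5, 27.9, 35.4])
dp_cfd = np.array([9.6, 17.1, 25.8, 33.0])
mse = np.mean((dp_exp - dp_cfd) ** 2)
nmse_pct = 100 * mse / np.mean(dp_exp ** 2)   # normalized, as a percentage

# GCI from fine/medium/coarse mesh solutions of one target quantity, assumed
f1, f2, f3 = 27.9, 27.3, 26.1   # fine, medium, coarse results
r = 2.0                          # grid refinement ratio
p = np.log((f3 - f2) / (f2 - f1)) / np.log(r)              # observed order of accuracy
gci_fine = 1.25 * abs((f1 - f2) / f1) / (r**p - 1) * 100   # percent, safety factor 1.25

print(f"normalized MSE = {nmse_pct:.1f} %,  GCI(fine) = {gci_fine:.1f} %")
```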

Keywords: computational fluid dynamic, CFD, Newtonian fluids, overset mesh, PCP pressure rise

Procedia PDF Downloads 114
565 Mapping of Forest Cover Change in the Democratic Republic of the Congo

Authors: Armand Okende, Benjamin Beaumont

Abstract:

Introduction: Deforestation is a change in the structure and composition of flora and fauna that leads to a loss of biodiversity and of the production of goods and services, and to an increase in fires. It particularly concerns vast territories in tropical zones; this is the case of the territory of Bolobo in the current province of Maï-Ndombe in the Democratic Republic of Congo. The overall objective of this study is to show and quantitatively analyze the important forest changes that occurred between 2001 and 2018, since significant deforestation is taking place in this area. Methodology: Mapping and quantification are the methodological approaches put forward to assess deforestation and forest changes through satellite images and raster layers. These satellite data from Global Forest Watch are integrated into GIS software (GRASS GIS and Quantum GIS) to represent the loss of forest cover that has occurred and the various changes recorded (e.g., forest gain) in the territory of Bolobo. Results: The results quantify deforestation for the periods 2001-2006, 2007-2012, and 2013-2018 as the forest area, in hectares, lost each year. The change maps produced for these study periods show that the loss of forest area is gradually increasing. Conclusion: Knowledge of forest management and protection is a challenge for ensuring good management of forest resources. It is therefore advisable to carry out further studies to optimize the monitoring of forests and guarantee the ecological and economic functions they provide in the Congo Basin, particularly in the Democratic Republic of Congo. In addition, the cartographic approach, coupled with geographic information systems and remote sensing of the raster layers provided by Global Forest Watch, yields useful information for explaining the loss of forest areas.
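As an illustration of the quantification step, the hedged sketch below sums annual forest-loss area from a Global Forest Watch loss-year raster clipped to the study area; the file name, the assumption of a metric (equal-area) coordinate system, and the pixel-value convention (1 to 18 for 2001 to 2018) are stated assumptions rather than the authors' exact workflow.

```python
# Sketch of the quantification step: summing annual forest-loss area from a
# Global Forest Watch "loss year" raster clipped to the study area. The file
# name and the equal-area pixel size are illustrative assumptions.
import numpy as np
import rasterio

with rasterio.open("bolobo_lossyear.tif") as src:          # hypothetical clipped raster
    lossyear = src.read(1)                                  # 0 = no loss, 1..18 = 2001..2018
    pixel_area_ha = abs(src.res[0] * src.res[1]) / 10_000   # assumes a metric CRS

periods = {"2001-2006": range(1, 7),
           "2007-2012": range(7, 13),
           "2013-2018": range(13, 19)}

for label, years in periods.items():
    loss_ha = np.isin(lossyear, list(years)).sum() * pixel_area_ha
    print(f"{label}: {loss_ha:,.0f} ha of forest loss")
```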

Keywords: deforestation, loss year, forest change, remote sensing, drivers of deforestation

Procedia PDF Downloads 117
564 Monitoring Spatial Distribution of Blue-Green Algae Blooms with Underwater Drones

Authors: R. L. P. De Lima, F. C. B. Boogaard, R. E. De Graaf-Van Dinther

Abstract:

Blue-green algae blooms (cyanobacteria) are currently a relevant ecological problem being addressed by most water authorities in the Netherlands. They can affect recreation areas by producing unpleasant smells and toxins that can poison humans and animals (e.g., fish, ducks, dogs). Contamination events usually take place during summer months, and their frequency is increasing with climate change. Traditional monitoring of these bacteria is expensive, labor-intensive, and provides only limited (point-sampling) information about the spatial distribution of algae concentrations. Recently, a novel handheld sensor has allowed water authorities to speed up their algae surveying and alarm systems. This study converted that algae sensor into a mobile platform by combining it with an underwater remotely operated vehicle (also equipped with other sensors and cameras). This provides a spatial visualization (mapping) of variations in algae concentration within the area covered by the drone, and also with depth. Measurements took place at different locations in the Netherlands: i) a lake with thick silt layers at the bottom, a very eutrophic former seabed, and a frequent / intense mowing regime; ii) the outlet of waste water into a large reservoir; iii) an urban canal system. The results made it possible to identify probable dominant causes of blooms (i), provide recommendations for the placement of an outlet and reveal day-night differences in algae behavior (ii), and pinpoint areas of higher algae concentration (iii). Although further research is still needed to fully characterize these processes and to optimize the measuring tool (underwater drone developments / improvements), the method presented here can already provide valuable information about algae behavior and spatial / temporal variability, and it shows potential as an efficient monitoring system.
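The mapping step described above could, for example, be approximated by gridding the drone's georeferenced point readings into a continuous field, as in the hedged sketch below; the log file, its column names, and the sensor unit are hypothetical and stand in for the authors' actual data format.

```python
# Sketch of turning the drone's point measurements (position + sensor reading)
# into a spatial concentration map by gridding/interpolation. Column names and
# the input file are illustrative assumptions, not the authors' data format.
import numpy as np
import pandas as pd
from scipy.interpolate import griddata

df = pd.read_csv("rov_algae_log.csv")   # hypothetical log: x, y, depth, chl_ug_l
pts = df[["x", "y"]].to_numpy()
vals = df["chl_ug_l"].to_numpy()        # algae-pigment proxy reading, assumed unit

# regular grid covering the surveyed area
xi = np.linspace(pts[:, 0].min(), pts[:, 0].max(), 200)
yi = np.linspace(pts[:, 1].min(), pts[:, 1].max(), 200)
XI, YI = np.meshgrid(xi, yi)
field = griddata(pts, vals, (XI, YI), method="linear")

print("max interpolated concentration:", np.nanmax(field))
```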

Keywords: blue-green algae, cyanobacteria, underwater drones / ROV / AUV, water quality monitoring

Procedia PDF Downloads 186
563 Corrosion Protection and Failure Mechanism of ZrO₂ Coating on Zirconium Alloy Zry-4 under Varied LiOH Concentrations in Lithiated Water at 360°C and 18.5 MPa

Authors: Guanyu Jiang, Donghai Xu, Huanteng Liu

Abstract:

After the Fukushima-Daiichi accident, the development of accident tolerant fuel cladding materials to improve reactor safety has become a hot topic in the nuclear industry. ZrO₂ has a satisfactory neutron economy and can sustain the fission chain reaction process, which makes it a promising coating for zirconium alloy cladding. Maintaining good corrosion resistance in the primary coolant loop during normal operation of Pressurized Water Reactors is a prerequisite for ZrO₂ as a protective coating on zirconium alloy cladding. Research on the corrosion performance of ZrO₂ coatings in nuclear water chemistry is relatively scarce, and existing reports have failed to provide an in-depth explanation of the failure causes of ZrO₂ coatings. Herein, a detailed corrosion process of a ZrO₂ coating in lithiated water at 360 °C and 18.5 MPa is proposed based on experimental research and molecular dynamics simulation. The lithiated water used in the present work was deaerated, with a dissolved oxygen concentration of < 10 ppb. The concentration of Li (as LiOH) was set to 2.3 ppm, 70 ppm, and 500 ppm, respectively. Corrosion tests were conducted in a static autoclave. Modeling and the corresponding calculations were performed with Materials Studio software. Adsorption energies and dynamics parameters were calculated with the Energy and Dynamics tasks of the Forcite module, respectively. The protective effect and failure mechanism of the ZrO₂ coating on Zry-4 under varied LiOH concentrations were further revealed by comparison with the coating's corrosion performance in pure water (namely 0 ppm Li). At low LiOH concentrations, the ZrO₂ coating provided favorable corrosion protection, with some localized corrosion occurring. Factors influencing corrosion resistance mainly include pitting corrosion extension, enhanced Li⁺ permeation, short-circuit diffusion of O²⁻, and ZrO₂ phase transformation. In highly concentrated LiOH solutions, intergranular corrosion, internal oxidation, and perforation resulted in coating failure. Zr ions were released to the coating surface to form flocculent ZrO₂ and ZrO₂ clusters due to the strong diffusion and dissolution tendency of α-Zr in the Zry-4 substrate. Considering that the primary water of Pressurized Water Reactors usually contains about 2.3 ppm Li, the stability of ZrO₂ makes it a candidate fuel cladding coating material. Under unfavorable conditions with high Li concentrations, more boric acid should be added to alleviate caustic corrosion of the ZrO₂ coating once it is used. This work provides references for understanding the service behavior of nuclear coatings under variable water chemistry conditions and promotes the in-pile application of ZrO₂ coatings.
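For reference, the adsorption-energy bookkeeping commonly used with such force-field energy outputs is sketched below; the energy values are illustrative assumptions, not results of this study.

```python
# Sketch of the adsorption-energy bookkeeping typically used with force-field
# (Forcite-type) energy outputs: E_ads = E(surface + adsorbate) - E(surface) - E(adsorbate).
# The energy values below are illustrative assumptions, not results of the study.
def adsorption_energy(e_complex, e_surface, e_adsorbate):
    """More negative values indicate stronger (more favorable) adsorption."""
    return e_complex - e_surface - e_adsorbate

# e.g. an ion adsorbed on a ZrO2 surface slab (kcal/mol, assumed numbers)
print(adsorption_energy(e_complex=-1520.4, e_surface=-1480.2, e_adsorbate=-12.6))
```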

Keywords: ZrO₂ coating, Zry-4, corrosion behavior, failure mechanism, LiOH concentration

Procedia PDF Downloads 59
562 Novel EGFR Ectodomain Mutations and Resistance to Anti-EGFR and Radiation Therapy in H&N Cancer

Authors: Markus Bredel, Sindhu Nair, Hoa Q. Trummell, Rajani Rajbhandari, Christopher D. Willey, Lewis Z. Shi, Zhuo Zhang, William J. Placzek, James A. Bonner

Abstract:

Purpose: EGFR-targeted monoclonal antibodies (mAbs) provide clinical benefit in some patients with H&N squamous cell carcinoma (HNSCC), but others progress with minimal response. Missense mutations in the EGFR ectodomain (ECD) can be acquired under mAb therapy by mimicking the effect of large deletions on receptor untethering and activation. Little is known about the contribution of EGFR ECD mutations to EGFR activation and anti-EGFR response in HNSCC. Methods: We selected patient-derived HNSCC cells (UM-SCC-1) for resistance to the mAb Cetuximab (CTX) by repeated, stepwise exposure to mimic what may occur clinically and identified two concurrent EGFR ECD mutations (UM-SCC-1R). We examined the competence of the mutants to bind EGF ligand or CTX. We assessed the potential impact of the mutations through visual analysis of space-filling models of the native side chains in the original structures vs. their respective side-chain mutations. We performed CRISPR in combination with site-directed mutagenesis to test for the effect of the mutants on ligand-independent EGFR activation and sorting. We determined the effects on receptor internalization, endocytosis, downstream signaling, and radiation sensitivity. Results: UM-SCC-1R cells carried two non-synonymous missense mutations (G33S and N56K) mapping to domain I in or near the EGF binding pocket of the EGFR ECD. Structural modeling predicted that these mutants restrict the adoption of a tethered, inactive EGFR conformation while not permitting association of EGFR with the EGF ligand or CTX. Binding studies confirmed that the mutant, untethered receptor displayed a reduced affinity for both EGF and CTX but demonstrated sustained activation and presence at the cell surface with diminished internalization and sorting for endosomal degradation. Single- and double-mutant models demonstrated that the G33S mutant is dominant over the N56K mutant in its effect on EGFR activation and EGF binding. CTX-resistant UM-SCC-1R cells demonstrated cross-resistance to the mAb Panitumumab but, paradoxically, remained sensitive to the reversible receptor tyrosine kinase inhibitor Erlotinib. Conclusions: HNSCC cells can select for EGFR ECD mutations under EGFR mAb exposure that converge to trap the receptor in an open, constitutively activated state. These mutants impede the receptor’s competence to bind mAbs and EGF ligand and alter its endosomal trafficking, possibly explaining certain cases of clinical mAb and radiation resistance.

Keywords: head and neck cancer, EGFR mutation, resistance, cetuximab

Procedia PDF Downloads 74
561 The French Ekang Ethnographic Dictionary. The Quantum Approach

Authors: Henda Gnakate Biba, Ndassa Mouafon Issa

Abstract:

Dictionaries modeled on the Western model [designed for languages with a tonic accent] are not suitable for tonal languages and do not account for them phonologically, which is why this [prosodic and phonological] ethnographic dictionary was designed. It is a glossary that expresses the tones and the rhythm of words. It recreates exactly the speaking or singing of a tonal language and allows a non-speaker of the language to pronounce the words as if they were a native. It is a dictionary adapted to tonal languages. It was built from ethnomusicological theorems and phonological processes, following Jean-Jacques Rousseau's 1776 hypothesis that 'to say and to sing were once the same thing'. Each word in the French dictionary is given its corresponding word in the Ekaη language, and each Ekaη word is written on a musical staff. This ethnographic dictionary is also an inventive, original, and innovative research thesis: a contribution to the theoretical, musicological, ethnomusicological, and linguistic conceptualization of languages, giving rise to a dialogue between the social and cognitive sciences, artistic creation, and the question of modeling in the human sciences: mathematics, computer science, machine translation, and artificial intelligence. When this theory is applied to any folk-song text in a tonal language of the world, one can reconstruct not only the exact melody, rhythm, and harmonies of that song, as if it were known in advance, but also the exact speech of the language. The author believes that the issue of the disappearance of tonal languages and their preservation has been structurally resolved, as has one of the greatest cultural equations related to the composition and creation of tonal, polytonal, and random music. The experimentation confirming the theory led to a semi-digital, semi-analog application that translates the tonal languages of Africa (about 2,100 languages) into blues, jazz, world music, polyphonic music, tonal and atonal music, and deterministic and random music. To test this application, music reading and writing software is used to collect the data extracted from the author's mother tongue, already modeled in the musical staves saved in the ethnographic (semiotic) dictionary, for automatic translation (volume 2 of the book). Translation is done from writing to writing, from writing to speech, and from writing to music. Mode of operation: the user types a structured song text (chorus-verse) on the computer and requests a melody in blues, jazz, world music, variety, etc. The software runs, offers a choice of harmonies, and the user then selects a melody.
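A highly simplified, hypothetical sketch of the core idea (mapping the tone marks of a syllable sequence onto pitches so that the text can be rendered on a staff) is given below; the tone inventory, pitch choices, and example word are illustrative assumptions and do not reproduce the dictionary's actual transcription scheme.

```python
# Hypothetical sketch: mapping syllable tones of a tonal language onto pitches
# so that a text can be rendered musically. The tone inventory and pitch
# assignments are illustrative assumptions, not the dictionary's scheme.
TONE_TO_PITCH = {
    "H": "E5",       # high tone
    "M": "C5",       # mid tone
    "L": "A4",       # low tone
    "HL": "E5-A4",   # falling contour rendered as two notes
    "LH": "A4-E5",   # rising contour rendered as two notes
}

def syllables_to_pitches(syllables):
    """syllables: list of (text, tone) pairs -> list of (text, pitch) pairs."""
    return [(text, TONE_TO_PITCH.get(tone, "C5")) for text, tone in syllables]

phrase = [("e", "L"), ("kang", "H"), ("me", "M")]   # hypothetical tone-marked word
print(syllables_to_pitches(phrase))
```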

Keywords: music, language, entanglement, science, research

Procedia PDF Downloads 51
560 Modeling and Simulating Productivity Loss Due to Project Changes

Authors: Robert Pellerin, Michel Gamache, Remi Trudeau, Nathalie Perrier

Abstract:

The context of large engineering projects is particularly favorable to the appearance of engineering changes and contractual modifications. These elements are potential causes for claims. In this paper, we investigate one of the critical components of the claim management process: the calculation of the impact of changes in terms of productivity losses due to the need to accelerate some project activities. When project changes are initiated, delays can arise. Indeed, project activities are often executed in fast-tracking mode in an attempt to respect the completion date. But the acceleration of project execution and the resulting rework can entail important costs as well as induce productivity losses. In the past, numerous methods have been proposed to quantify the duration of delays, the gains achieved by project acceleration, and the loss of productivity. The calculations related to those changes can be divided into two categories: direct costs and indirect costs. Direct costs are easily quantifiable, as opposed to indirect costs, which are rarely taken into account when calculating the cost of an engineering change or contract modification, even though several research projects have addressed this subject. However, the proposed models have not yet been accepted by companies, nor have they been accepted in court. Those models require extensive data and are often seen as too specific to be used for all projects. These techniques also ignore resource constraints and the interdependencies between the causes of delays and the delays themselves. To resolve this issue, this research proposes a simulation model that mimics how major engineering changes or contract modifications are handled in large construction projects. The model replicates the use of overtime in a reactive scheduling mode in order to simulate the loss of productivity that occurs when a project change happens. Multiple tests were conducted to compare the results of the proposed simulation model with statistical analyses conducted by other researchers. Different scenarios were also run to determine the impact of the number of activities, the time of occurrence of the change, the availability of resources, and the type of project change on productivity loss. Our results demonstrate that the number of activities in the project is a critical variable influencing the productivity of a project. When changes occur, a large number of activities leads to a much lower productivity loss than a small number of activities. The speed at which productivity declines for 30-job projects is about 25 percent faster than for 120-job projects. The moment of occurrence of a change also has a significant impact on productivity: the sooner the change occurs, the lower the productivity of the labor force. The availability of resources also affects the productivity of a project when a change is implemented: there is a higher loss of productivity when the amount of resources is restricted.
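To make the mechanism concrete, the sketch below shows, under stated assumptions, how reactive use of overtime at reduced efficiency translates into extra labor hours (i.e., a productivity loss) for the work remaining after a change; the parameter values are illustrative and are not taken from the authors' model.

```python
# Minimal sketch of the mechanism the model simulates: once a change occurs,
# the remaining work is accelerated with overtime, and overtime hours are
# worked at reduced efficiency, so extra labor hours (a productivity loss)
# are needed. All parameter values are illustrative assumptions.
def extra_hours_needed(remaining_hours, overtime_share=0.25, overtime_efficiency=0.7):
    """Extra labor hours required to finish `remaining_hours` of work when part
    of the schedule is worked in overtime at reduced efficiency."""
    effective_output_per_hour = (1 - overtime_share) + overtime_share * overtime_efficiency
    required = remaining_hours / effective_output_per_hour
    return required - remaining_hours

remaining = 2000.0   # work hours left when the change is introduced, assumed
loss = extra_hours_needed(remaining)
print(f"extra hours due to overtime inefficiency: {loss:.0f} "
      f"({100 * loss / remaining:.1f} % productivity loss)")
```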

Keywords: engineering changes, indirect costs, overtime, productivity, scheduling, simulation

Procedia PDF Downloads 227
559 Cocoon Characterization of Sericigenous Insects in North-East India and Prospects

Authors: Tarali Kalita, Karabi Dutta

Abstract:

The North Eastern Region of India, with its diverse climatic conditions and wide range of ecological habitats, is an ideal natural abode for a good number of silk-producing insects. The cocoon is the economically important life stage from which silk is obtained. In recent years, silk-based biomaterials have gained considerable attention, and their performance depends on the structure and properties of the silkworm cocoons as well as the silk yarn. The present investigation deals with the morphological study of cocoons, including cocoon color, cocoon size, shell weight, and shell ratio, of eleven different species of silk insects collected from different regions of North East India. Scanning electron microscopy and X-ray photoelectron spectroscopy (XPS) were performed to determine the arrangement of silk threads in the cocoons and their elemental composition, respectively. Further, the collected cocoons were degummed and reeled or spun on a reeling machine or spinning wheel, and the filament length, linear density, and tensile strength were determined using a Universal Testing Machine. The study showed significant variation in cocoon color, cocoon shape, cocoon weight, and filament packaging. XPS analysis revealed the presence of the elements (mass %) C, N, O, Si, and Ca in varying amounts. The wild cocoons showed the presence of calcium oxalate crystals, which make the cocoons hard and require further treatment before reeling. In the present investigation, the highest strain (%) and toughness (g/den) were observed in Antheraea assamensis, which implies that muga silk has a more compact molecular packing. It is expected that this study will form the basis for further biomimetic studies to design and manufacture artificial fiber composites with novel morphologies and associated material properties.
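The fibre metrics reported above are typically derived from raw reeling and tensile-test data as in the minimal sketch below; the filament mass, length, gauge length, and load-elongation points are assumed values for illustration only.

```python
# Sketch of how the reported fibre metrics are typically derived from raw test
# data: linear density in denier, tenacity in g/den, strain at break, and
# toughness as the area under the load-elongation curve. All inputs are
# illustrative assumptions, not measurements from this study.
import numpy as np

fil_length_m = 400.0   # reeled filament length (m), assumed
fil_mass_g = 0.2       # filament mass (g), assumed
denier = fil_mass_g / fil_length_m * 9000.0   # grams per 9000 m

# load (gf) vs elongation (mm) from a tensile test, assumed gauge length 100 mm
elong_mm = np.array([0.0, 2.0, 4.0, 6.0, 8.0, 10.0])
load_gf = np.array([0.0, 8.0, 15.0, 21.0, 26.0, 28.0])
gauge_mm = 100.0

tenacity_g_den = load_gf.max() / denier
strain_pct = 100.0 * elong_mm[-1] / gauge_mm
# toughness per denier: area under the load vs. strain curve, normalized by linear density
toughness_g_den = np.trapz(load_gf, elong_mm / gauge_mm) / denier

print(f"{denier:.2f} den, {tenacity_g_den:.2f} g/den, "
      f"{strain_pct:.1f} % strain, toughness {toughness_g_den:.2f} g/den")
```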

Keywords: cocoon characterization, north-east India, prospects, silk characterization

Procedia PDF Downloads 70
558 Study of the Possibility of Adsorption of Heavy Metal Ions on the Surface of Engineered Nanoparticles

Authors: Antonina A. Shumakova, Sergey A. Khotimchenko

Abstract:

The relevance of this research is associated, on the one hand, with the ever-increasing volume of production and the expanding scope of application of engineered nanomaterials (ENMs) and, on the other hand, with the lack of sufficient scientific information on the nature of the interactions of nanoparticles (NPs) with components of biogenic and abiogenic origin. In particular, studying the effect of ENMs (TiO₂ NPs, SiO₂ NPs, Al₂O₃ NPs, fullerenol) on the toxicometric characteristics of common contaminants such as lead and cadmium is an important hygienic task, given the high probability of their joint presence in food products. Data were obtained characterizing a multidirectional change in the toxicity of model toxicants when they are co-administered with various types of ENMs. One explanation for this fact is the difference in the adsorption capacity of ENMs, which was further examined in in vitro studies. For this, a method was proposed based on in vitro modeling of conditions simulating the environment of the small intestine. The data obtained are in good agreement with the results of in vivo experiments: - with the combined administration of lead and TiO₂ NPs, there were no significant changes in the accumulation of lead in rat liver; in other organs (kidneys, spleen, testes, and brain), the lead content was lower than in animals of the control group; - when studying the combined effect of lead and Al₂O₃ NPs, a multiple and significant increase in the accumulation of lead in rat liver was observed with increasing doses of Al₂O₃ NPs; for other organs, the introduction of various doses of Al₂O₃ NPs did not significantly affect the bioaccumulation of lead; - with the combined administration of lead and SiO₂ NPs in different doses, there was no increase in lead accumulation in any of the studied organs. Based on the data obtained, it can be assumed that there are at least three scenarios for the combined effects of ENMs and chemical contaminants on the body: - ENMs bind contaminants quite firmly in the gastrointestinal tract, and such a complex becomes inaccessible (or less accessible) for absorption; in this case, the toxicity of both ENMs and contaminants can be expected to decrease; - the complex formed in the gastrointestinal tract is partially soluble and can penetrate biological membranes and/or physiological barriers of the body; in this case, ENMs can act as a kind of conductor for contaminants, increasing their penetration into the internal environment of the body and thereby increasing the toxicity of the contaminants; - ENMs and contaminants do not interact with each other in any way, so the toxicity of each is determined only by its quantity and does not depend on the quantity of the other component. The authors hypothesized that the degree of adsorption of various elements on the surface of ENMs may be a unique characteristic of their action, allowing a more accurate understanding of the processes occurring in a living organism.
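For context, the standard mass balance used to express how much of a metal ion is adsorbed onto a sorbent in a batch in vitro test is sketched below; the concentrations, volume, and sorbent mass are assumed values, not data from this study.

```python
# Sketch of the standard mass balance used to express how much of a metal ion
# is adsorbed onto nanoparticles in an in vitro batch test: removal efficiency
# and adsorption capacity q = (C0 - Ce) * V / m. Numbers are illustrative assumptions.
def adsorption_metrics(c0_mg_l, ce_mg_l, volume_l, sorbent_mass_g):
    removal_pct = 100.0 * (c0_mg_l - ce_mg_l) / c0_mg_l
    capacity_mg_g = (c0_mg_l - ce_mg_l) * volume_l / sorbent_mass_g
    return removal_pct, capacity_mg_g

# e.g. lead in a simulated small-intestine medium with a nanoparticle sorbent, assumed values
removal, q = adsorption_metrics(c0_mg_l=5.0, ce_mg_l=1.8, volume_l=0.05, sorbent_mass_g=0.01)
print(f"removal = {removal:.1f} %, q = {q:.1f} mg/g")
```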

Keywords: absorption, cadmium, engineered nanomaterials, lead

Procedia PDF Downloads 74