Search results for: comparative modeling
305 Regional Response of Crop Productivity to Global Warming - A Case Study of the Heat Stress and Cold Stress on UK Rapeseed Crop Over 1961-2020
Authors: Biao Hu, Mark E. J. Cutler, Alexandra C. Morel
Abstract:
Global climate change introduces both opportunities and challenges for crop productivity, with temperature stress, one of the most important meteorological factors impacting crop productivity, differing across latitudes and crop types. The development and productivity of crops are particularly impacted when temperatures occur outwith their preferred ranges, which has implications for the global agri-food sector. This study investigated the spatiotemporal dynamics of heat stress and cold stress on UK arable lands for rapeseed cropping between 1961 and 2020, using a 1 km spatial resolution temperature dataset. Stress indices were established, including a heat stress index (fHS), defined as the ratio of “Tmax - Tcrit_h” to “Tlimit_h - Tcrit_h”, where Tmax, Tcrit_h and Tlimit_h represent the daily maximum temperature (°C), critical high temperature threshold (°C) and limiting high temperature threshold (°C) of the rapeseed crop, respectively; cold degree days (CDD), defined as the difference between daily Tmin (minimum temperature) and Tcrit_l (critical low temperature threshold); and a normalized rapeseed production loss index (fRPL), defined as the product of fHS and the attainable rapeseed yield in the same land pixel. The values of fHS and CDD, the percentages of days experiencing each stress, and fRPL were investigated. Results showed that fHS and the areas impacted by heat stress increased over time during the flowering (April to May) and reproductive (April to July) stages, with the mean fHS negatively correlated with latitude. This pattern of increased heat stress agrees with previous research on rapeseed cropping, in which similar trends have been noted at the global scale in response to changes in climate. The decreasing number of CDD and frequency of cold stress suggest that cold stress decreased during the flowering, vegetative (September to March of the following year) and reproductive stages, and that the magnitude of cold stress in the south of the UK was smaller than in northern regions over the studied period. The decreasing CDD matches the observed decline in cold stress of rapeseed globally and of other crops, such as rice, in the northern hemisphere. Notably, compared with previous studies, which mainly tracked the trends of heat stress and cold stress individually, this study conducted a comparative analysis of the rates of their changes and found that heat stress of rapeseed crops in the UK was increasing at a faster rate than cold stress was decreasing during flowering. The increasing values of fRPL, with statistically significant differences (p < 0.05) between regions of the UK, suggested an increasing loss in rapeseed production due to heat stress over the studied period. The largest increasing trend in heat stress was observed in South-eastern England, where cold stress was also decreasing. While the present study observed a relatively slow increase in heat stress, there is a worrying trend of increasing heat stress for rapeseed cropping into the future, as in the other main rapeseed cropping systems in the northern hemisphere, including China, European countries, the US, and Canada. This study demonstrates the negative impact of global warming on rapeseed cropping, highlighting the need for adaptation and mitigation strategies for sustainable rapeseed cultivation across the globe.
Keywords: rapeseed, UK, heat stress, cold stress, global climate change, spatiotemporal analysis, production loss index
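For clarity, the three indices defined above can be written compactly as follows (notation as in the abstract; Y_att denotes the attainable rapeseed yield of the land pixel):

\[ f_{HS} = \frac{T_{max} - T_{crit\_h}}{T_{limit\_h} - T_{crit\_h}}, \qquad CDD = T_{min} - T_{crit\_l}, \qquad f_{RPL} = f_{HS} \times Y_{att} \]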
Procedia PDF Downloads 69
304 Acceleration of Adsorption Kinetics by Coupling Alternating Current with Adsorption Process onto Several Adsorbents
Authors: A. Kesraoui, M. Seffen
Abstract:
Applications of adsorption onto activated carbon for water treatment are well known. The process has been demonstrated to be widely effective for removing dissolved organic substances from wastewaters, but this treatment has a major drawback: its high operating cost. The main goal of our research work is to improve the retention capacity of Tunisian biomass for the depollution of industrial wastewater and the retention of pollutants considered toxic. The biosorption process is based on the retention of molecules and ions onto a solid surface composed of biological materials. Evaluating the potential use of these materials is important in order to propose an alternative to the generally expensive adsorption processes used to remove organic compounds. Indeed, these materials are very abundant in nature and are low cost. Certainly, the biosorption process is effective at removing pollutants, but it exhibits slow kinetics. Improving biosorption rates is a challenge in making this process competitive with oxidation and with adsorption onto lignocellulosic fibers. In this context, alternating current appears as a new, original and very interesting alternative for accelerating chemical reactions. Our main goal is to accelerate the retention of dyes (indigo carmine, methylene blue) and phenol by using this new alternative: alternating current. The adsorption experiments have been performed in a batch reactor by adding the adsorbents to 150 mL of pollutant solution at the desired concentration and pH. The electrical part of the setup comprises a current source that delivers an alternating voltage of 2 to 15 V; it is connected to a voltmeter that allows the voltage to be read. Two zinc electrodes were immersed in a 150 mL cell, with a distance of 4 cm between them. Thanks to alternating current, we succeeded in improving the performance of activated carbon by increasing the speed of the indigo carmine adsorption process and reducing the treatment time. On the other hand, we studied the influence of the alternating current on the biosorption rate of methylene blue onto Luffa cylindrica fibers and the hybrid material (Luffa cylindrica-ZnO). The results showed that the alternating current accelerated the biosorption rate of methylene blue onto Luffa cylindrica and the Luffa cylindrica-ZnO hybrid material and increased the adsorbed amount of methylene blue on both adsorbents. In order to improve the removal of phenol, we coupled the alternating current with biosorption onto two adsorbents: Luffa cylindrica and the hybrid material (Luffa cylindrica-ZnO). In fact, the alternating current succeeded in improving the performance of the adsorbents by increasing the speed of the adsorption process and the adsorption capacity and by reducing the processing time.
Keywords: adsorption, alternating current, dyes, modeling
Procedia PDF Downloads 164
303 Coil-Over Shock Absorbers Compared to Inherent Material Damping
Authors: Carina Emminger, Umut D. Cakmak, Evrim Burkut, Rene Preuer, Ingrid Graz, Zoltan Major
Abstract:
Damping accompanies us in everyday life and is used to protect (e.g., in shoes) and to make our life more comfortable (damping of unwanted motion) and calm (noise reduction). In general, damping is the absorption of energy, which is either stored in the material (vibration isolation systems) or changed into heat (vibration absorbers). In the case of the latter, the damping mechanism can be split into active, passive, and semi-active (a combination of active and passive). Active damping is required to enable almost perfect damping over the whole application range and is used, for instance, in sports cars. In contrast, passive damping is a response of the material to external loading. Consequently, the material composition has a huge influence on the damping behavior. For elastomers, the material behavior is inherently viscoelastic and both temperature and frequency dependent. However, passive damping is not adjustable during application. Therefore, it is important to understand the fundamental viscoelastic behavior and the dissipation capability under external loading. The objective of this work is to assess the limitations and applicability of viscoelastic material damping for applications in which coil-over shock absorbers are currently utilized. Coil-over shock absorbers are usually made of various mechanical parts and incorporate fluids within the damper. These shock absorbers are well known and studied in the industry, and, when needed, they can be easily adjusted during their product lifetime. In contrast, dampers made of, ideally, a single material are more resource efficient, easier to service, and easier to manufacture. However, they lack adaptability and adjustability in service. Therefore, a case study with a remote-controlled sports car was conducted. The original shock absorbers were redesigned, and the spring-dashpot system was replaced by an elastomer and a thermoplastic elastomer, respectively. Here, five different elastomer formulations were used, including a pure and an iron-particle-filled thermoplastic poly(urethane) (TPU) and blends of two different poly(dimethyl siloxane)s (PDMS). In addition, the TPUs were tested as full and hollow dampers to investigate the difference between solid and structured materials. To obtain comparative results, each material formulation was comprehensively characterized by monotonic uniaxial compression tests, dynamic thermomechanical analysis (DTMA), and rebound resilience. Moreover, the new material-based shock absorbers were compared with spring-dashpot shock absorbers. The shock absorbers were analyzed under monotonic and cyclic loading. In addition, impact loading was applied to the remote-controlled car to measure the damping properties in operation. A servo-hydraulic high-speed linear actuator was utilized to apply the loads. The acceleration of the car and the displacement of specific measurement points were recorded during testing by a sensor and a high-speed camera, respectively. The results show that elastomers are suitable for damping applications, but they are temperature and frequency dependent, which limits the applicability of viscous material dampers. Feasible fields of application may be found in micromobility, such as bicycles, e-scooters, and e-skateboards. Furthermore, viscous material damping could be used to increase the inherent damping of a whole structure, e.g., in bicycle frames.
Keywords: damper structures, material damping, PDMS, TPU
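As background to the DTMA characterization mentioned above, the inherent damping of a viscoelastic material is conventionally quantified by the loss factor, i.e., the ratio of the loss modulus to the storage modulus,

\[ \tan\delta = \frac{E''}{E'} , \]

both of which are temperature and frequency dependent, which is precisely the limitation of passive material damping noted in the abstract.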
Procedia PDF Downloads 118
302 Using the Structural Equation Model to Explain the Effect of Supervisory Practices on Regulatory Density
Authors: Jill Round
Abstract:
In the economic system, the financial sector plays a crucial role as an intermediary between market participants, other financial institutions, and customers. Financial institutions such as banks have to make decisions to satisfy the demands of all the participants by keeping abreast of regulatory change. In recent years, progress has been made regarding frameworks and the development of rules, standards, and processes to manage risks in the banking sector. The increasing focus of regulators and policymakers on risk management, corporate governance, and the organization’s culture is of special interest, as it requires a well-resourced risk controlling function, compliance function, and internal audit function. In the past years, the relevance of these functions, which make up the so-called Three Lines of Defense, has moved from the backroom to the boardroom. The approach to the model can vary based on organizational characteristics. Due to the intense regulatory requirements, organizations operating in the financial sector have more mature models. In less regulated industries, there is more ambiguity about which tasks are allocated where. All parties strive to achieve their objectives through the effective management of risks and serve identical stakeholders. Today, the Three Lines of Defense model is used throughout the world. The research looks at trends and emerging issues in the professions of the Three Lines of Defense within the banking sector. The answers are believed to help explain the increasing regulatory requirements for the banking sector. As the number of supervisory practices increases, risk management requirements intensify and demand more regulatory compliance at the same time. Structural Equation Modeling (SEM) is applied, making use of surveys conducted in the research field. It aims to describe (i) the theoretical model regarding the applicable linear relationships and (ii) the causal relationships between multiple predictors (exogenous variables) and multiple dependent variables (endogenous variables), while taking into consideration (iii) the unobservable latent variables and (iv) the measurement errors. The surveys conducted in the research field suggest that the observable variables are caused by various latent variables. The SEM consists of 1) the measurement model and 2) the structural model. There is a detectable cause-effect relationship between the performed supervisory practices and the increasing scope of regulation. Supervisory practices reinforce regulatory density. In the past, controls were put in place after supervisory practices were conducted or incidents occurred. In further research, it is of interest to examine whether risk management is proactive, reactive to incidents and supervisory practices, or both at the same time.
Keywords: risk management, structural equation model, supervisory practice, three lines of defense
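For reference, the two sub-models named above are conventionally written in standard SEM notation as

\[ x = \Lambda_x \xi + \delta, \qquad y = \Lambda_y \eta + \varepsilon \qquad \text{(measurement model)} \]
\[ \eta = B\eta + \Gamma\xi + \zeta \qquad \text{(structural model)} \]

where ξ and η are the exogenous and endogenous latent variables, x and y their observed indicators, and δ, ε and ζ the measurement and structural error terms referred to in the abstract.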
Procedia PDF Downloads 227
301 Impact of COVID-19 on Study Migration
Authors: Manana Lobzhanidze
Abstract:
The COVID-19 pandemic has made significant changes in migration processes, notably in the study migration process. The constraints caused by the COVID-19 pandemic led to changes in the process of studying, which negatively affected its efficiency. The educational process has partially or completely shifted to distance learning; both labor and study migration have increased significantly around the world. The employment and education markets have become global, and, consequently, a number of challenges have arisen for employers, researchers, and businesses. The role of preparing qualified personnel in achieving high productivity is justified, the benefits for employers and employees are assessed on the one hand, and the role of study migration in the country’s development is examined on the other hand. Research methods. The research is based on methods of analysis and synthesis, quantitative and qualitative approaches, groupings, relative and mean quantities, graphical representation, and comparison. In-depth interviews were conducted with experts to determine quantitative and qualitative indicators. Research findings. Factors affecting study migration are analysed in the paper, and the environment that stimulates migration is explored. One of the driving forces of migration is considered to be the desire to receive higher pay. Levels and indicators of study migration are studied by country. Comparative analysis has found that study migration rates are high in countries where the price of skilled labor is high. The productivity of individuals with low skills is low, which negatively affects the economic development of countries. It has been revealed that students leave the country to improve their skills during study migration. This process is evaluated as a positive event for a developing country, as individuals are given the opportunity to share the technology of developed countries, gain knowledge, and then introduce it in their own country. The downside of study migration is that only a small proportion of graduates return from developed economies to their home countries. The article concludes that countries with emerging economies devote fewer resources to research and development, while this is a priority in developed countries, allowing highly skilled individuals to use their skills efficiently. The paper also studies the national education system and examines the level of competition in the education and labor markets and the indicators of study migration. During the pandemic period, there was great demand for digital technologies. Open access to a variety of comprehensive platforms will significantly reduce study migration to other countries. As a forecast, it can be said that the intensity of the use of e-learning platforms will increase significantly in the post-pandemic period.
The paper analyzes the positive and negative effects of study migration on economic development, examines the challenges of study migration in light of the COVID-19 pandemic, suggests ways to avoid negative consequences, and develops recommendations for improving the study migration process in the post-pandemic period.
Keywords: study migration, COVID-19 pandemic, factors affecting migration, economic development, post-pandemic migration
Procedia PDF Downloads 129
300 Solid State Drive End to End Reliability Prediction, Characterization and Control
Authors: Mohd Azman Abdul Latif, Erwan Basiron
Abstract:
A flaw or drift from expected operational performance in one component (NAND, PMIC, controller, DRAM, etc.) may affect the reliability of the entire Solid State Drive (SSD) system. Therefore, it is important to ensure the required quality of each individual component through qualification testing specified using standards or user requirements. Qualification testing is time-consuming and comes at a substantial cost for product manufacturers. A highly technical team drawn from all the eminent stakeholders embarks on reliability prediction from the beginning of new product development, identifies critical-to-reliability parameters, performs full-blown characterization to embed margin into product reliability, and establishes controls to ensure the product reliability is sustainable in mass production. The paper discusses a comprehensive development framework covering the SSD end to end, from design to assembly, in-line inspection, and in-line testing, which makes it possible to predict and validate product reliability at the early stage of new product development. During the design stage, the SSD will go through an intense reliability margin investigation with a focus on assembly process attributes, process equipment control, in-process metrology, and the forward-looking product roadmap. Once these pillars are completed, the next step is to perform process characterization and build up reliability prediction modeling. Next, for the design validation process, a reliability prediction tool, specifically a solder joint simulator, will be established. The SSDs will be stratified into Non-Operating and Operating tests with a focus on solder joint reliability and connectivity/component latent failures, addressed by prevention through design intervention and by containment through the Temperature Cycle Test (TCT). Some of the SSDs will be subjected to physical solder joint analysis, namely Dye and Pry (DP) and Cross Section analysis. The results will be fed back to the simulation team for any corrective actions required to further improve the design. Once the SSD is validated and proven to work, the monitor phase will be implemented, whereby Design for Assembly (DFA) rules will be updated. At this stage, the design changes and the process and equipment parameters are under control. Predictable product reliability early in product development will enable on-time sample qualification delivery to customers, will optimize product development validation and development resources, and will avoid forced late investment to patch end-of-life product failures. Understanding the critical-to-reliability parameters earlier will allow a focus on increasing the product margin, which will increase customer confidence in product reliability.
Keywords: e2e reliability prediction, SSD, TCT, solder joint reliability, NUDD, connectivity issues, qualifications, characterization and control
Procedia PDF Downloads 181
299 The Return of the Rejected Kings: A Comparative Study of Governance and Procedures of Standards Development Organizations under the Theory of Private Ordering
Authors: Olia Kanevskaia
Abstract:
Standardization has been in the limelight of numerous academic studies. Typically described as ‘any set of technical specifications that either provides or is intended to provide a common design for a product or process’, standards do not only set quality benchmarks for products and services, but also spur competition and innovation, resulting in advantages for manufacturers and consumers. Their contribution to globalization and technology advancement is especially crucial in the Information and Communication Technology (ICT) and telecommunications sector, which is also characterized by a weaker state-regulation and expert-based rule-making. Most of the standards developed in that area are interoperability standards, which allow technological devices to establish ‘invisible communications’ and to ensure their compatibility and proper functioning. This type of standard supports a large share of our daily activities, ranging from traffic coordination by traffic lights to the connection to Wi-Fi networks, transmission of data via Bluetooth or USB and building the network architecture for the Internet of Things (IoT). A large share of ICT standards is developed in the specialized voluntary platforms, commonly referred to as Standards Development Organizations (SDOs), which gather experts from various industry sectors, private enterprises, governmental agencies and academia. The institutional architecture of these bodies can vary from semi-public bodies, such as European Telecommunications Standards Institute (ETSI), to industry-driven consortia, such as the Internet Engineering Task Force (IETF). The past decades witnessed a significant shift of standard setting to those institutions: while operating independently from the states regulation, they offer a rather informal setting, which enables fast-paced standardization and places technical supremacy and flexibility of standards above other considerations. Although technical norms and specifications developed by such nongovernmental platforms are not binding, they appear to create significant regulatory impact. In the United States (US), private voluntary standards can be used by regulators to achieve their policy objectives; in the European Union (EU), compliance with harmonized standards developed by voluntary European Standards Organizations (ESOs) can grant a product a free-movement pass. Moreover, standards can de facto manage the functioning of the market when other regulative alternatives are not available. Hence, by establishing (potentially) mandatory norms, SDOs assume regulatory functions commonly exercised by States and shape their own legal order. The purpose of this paper is threefold: First, it attempts to shed some light on SDOs’ institutional architecture, focusing on private, industry-driven platforms and comparing their regulatory frameworks with those of formal organizations. Drawing upon the relevant scholarship, the paper then discusses the extent to which the formulation of technological standards within SDOs constitutes a private legal order, operating in the shadow of governmental regulation. Ultimately, this contribution seeks to advise whether a state-intervention in industry-driven standard setting is desirable, and whether the increasing regulatory importance of SDOs should be addressed in legislation on standardization.Keywords: private order, standardization, standard-setting organizations, transnational law
Procedia PDF Downloads 166
298 Exploring Tweeters’ Concerns and Opinions about FIFA Arab Cup 2021: An Investigation Study
Authors: Md. Rafiul Biswas, Uzair Shah, Mohammad Alkayal, Zubair Shah, Othman Althawadi, Kamila Swart
Abstract:
Background: Social media platforms play a significant role in the mediated consumption of sport, especially so for sport mega-events. The characteristics of Twitter data (e.g., user mentions, retweets, likes, #hashtags) bring users together in one place and spread information widely and quickly. Analysis of Twitter data can reflect public attitudes, behavior, and sentiment toward a specific event on a larger scale than traditional surveys. Qatar is going to be the first Arab country to host the mega sports event FIFA World Cup 2022 (Q22). Qatar hosted the FIFA Arab Cup 2021 (FAC21) to serve as a preparation for the mega-event. Objectives: This study investigates public sentiments and experiences about FAC21 and provides insights to enhance public experiences for the upcoming Q22. Method: FAC21-related tweets were downloaded using the Twitter Academic Research API between 01 October 2021 and 18 February 2022. Tweets were divided into three periods: before FAC21, T1 (01 Oct 2021 to 29 Nov 2021); during FAC21, T2 (30 Nov 2021 to 18 Dec 2021); and after FAC21, T3 (19 Dec 2021 to 18 Feb 2022). The collected tweets were preprocessed in several steps to prepare them for analysis: (1) duplicates and retweets were removed, (2) emojis, punctuation, and stop words were removed, and (3) tweets were normalized using word lemmatization. Then, rule-based classification was applied to remove irrelevant tweets. Next, the twitter-XLM-roBERTa-base model from Huggingface was applied to identify the sentiment in the tweets. Further, state-of-the-art BERTopic modeling will be applied to identify trending topics over the different periods. Results: We downloaded 8,669,875 tweets posted by 2,728,220 unique users in different languages. Of those, 819,813 unique English tweets were selected for this study. After splitting into the three periods, 541,630, 138,876, and 139,307 tweets were from T1, T2, and T3, respectively. Most of the sentiments were neutral, around 60% in the different periods. However, the rate of negative sentiment (23%) was high compared to positive sentiment (18%). The analysis indicates negative concerns about FAC21. Therefore, we will apply BERTopic to identify public concerns. This study will permit the investigation of people’s expectations before FAC21 (e.g., stadiums, transportation, accommodation, visas, tickets, travel, and other facilities) and ascertain whether these were met. Moreover, it will highlight public expectations and concerns. The findings of this study can assist the event organizers in enhancing implementation plans for Q22. Furthermore, this study can support policymakers in aligning strategies and plans to leverage outstanding outcomes.
Keywords: FIFA Arab Cup, FIFA, Twitter, machine learning
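A minimal sketch of the sentiment-labelling step described above, assuming the publicly available Twitter-tuned XLM-RoBERTa checkpoint and illustrative cleaning rules (not the authors' exact preprocessing), could look like this:

import re
from transformers import pipeline

# Assumed checkpoint: the publicly available Twitter-tuned XLM-RoBERTa sentiment model.
sentiment = pipeline(
    "sentiment-analysis",
    model="cardiffnlp/twitter-xlm-roberta-base-sentiment",
)

def clean(tweet: str) -> str:
    """Drop URLs and user mentions and collapse whitespace (lemmatization omitted here)."""
    tweet = re.sub(r"https?://\S+|@\w+", "", tweet)
    return re.sub(r"\s+", " ", tweet).strip()

tweets = [
    "Can't wait for the FIFA Arab Cup matches in Qatar!",
    "The ticket website has been down all morning...",
]
for t in tweets:
    print(sentiment(clean(t)))  # e.g. [{'label': 'positive', 'score': 0.93}]

The same pipeline object can then be mapped over the T1, T2 and T3 tweet sets to reproduce the per-period sentiment counts.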
Procedia PDF Downloads 104
297 New Hardy Type Inequalities of Two-Dimensional on Time Scales via Steklov Operator
Authors: Wedad Albalawi
Abstract:
Mathematical inequalities have been at the core of mathematical study and are used in almost all branches of mathematics as well as in various areas of science and engineering. The treatise on inequalities by Hardy, Littlewood and Polya was the first significant work of this kind; it presents fundamental ideas, results and techniques and has had much influence on research in various branches of analysis. Since 1934, various inequalities have been produced and studied in the literature. Furthermore, some inequalities have been formulated in terms of operators; in 1989, weighted Hardy inequalities were obtained for integration operators. Later, weighted estimates were obtained for Steklov operators, which were used in the solution of the Cauchy problem for the wave equation. These were improved upon in 2011 to include the boundedness of integral operators from the weighted Sobolev space to the weighted Lebesgue space. Some inequalities have been demonstrated and improved using the Hardy–Steklov operator. Recently, many integral inequalities have been improved using differential operators. The Hardy inequality has been one of the tools used to study the integrability of solutions of differential equations. Dynamic inequalities of Hardy and Copson type have since been extended and improved by means of various integral operators. These inequalities are interesting to apply in different fields of mathematics (function spaces, partial differential equations, mathematical modeling). Some results have appeared involving Copson and Hardy inequalities on time scales, yielding new special versions of them. A time scale is an arbitrary nonempty closed subset of the real numbers. Dynamic inequalities on time scales have received a lot of attention in the literature and have become a major field in pure and applied mathematics. There are many applications of dynamic equations on time scales to quantum mechanics, electrical engineering, neural networks, heat transfer, combinatorics, and population dynamics. This study focuses on Hardy and Copson inequalities, using the Steklov operator on time scales in double integrals to obtain special cases of time-scale inequalities of Hardy and Copson in higher dimensions. The advantage of this study is that it uses the one-dimensional classical Hardy inequality to obtain higher-dimensional time-scale versions that will be applied in the solution of the Cauchy problem for the wave equation. In addition, the obtained inequalities have various applications involving discontinuous domains, such as insect populations, phytoremediation of metals, wound healing, and maximization problems. The proofs are carried out by introducing restrictions on the operator in several cases. Concepts from time-scale calculus are used, which allow many problems from the theories of differential and difference equations to be unified and extended. In addition, the chain rule, some properties of multiple integrals on time scales, Fubini-type theorems, and Hölder’s inequality are used.
Keywords: time scales, inequality of Hardy, inequality of Copson, Steklov operator
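For context, the classical one-dimensional Hardy inequality on which the higher-dimensional time-scale versions are built reads, for p > 1 and f ≥ 0,

\[ \int_0^\infty \left( \frac{1}{x}\int_0^x f(t)\,dt \right)^{p} dx \;\le\; \left( \frac{p}{p-1} \right)^{p} \int_0^\infty f^{p}(x)\,dx . \]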
Procedia PDF Downloads 100
296 The Evaluation of the Cognitive Training Program for Older Adults with Mild Cognitive Impairment: Protocol of a Randomized Controlled Study
Authors: Hui-Ling Yang, Kuei-Ru Chou
Abstract:
Background: Studies show that cognitive training can effectively delay cognitive failure. However, there are several gaps in the previous studies of cognitive training in mild cognitive impairment: 1) previous studies enrolled mostly healthy older adults, with few recruiting older adults with cognitive impairment; 2) they also had limited generalizability and lacked long-term follow-up data and measurements of the activities of daily living functional impact. Moreover, only 37% were randomized controlled trials (RCT). 3) Limited cognitive training has been specifically developed for mild cognitive impairment. Objective: This study sought to investigate the changes in cognitive function, activities of daily living and degree of depressive symptoms in older adults with mild cognitive impairment after cognitive training. Methods: This double-blind randomized controlled study has a 2-arm parallel group design. Study subjects are older adults diagnosed with mild cognitive impairment in residential care facilities. 124 subjects will be randomized by the permuted block randomization, into intervention group (Cognitive training, CT), or active control group (Passive information activities, PIA). Therapeutic adherence, sample attrition rate, medication compliance and adverse events will be monitored during the study period, and missing data analyzed using intent-to-treat analysis (ITT). Results: Training sessions of the CT group are 45 minutes/day, 3 days/week, for 12 weeks (36 sessions each). The training of active control group is the same as CT group (45min/day, 3days/week, for 12 weeks, for a total of 36 sessions). The primary outcome is cognitive function, using the Mini-Mental Status Examination (MMSE); the secondary outcome indicators are: 1) activities of daily living, using the Lawton’s Instrumental Activities of Daily Living (IADLs) and 2) degree of depressive symptoms, using the Geriatric Depression Scale-Short form (GDS-SF). Latent growth curve modeling will be used in the repeated measures statistical analysis to estimate the trajectory of improvement by examining the rate and pattern of change in cognitive functions, activities of daily living and degree of depressive symptoms for intervention efficacy over time, and the effects will be evaluated immediate post-test, 3 months, 6 months and one year after the last session. Conclusions: We constructed a rigorous CT program adhering to the Consolidated Standards of Reporting Trials (CONSORT) reporting guidelines. We expect to determine the improvement in cognitive function, activities of daily living and degree of depressive symptoms of older adults with mild cognitive impairment after using the CT.Keywords: mild cognitive impairment, cognitive training, randomized controlled study
Procedia PDF Downloads 455
295 Effects of Temperature and the Use of Bacteriocins on Cross-Contamination from Animal Source Food Processing: A Mathematical Model
Authors: Benjamin Castillo, Luis Pastenes, Fernando Cerdova
Abstract:
The contamination of food by microbial agents is a common problem in the industry, especially regarding the processing of animal source products. Incorrect handling of the machinery or of the raw materials can cause a decrease in production or an epidemiological outbreak due to intoxication. In order to improve food product quality, different methods have been used to reduce or, at least, slow down the growth of pathogens, especially spoilage, infectious or toxigenic bacteria. These methods are usually carried out under low temperatures and short processing times (abiotic agents), along with the application of antibacterial substances such as bacteriocins (biotic agents), in a controlled and efficient way that fulfills the purpose of bacterial control without damaging the final product. Therefore, the objective of the present study is to design a secondary mathematical model that allows the prediction of the impact of both the biotic and abiotic factors associated with animal source food processing. In order to accomplish this objective, the authors propose a three-dimensional differential equation model whose components are bacterial growth; the release, production and artificial incorporation of bacteriocins; and changes in the pH level of the medium. These three dimensions are constantly influenced by the temperature of the medium. Secondly, the model is adapted to an idealized situation of cross-contamination in animal source food processing, with the study agents being both the animal product and the contact surface. Thirdly, the stochastic simulations and the parametric sensitivity analysis are compared with reference data. The main result obtained from the analysis and simulations of the mathematical model is that, although bacterial growth can be stopped at lower temperatures, even lower ones are needed to eradicate it. However, this can be not only expensive but also counterproductive in terms of the quality of the raw materials, while, on the other hand, higher temperatures accelerate bacterial growth. In other respects, bacteriocins are an effective alternative in the short and medium term. Moreover, an indicator of bacterial growth is a low pH level, since many spoilage bacteria are lactic acid bacteria. Lastly, processing times are a secondary concern when the rest of the aforementioned agents are under control. Our main conclusion is that, when a mathematical model is adapted to the context of the industrial process, it can generate new tools that predict bacterial contamination, the impact of bacterial inhibition, and processing times. In addition, the proposed mathematical modeling provides a logistic formulation of broad application, which can be replicated for non-meat food products, other pathogens, or even contamination by cross-contact with allergenic foods.
Keywords: bacteriocins, cross-contamination, mathematical model, temperature
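The abstract does not state the equations explicitly; a minimal sketch of a three-state model of the kind described (logistic bacterial growth with a temperature-dependent rate, inhibition by bacteriocins, and pH decline; all functional forms and parameter values below are illustrative assumptions, not the authors' calibrated model) might look like this:

import numpy as np
from scipy.integrate import solve_ivp

def rhs(t, y, T):
    # y = [N, B, pH]: bacterial count, bacteriocin level, pH of the medium.
    N, B, pH = y
    mu = 0.8 * np.exp(0.1 * (T - 20.0))   # assumed temperature-dependent growth rate (1/h)
    growth = mu * N * (1.0 - N / 1e9)     # logistic growth toward an assumed carrying capacity
    kill = 5e-2 * B * N                   # inhibition by bacteriocins (assumed rate)
    dN = growth - kill
    dB = 1e-10 * N - 1e-2 * B + 2e-2      # release by bacteria, decay, artificial dosing (assumed)
    dpH = -1e-10 * growth                 # slow acidification as lactic acid bacteria grow (assumed)
    return [dN, dB, dpH]

# 48 h of processing at 10 degrees C, starting from a small contamination event.
sol = solve_ivp(rhs, (0.0, 48.0), [1e3, 0.0, 6.8], args=(10.0,), dense_output=True)
print(sol.y[:, -1])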
Procedia PDF Downloads 148
294 A Comprehensive Finite Element Model for Incremental Launching of Bridges: Optimizing Construction and Design
Authors: Mohammad Bagher Anvari, Arman Shojaei
Abstract:
Incremental launching, a widely adopted bridge erection technique, offers numerous advantages for bridge designers. However, accurately simulating and modeling the dynamic behavior of the bridge during each step of the launching process proves to be tedious and time-consuming. The perpetual variation of internal forces within the deck during construction stages adds complexity, exacerbated further by considerations of other load cases, such as support settlements and temperature effects. As a result, there is an urgent need for a reliable, simple, economical, and fast algorithmic solution to model bridge construction stages effectively. This paper presents a novel Finite Element (FE) model that focuses on studying the static behavior of bridges during the launching process. Additionally, a simple method is introduced to normalize all quantities in the problem. The new FE model overcomes the limitations of previous models, enabling the simulation of all stages of launching, which conventional models fail to achieve due to underlying assumptions. By leveraging the results obtained from the new FE model, this study proposes solutions to improve the accuracy of conventional models, particularly for the initial stages of bridge construction that have been neglected in previous research. The research highlights the critical role played by the first span of the bridge during the initial stages, a factor often overlooked in existing studies. Furthermore, a new and simplified model termed the "semi-infinite beam" model, is developed to address this oversight. By utilizing this model alongside a simple optimization approach, optimal values for launching nose specifications are derived. The practical applications of this study extend to optimizing the nose-deck system of incrementally launched bridges, providing valuable insights for practical usage. In conclusion, this paper introduces a comprehensive Finite Element model for studying the static behavior of bridges during incremental launching. The proposed model addresses limitations found in previous approaches and offers practical solutions to enhance accuracy. The study emphasizes the importance of considering the initial stages and introduces the "semi-infinite beam" model. Through the developed model and optimization approach, optimal specifications for launching nose configurations are determined. This research holds significant practical implications and contributes to the optimization of incrementally launched bridges, benefiting both the construction industry and bridge designers.Keywords: incremental launching, bridge construction, finite element model, optimization
Procedia PDF Downloads 110
293 A Mixed Method Approach for Modeling Entry Capacity at Rotary Intersections
Authors: Antonio Pratelli, Lorenzo Brocchini, Reginald Roy Souleyrette
Abstract:
A rotary is a traffic circle intersection where vehicles entering from the branches give priority to the circulating flow. Vehicles entering the intersection from converging roads move around the central island and weave out of the circle into their desired exiting branch. This creates merging and diverging conflicts between any entry and its successive exit, i.e., within a section. Therefore, rotary capacity models are usually based on the weaving of the different movements in any section of the circle, and the maximum rate of flow is then related to each weaving section of the rotary. Nevertheless, the single-section capacity value does not lead to the typical performance characteristics of the intersection, such as the average entry delay, which is directly linked to its level of service. From another point of view, modern roundabout capacity models are based on the limitation of the flow entering from a single entrance due to the amount of flow circulating in front of the entrance itself. Modern roundabout capacity models also generally lead to a performance evaluation. This paper aims to incorporate a modern roundabout capacity model into an old rotary capacity method to obtain from the latter the single-entry capacity and ultimately derive the related performance indicators. Put simply, the main objective is to calculate the average delay of each single entrance in order to apply the most common Highway Capacity Manual, or HCM, criteria. The paper is organized as follows: firstly, the rotary and roundabout capacity models are sketched, and a brief introduction to the model combination technique is given, with some practical instances. The following section is devoted to summarizing the old TRRL rotary capacity model and the most recent HCM 7th edition modern roundabout capacity model. Then, the two models are combined through an iteration-based algorithm, specially set up and linked to the concept of roundabout total capacity, i.e., the value reached under a traffic flow pattern leading to the simultaneous congestion of all roundabout entrances. The solution is the average delay for each entrance of the rotary, from which its respective level of service is estimated. In view of further experimental applications, at this research stage, a collection of existing rotary intersections operating with the priority-to-circle rule has already been started, both in the US and in Italy. The rotaries have been selected by direct inspection of aerial photos through a map viewer, namely Google Earth. Each instance has been recorded by location, general setting (urban or rural), and its main geometrical patterns. Finally, concluding remarks are drawn, and a discussion of some further research developments is opened.
Keywords: mixed methods, old rotary and modern roundabout capacity models, total capacity algorithm, level of service estimation
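For reference, a commonly cited form of the HCM control-delay equation for a roundabout entry, which underlies the level-of-service estimation described above, is

\[ d = \frac{3600}{c} + 900\,T\left[(x - 1) + \sqrt{(x - 1)^2 + \frac{(3600/c)\,x}{450\,T}}\right] + 5\min(x,1), \]

where c is the entry capacity (veh/h), v the entry demand flow (veh/h), x = v/c the degree of saturation, T the analysis period (h), and d the control delay (s/veh); this textbook form is quoted for orientation and may differ in detail from the edition-specific formulation used by the authors.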
Procedia PDF Downloads 92
292 Building an Opinion Dynamics Model from Experimental Data
Authors: Dino Carpentras, Paul J. Maher, Caoimhe O'Reilly, Michael Quayle
Abstract:
Opinion dynamics is a sub-field of agent-based modeling that focuses on people’s opinions and their evolution over time. Despite the rapid increase in the number of publications in this field, it is still not clear how to apply these models to real-world scenarios. Indeed, there is no agreement on how people update their opinion while interacting. Furthermore, it is not clear whether different topics will show the same dynamics (e.g., more polarized topics may behave differently). These problems are mostly due to the lack of experimental validation of the models. Some previous studies started bridging this gap in the literature by directly measuring people’s opinions before and after an interaction. However, these experiments force people to express their opinion as a number instead of using natural language (and then, eventually, encoding it as numbers). This is not the way people normally interact, and it may strongly alter the measured dynamics. Another limitation of these studies is that they usually average all the topics together, without checking whether different topics may show different dynamics. In our work, we collected data from 200 participants on 5 unpolarized topics. Participants expressed their opinions in natural language (“agree” or “disagree”). We also measured the certainty of their answer, expressed as a number between 1 and 10. However, this value was not shown to other participants, to keep the interaction based on natural language. We then showed the opinion (and not the certainty) of another participant and, after a distraction task, we repeated the measurement. To make the data compatible with opinion dynamics models, we multiplied opinion and certainty to obtain a new parameter (here called “continuous opinion”) ranging from -10 to +10 (using agree = 1 and disagree = -1). We first checked the 5 topics individually, finding that all of them behaved in a similar way despite having different initial opinion distributions. This suggested that the same model could be applied to different unpolarized topics. We also observed that people tended to maintain similar levels of certainty, even when they changed their opinion. This is a strong violation of what is suggested by common models, where people starting at, for example, +8 will first move towards 0 instead of directly jumping to -8. We also observed social influence, meaning that people exposed to “agree” were more likely to move to higher levels of continuous opinion, while people exposed to “disagree” were more likely to move to lower levels. However, we also observed that the effect of influence was smaller than the effect of random fluctuations. This configuration is also different from standard models, where noise, when present, is usually much smaller than the effect of social influence. Starting from this, we built an opinion dynamics model that explains more than 80% of the data variance. This model was also able to show the natural emergence of polarization from unpolarized states. This experimental approach offers a new way to build models grounded in experimental data. Furthermore, the model offers new insight into the fundamental terms of opinion dynamics models.
Keywords: experimental validation, micro-dynamics rule, opinion dynamics, update rule
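A minimal agent-update sketch consistent with the findings reported above (a small pull toward the partner's expressed opinion plus a larger random fluctuation, bounded to the measured scale; the parameter values are illustrative assumptions) might look like this:

import random

def update(continuous_opinion: float, partner_sign: int,
           influence: float = 0.5, noise_sd: float = 1.5) -> float:
    """One interaction step: continuous_opinion = sign(agree/disagree) * certainty, in [-10, 10].

    partner_sign is +1 if the partner expressed 'agree' and -1 for 'disagree'.
    The influence and noise magnitudes are illustrative assumptions, chosen so that
    random fluctuation dominates the social-influence term, as reported above.
    """
    shifted = continuous_opinion + influence * partner_sign + random.gauss(0.0, noise_sd)
    return max(-10.0, min(10.0, shifted))

# Example: an agent at +6 (agree, certainty 6) exposed to a partner who disagrees.
print(update(6.0, partner_sign=-1))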
Procedia PDF Downloads 113
291 Assessing the Material Determinants of Cavity Polariton Relaxation using Angle-Resolved Photoluminescence Excitation Spectroscopy
Authors: Elizabeth O. Odewale, Sachithra T. Wanasinghe, Aaron S. Rury
Abstract:
Cavity polaritons form when molecular excitons strongly couple to photons in carefully constructed optical cavities. These polaritons, which are hybrid light-matter states possessing a unique combination of photonic and excitonic properties, present the opportunity to manipulate the properties of various semiconductor materials. The systematic manipulation of materials through polariton formation could potentially improve the functionalities of many optoelectronic devices such as lasers, light-emitting diodes, photon-based quantum computers, and solar cells. However, the prospects of leveraging polariton formation for novel devices and device operation depend on more complete connections between the properties of molecular chromophores and the hybrid light-matter states they form, and establishing these connections remains an outstanding scientific goal. Specifically, for most optoelectronic applications, it is paramount to understand how polariton formation affects the spectra of light absorbed by molecules coupled strongly to cavity photons. An essential feature of a polariton state is its dispersive energy, which occurs due to the enhanced spatial delocalization of the polaritons relative to bare molecules. To leverage the spatial delocalization of cavity polaritons, angle-resolved photoluminescence excitation spectroscopy was employed in characterizing light emission from the polaritonic states. Using lasers of appropriate energies, the polariton branches were resonantly excited to understand how molecular light absorption changes under different strong light-matter coupling conditions. Since an excited state has a finite lifetime, the photon absorbed by the polariton decays non-radiatively into lower-lying molecular states, from which radiative relaxation to the ground state occurs. The resulting fluorescence is collected across several angles of excitation incidence. By modeling the behavior of the light emission observed from the lower-lying molecular state and combining this result with the output of angle-resolved transmission measurements, inferences are drawn about how the behavior of molecules changes when they form polaritons. These results show how the intrinsic properties of molecules, such as the excitonic lifetime, affect the rate at which the polaritonic states relax. While it is true that the lifetime of the photon mediates the rate of relaxation in a cavity, the results from this study provide evidence that the lifetime of the molecular exciton also limits the rate of polariton relaxation.
Keywords: fluorescence, molecules in cavities, optical cavity, photoluminescence excitation spectroscopy, strong coupling
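As orientation for the dispersive polariton energies discussed above, the standard two-coupled-oscillator description (not taken from the abstract) gives the upper and lower polariton branches as a function of incidence angle θ:

\[ E_{\pm}(\theta) = \frac{E_c(\theta) + E_x}{2} \pm \frac{1}{2}\sqrt{\left(E_c(\theta) - E_x\right)^2 + \left(\hbar\Omega_R\right)^2}, \qquad E_c(\theta) = \frac{E_{c,0}}{\sqrt{1 - \sin^2\theta / n_{\mathrm{eff}}^2}}, \]

where E_x is the exciton energy, E_c(θ) the angle-dependent cavity photon energy, ħΩ_R the Rabi splitting, and n_eff the effective intracavity refractive index.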
Procedia PDF Downloads 77
290 Numerical Investigation on Transient Heat Conduction through Brine-Spongy Ice
Authors: S. R. Dehghani, Y. S. Muzychka, G. F. Naterer
Abstract:
The ice accretion of salt water on cold substrates creates brine-spongy ice. This type of ice is a mixture of pure ice and liquid brine. A real-world case of the formation of this type of ice is superstructure icing, which occurs on marine vessels and offshore structures in cold and harsh conditions. Transient heat transfer through this medium causes phase changes between brine pockets and pure ice. Salt rejection during the process of transient heat conduction increases the salinity of brine pockets until they reach a local equilibrium state. In this process, changing the sensible heat of the ice and brine pockets is not the only effect of passing heat through the medium; latent heat also plays an important role and affects the mechanism of heat transfer. In this study, a new analytical model for evaluating heat transfer through brine-spongy ice is suggested. This model considers heat transfer together with partial solidification and melting. The properties of brine-spongy ice are obtained using the properties of liquid brine and pure ice. A numerical solution using the Method of Lines discretizes the medium to obtain a set of ordinary differential equations. Boundary conditions are chosen according to one of the applicable cases for this type of ice: one side is considered a thermally insulated surface, and the other side is assumed to be suddenly subjected to a constant-temperature boundary. All cases are evaluated at temperatures between -20 °C and the freezing point of brine-spongy ice. Solutions are carried out for different salinities from 5 to 60 ppt. Time steps and space intervals are chosen so as to maintain a stable and fast solution. The variations of temperature, brine volume fraction, and brine salinity versus time are the most important outputs of this study. Results show that transient heat conduction through brine-spongy ice can create a wide range of brine pocket salinities, from the initial salinity up to 180 ppt. The rate of variation of temperature is found to be slower for high-salinity cases. The maximum rate of heat transfer occurs at the start of the simulation; this rate decreases as time passes. Brine pockets are smaller in portions closer to the colder side than in those near the warmer side. At the start of the solution, the numerical scheme tends to develop instabilities. This is because of the sharp variation of temperature at the start of the process. Adjusting the intervals improves the unstable situation. The analytical model, used with a numerical scheme, is capable of predicting the thermal behavior of brine-spongy ice. This model and its numerical solutions are important for modeling the freezing of salt water and ice accretion on cold structures.
Keywords: method of lines, brine-spongy ice, heat conduction, salt water
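As a minimal illustration of the Method of Lines discretization described above, the sketch below solves 1-D transient conduction in a slab with an insulated face and a suddenly imposed cold boundary; it assumes constant properties and omits the latent-heat and brine-salinity coupling of the full model:

import numpy as np
from scipy.integrate import solve_ivp

L, n = 0.05, 51                 # slab thickness (m) and number of nodes (assumed values)
dx = L / (n - 1)
alpha = 1.1e-6                  # assumed constant thermal diffusivity (m^2/s)
T_init, T_cold = -2.0, -20.0    # initial and boundary temperatures (deg C)

def rhs(t, T):
    dT = np.zeros_like(T)
    dT[1:-1] = alpha * (T[2:] - 2.0 * T[1:-1] + T[:-2]) / dx**2
    dT[0] = alpha * 2.0 * (T[1] - T[0]) / dx**2   # insulated (zero-flux) face
    dT[-1] = 0.0                                  # constant-temperature face
    return dT

T0 = np.full(n, T_init)
T0[-1] = T_cold                 # sudden exposure to the cold boundary
sol = solve_ivp(rhs, (0.0, 3600.0), T0, method="BDF")
print(sol.y[:, -1])             # temperature profile after one hour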
Procedia PDF Downloads 220
289 The Shadowy History of Berlin Underground: 1939-45/Der Schattenmann: Tagebuchaufzeichnungen 1938-1945
Authors: Christine Wiesenthal
Abstract:
This paper asks how to read a particularly vexed and complicated life writing text. For over half a century, the wartime journals of Ruth Andreas Friedrich (1901-1977) circulated as among a handful of more or less authoritative and “authentic” first-hand accounts of German resistance under Hitler. A professional journalist, Andreas Friedrich is remembered today largely through her publications at the war’s end, which appeared in English as Berlin Underground (published by Henry Holt in 1947), just before their publication in Germany as Der Schattenmann “The Shadow Man” (also in 1947). A British edition by the now obscure Latimer House Limited (London) followed in 1948; it is based closely on but is not identical to, the Henry Holt American edition, which in turn differs significantly from its German counterpart. Both Berlin Underground and Der Schattenmann figure Andreas-Friedrich as a key figure in an anti-fascist cell that operated in Berlin under the code name “Uncle Emil,” and provide a riveting account of political terror, opportunism, and dissent under the Nazi regime. Recent scholars have, however, begun to raise fascinating and controversial questions about Andreas-Friedrich’s own writing/reconstruction process in compiling the journals and about her highly selective curatorial role and claims. The apparent absence of any surviving original manuscript for Andreas-Friedrich’s journals amplifies the questions around them. Crucially, so too does the role of the translator of the English editions of Berlin Underground, the enigmatic June Barrows Mussey, a subject that has thus far gone virtually unnoticed and which this paper will focus on. Mussey, a prolific American translator, simultaneously cultivated a career as a professional magician, publishing a number of books on that subject under the alias Henry Hay. While the record indicates that Mussey attempted to compartmentalize his professional life, research into the publishing and translation history of Berlin Underground suggests that the two roles converge in the fact of the translator’s invisibility, by effacing the traces of his own hand and leaving unmarked his own significant textual interventions, Mussey, in effect, edited, abridged, and altered Andreas Friedrich’s journals for the second time. In fact, it could be said that while the fictitious “Uncle Emil” is positioned as “the shadow man” of the German edition, Mussey himself also emerges as precisely that in the English rendering of the journals. The implications of Mussey’s translation of Andreas Friedrich’s journals are one of the most important un-examined gaps in the shadowy publishing history of Berlin Underground, a history full of “tricks” (Mussey’s words) and illusions of transparency. Based largely on archival research of unpublished materials and methods of close reading and comparative analysis, this study will seek to convey some preliminary insights and exploratory work and frame questions toward what is ultimately envisioned as an experimental project in poetic historiography. As this work is still in the early stages, it would be especially welcome to have the opportunity provided by this conference to connect with a community of life writing colleagues who might help think through some of the challenges and possibilities that lie ahead.Keywords: women’s wartime diaries, translation studies, auto/biographical theory, politics of life writing
Procedia PDF Downloads 57
288 Experimental and Computational Fluid Dynamic Modeling of a Progressing Cavity Pump Handling Newtonian Fluids
Authors: Deisy Becerra, Edwar Perez, Nicolas Rios, Miguel Asuaje
Abstract:
The Progressing Cavity Pump (PCP) is a type of positive displacement pump that is gaining greater importance as capable artificial lift equipment in the heavy oil field. The most commonly used PCP is the single-lobe pump, which consists of a single external helical rotor turning eccentrically inside a double internal helical stator. This type of pump was analyzed using experimental and Computational Fluid Dynamic (CFD) approaches on the DCAB031 model installed in a closed-loop arrangement. Experimental measurements were taken to determine the pressure rise and flow rate with a flow control valve installed at the outlet of the pump. The flow rate handled was measured by a FLOMEC-OM025 oval gear flowmeter. For each flow rate considered, the pump’s rotational speed and power input were controlled using an Invertek Optidrive E3 frequency driver. Once steady-state operation was attained, pressure rise measurements were taken with a Sper Scientific wide-range digital pressure meter. In this study, water and three Newtonian oils of different viscosities were tested at different rotational speeds. The CFD model implementation was developed on Star-CCM+ using an Overset Mesh that includes the relative motion between rotor and stator, which is one of the main contributions of the present work. The simulations are capable of providing detailed information about the pressure and velocity fields inside the device in laminar and unsteady regimes. The simulations show good agreement with the experimental data, with a Mean Squared Error (MSE) under 21%, and the Grid Convergence Index (GCI) was calculated for the validation of the mesh, obtaining a value of 2.5%. In this case, three different rotational speeds were evaluated (200, 300, 400 rpm), and a directly proportional relationship between the rotational speed of the rotor and the calculated flow rate can be shown. The maximum production rates at the different speeds were 3.8 GPM, 4.3 GPM, and 6.1 GPM for water, and 1.8 GPM, 2.5 GPM, and 3.8 GPM for the oils tested, respectively. Likewise, an inversely proportional relationship between the viscosity of the fluid and pump performance was observed, since the viscous oils showed the lowest pressure increase and the lowest volumetric flow pumped, with a degradation of around 30% in the pressure rise between performance curves. Finally, the Productivity Index (PI) remained approximately constant for the different speeds evaluated; however, between fluids there is a reduction due to viscosity.
Keywords: computational fluid dynamic, CFD, Newtonian fluids, overset mesh, PCP pressure rise
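For reference, the Grid Convergence Index quoted above is conventionally computed (in Roache's form, assumed here; the exact variant used by the authors is not stated) as

\[ \mathrm{GCI} = \frac{F_s\,|\varepsilon|}{r^{p} - 1}, \qquad \varepsilon = \frac{f_2 - f_1}{f_1}, \]

where f1 and f2 are the fine- and coarse-grid solutions of the monitored quantity, r the grid refinement ratio, p the observed order of convergence, and F_s a safety factor (typically 1.25 for a three-grid study).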
Procedia PDF Downloads 131
287 Corrosion Protection and Failure Mechanism of ZrO₂ Coating on Zirconium Alloy Zry-4 under Varied LiOH Concentrations in Lithiated Water at 360°C and 18.5 MPa
Authors: Guanyu Jiang, Donghai Xu, Huanteng Liu
Abstract:
After the Fukushima-Daiichi accident, the development of accident tolerant fuel cladding materials to improve reactor safety has become a hot topic in the nuclear industry. ZrO₂ has a satisfactory neutron economy and can guarantee the fission chain reaction process, which makes it a promising coating for zirconium alloy cladding. Maintaining good corrosion resistance in the primary coolant loop during normal operation of Pressurized Water Reactors is a prerequisite for ZrO₂ as a protective coating on zirconium alloy cladding. Research on the corrosion performance of ZrO₂ coatings in nuclear water chemistry is relatively scarce, and existing reports have failed to provide an in-depth explanation of the failure causes of ZrO₂ coatings. Herein, a detailed corrosion process of a ZrO₂ coating in lithiated water at 360 °C and 18.5 MPa is proposed based on experimental research and molecular dynamics simulation. The lithiated water used in the present work was deaerated and had a dissolved oxygen concentration of < 10 ppb. The concentration of Li (as LiOH) was set at 2.3 ppm, 70 ppm, and 500 ppm, respectively. Corrosion tests were conducted in a static autoclave. Modeling and the corresponding calculations were performed with Materials Studio software. The calculations of adsorption energy and dynamics parameters were carried out with the Energy and Dynamics tasks of the Forcite module, respectively. The protective effect and failure mechanism of the ZrO₂ coating on Zry-4 under varied LiOH concentrations were further revealed by comparison with the coating's corrosion performance in pure water (namely, 0 ppm Li). The ZrO₂ coating provided favorable corrosion protection, with localized corrosion occurring at low LiOH concentrations. Factors influencing corrosion resistance mainly include pitting corrosion extension, enhanced Li⁺ permeation, short-circuit diffusion of O²⁻, and ZrO₂ phase transformation. In highly concentrated LiOH solutions, intergranular corrosion, internal oxidation, and perforation resulted in coating failure. Zr ions were released to the coating surface to form flocculent ZrO₂ and ZrO₂ clusters due to the strong diffusion and dissolution tendency of α-Zr in the Zry-4 substrate. Considering that the primary water of Pressurized Water Reactors usually contains 2.3 ppm Li, the stability of ZrO₂ makes it a candidate fuel cladding coating material. Under unfavorable conditions with high Li concentrations, more boric acid should be added to alleviate caustic corrosion of the ZrO₂ coating once it is used. This work provides a reference for understanding the service behavior of nuclear coatings under variable water chemistry conditions and promotes the in-pile application of ZrO₂ coatings. Keywords: ZrO₂ coating, Zry-4, corrosion behavior, failure mechanism, LiOH concentration
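The adsorption energies extracted from such simulations are typically obtained from the usual definition, namely the energy of the combined surface-adsorbate system minus the energies of its isolated parts, with a negative value indicating favorable adsorption. The short sketch below only illustrates that bookkeeping; the placeholder energies are not results from the study.

```python
def adsorption_energy(e_total, e_surface, e_adsorbate):
    """E_ads = E(surface + adsorbate) - E(surface) - E(adsorbate);
    a negative value indicates favorable adsorption."""
    return e_total - (e_surface + e_adsorbate)

# Placeholder energies in kcal/mol (illustrative only, not the study's output)
e_ads = adsorption_energy(e_total=-1520.4, e_surface=-1405.7, e_adsorbate=-98.2)
print(f"adsorption energy = {e_ads:.1f} kcal/mol")
```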
Procedia PDF Downloads 92
286 Novel EGFR Ectodomain Mutations and Resistance to Anti-EGFR and Radiation Therapy in H&N Cancer
Authors: Markus Bredel, Sindhu Nair, Hoa Q. Trummell, Rajani Rajbhandari, Christopher D. Willey, Lewis Z. Shi, Zhuo Zhang, William J. Placzek, James A. Bonner
Abstract:
Purpose: EGFR-targeted monoclonal antibodies (mAbs) provide clinical benefit in some patients with H&N squamous cell carcinoma (HNSCC), but others progress with minimal response. Missense mutations in the EGFR ectodomain (ECD) can be acquired under mAb therapy and mimic the effect of large deletions on receptor untethering and activation. Little is known about the contribution of EGFR ECD mutations to EGFR activation and anti-EGFR response in HNSCC. Methods: We selected patient-derived HNSCC cells (UM-SCC-1) for resistance to the mAb Cetuximab (CTX) by repeated, stepwise exposure to mimic what may occur clinically and identified two concurrent EGFR ECD mutations (UM-SCC-1R). We examined the competence of the mutants to bind EGF ligand or CTX. We assessed the potential impact of the mutations through visual analysis of space-filling models of the native side chains in the original structures vs. their respective side-chain mutations. We performed CRISPR in combination with site-directed mutagenesis to test for the effect of the mutants on ligand-independent EGFR activation and sorting. We determined the effects on receptor internalization, endocytosis, downstream signaling, and radiation sensitivity. Results: UM-SCC-1R cells carried two non-synonymous missense mutations (G33S and N56K) mapping to domain I in or near the EGF-binding pocket of the EGFR ECD. Structural modeling predicted that these mutants restrict the adoption of a tethered, inactive EGFR conformation while not permitting association of EGFR with the EGF ligand or CTX. Binding studies confirmed that the mutant, untethered receptor displayed a reduced affinity for both EGF and CTX but demonstrated sustained activation and presence at the cell surface with diminished internalization and sorting for endosomal degradation. Single- and double-mutant models demonstrated that the G33S mutant is dominant over the N56K mutant in its effect on EGFR activation and EGF binding. CTX-resistant UM-SCC-1R cells demonstrated cross-resistance to the mAb Panitumumab but, paradoxically, remained sensitive to the reversible receptor tyrosine kinase inhibitor Erlotinib. Conclusions: HNSCC cells can select for EGFR ECD mutations under EGFR mAb exposure that converge to trap the receptor in an open, constitutively activated state. These mutants impede the receptor’s competence to bind mAbs and EGF ligand and alter its endosomal trafficking, possibly explaining certain cases of clinical mAb and radiation resistance. Keywords: head and neck cancer, EGFR mutation, resistance, cetuximab
Procedia PDF Downloads 97
285 The French Ekang Ethnographic Dictionary. The Quantum Approach
Authors: Henda Gnakate Biba, Ndassa Mouafon Issa
Abstract:
Dictionaries modeled on the Western model [tonic accent languages] are not suitable and do not account for tonal languages phonologically, which is why the [prosodic and phonological] ethnographic dictionary was designed. It is a glossary that expresses the tones and the rhythm of words. It recreates exactly the speaking or singing of a tonal language and allows the non-speaker of this language to pronounce the words as if they were a native. It is a dictionary adapted to tonal languages. It was built from ethnomusicological theorems and phonological processes, according to Jean-Jacques Rousseau's 1776 hypothesis that “to say and to sing were once the same thing”. Each word in the French dictionary finds its corresponding word in the ekaη language, and each ekaη word is written on a musical staff. This ethnographic dictionary is also an inventive, original and innovative research thesis: a contribution to the theoretical, musicological, ethnomusicological and linguistic conceptualization of languages, giving rise to the practice of interlocution between the social and cognitive sciences, the activities of artistic creation and the question of modeling in the human sciences: mathematics, computer science, translation automation and artificial intelligence. When you apply this theory to any text of a folksong in a world-tone language, you not only piece together the exact melody, rhythm, and harmonies of that song as if you knew it in advance but also recover the exact speaking of this language. The author believes that the issue of the disappearance of tonal languages and their preservation has been structurally resolved, as well as one of the greatest cultural equations related to the composition and creation of tonal, polytonal and random music. The experimentation confirming the theorization led to the design of a semi-digital, semi-analog application which translates the tonal languages of Africa (about 2,100 languages) into blues, jazz, world music, polyphonic music, tonal and atonal music, and deterministic and random music. To test this application, I use music reading and writing software that allows me to collect the data extracted from my mother tongue, which is already modeled in the musical staves saved in the ethnographic (semiotic) dictionary for automatic translation (volume 2 of the book). Translation is done from writing to writing, from writing to speech, and from writing to music. Mode of operation: you type a text on your computer, a structured song (chorus-verse), and you ask the machine for a melody in blues, jazz, world music, variety, etc. The software runs, giving you the option to choose harmonies, and then you select your melody. Keywords: music, language, entanglement, science, research
Procedia PDF Downloads 72
284 Modeling and Simulating Productivity Loss Due to Project Changes
Authors: Robert Pellerin, Michel Gamache, Remi Trudeau, Nathalie Perrier
Abstract:
The context of large engineering projects is particularly favorable to the appearance of engineering changes and contractual modifications. These elements are potential causes of claims. In this paper, we investigate one of the critical components of the claim management process: the calculation of the impacts of changes in terms of losses of productivity due to the need to accelerate some project activities. When project changes are initiated, delays can arise. Indeed, project activities are often executed in fast-tracking in an attempt to respect the completion date. But the acceleration of project execution and the resulting rework can entail important costs as well as induce productivity losses. In the past, numerous methods have been proposed to quantify the duration of delays, the gains achieved by project acceleration, and the loss of productivity. The calculations related to those changes can be divided into two categories: direct costs and indirect costs. Direct costs are easily quantifiable, as opposed to indirect costs, which are rarely taken into account during the calculation of the cost of an engineering change or contract modification, even though several research projects have addressed this subject. However, the proposed models have not yet been accepted by companies, nor have they been accepted in court. Those models require extensive data and are often seen as too specific to be used for all projects. These techniques also ignore resource constraints and the interdependencies between the causes of delays and the delays themselves. To resolve this issue, this research proposes a simulation model that mimics how major engineering changes or contract modifications are handled in large construction projects. The model replicates the use of overtime in a reactive scheduling mode in order to simulate the loss of productivity present when a project change occurs. Multiple tests were conducted to compare the results of the proposed simulation model with statistical analyses conducted by other researchers. Different scenarios were also run in order to determine the impact of the number of activities, the time of occurrence of the change, the availability of resources, and the type of project change on productivity loss. Our results demonstrate that the number of activities in the project is a critical variable influencing the productivity of a project. When changes occur, the presence of a large number of activities leads to a much lower productivity loss than a small number of activities. Productivity declines about 25 percent faster in 30-job projects than in 120-job projects. The moment of occurrence of a change also shows a significant impact on productivity. Indeed, the sooner the change occurs, the lower the productivity of the labor force. The availability of resources also impacts the productivity of a project when a change is implemented: there is a higher loss of productivity when the amount of resources is restricted. Keywords: engineering changes, indirect costs, overtime, productivity, scheduling, simulation
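The core mechanism described above (remaining activities pushed into overtime after a change, with each overtime hour carrying an efficiency penalty) can be sketched in a few lines. The sketch below is a toy illustration of that reactive-scheduling idea, not the authors' model; the work contents, penalty factor, and overtime policy are assumed values.

```python
import random

def simulate_productivity_loss(n_activities, change_at_fraction, overtime_hours=10,
                               base_hours=8, penalty_per_ot_hour=0.03, seed=1):
    """Toy reactive-scheduling sketch: after a change occurs, the remaining
    activities are executed in overtime, and each extra hour per day carries an
    efficiency penalty. Returns the relative productivity loss for the project."""
    random.seed(seed)
    work = [random.uniform(3, 10) for _ in range(n_activities)]  # work content, days
    # change_at_fraction = share of activities already completed when the change hits
    first_affected = int(change_at_fraction * n_activities)
    efficiency = 1.0 - penalty_per_ot_hour * (overtime_hours - base_hours)
    lost = sum(w * (1.0 - efficiency) for w in work[first_affected:])
    return lost / sum(work)

# The earlier the change, the more work is executed under degraded efficiency.
for frac in (0.1, 0.5, 0.9):
    loss = simulate_productivity_loss(n_activities=60, change_at_fraction=frac)
    print(f"change at {frac:.0%} progress -> productivity loss {loss:.1%}")
```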
Procedia PDF Downloads 243
283 Study of the Possibility of Adsorption of Heavy Metal Ions on the Surface of Engineered Nanoparticles
Authors: Antonina A. Shumakova, Sergey A. Khotimchenko
Abstract:
The relevance of this research is associated, on the one hand, with an ever-increasing volume of production and the expanding scope of application of engineered nanomaterials (ENMs) and, on the other hand, with the lack of sufficient scientific information on the nature of the interactions of nanoparticles (NPs) with components of biogenic and abiogenic origin. In particular, studying the effect of ENMs (TiO2 NPs, SiO2 NPs, Al2O3 NPs, fullerenol) on the toxicometric characteristics of common contaminants such as lead and cadmium is an important hygienic task, given the high probability of their joint presence in food products. Data were obtained characterizing a multidirectional change in the toxicity of model toxicants when they are co-administered with various types of ENMs. One explanation for this fact is the difference in the adsorption capacity of ENMs, which was further examined in in vitro studies. For this, a method was proposed based on in vitro modeling of conditions simulating the environment of the small intestine. It should be noted that the obtained data are in good agreement with the results of in vivo experiments: - with the combined administration of lead and TiO2 NPs, there were no significant changes in the accumulation of lead in rat liver; in other organs (kidneys, spleen, testes and brain), the lead content was lower than in animals of the control group; - studying the combined effect of lead and Al2O3 NPs, a multiple and significant increase in the accumulation of lead in rat liver was observed with an increase in the dose of Al2O3 NPs; for other organs, the introduction of various doses of Al2O3 NPs did not significantly affect the bioaccumulation of lead; - with the combined administration of lead and SiO2 NPs in different doses, there was no increase in lead accumulation in any of the studied organs. Based on the data obtained, it can be assumed that there are at least three scenarios of the combined effects of ENMs and chemical contaminants on the body: - ENMs bind contaminants quite firmly in the gastrointestinal tract, and such a complex becomes inaccessible (or poorly accessible) for absorption; in this case, it can be expected that the toxicity of both ENMs and contaminants will decrease; - the complex formed in the gastrointestinal tract has partial solubility and can penetrate biological membranes and/or physiological barriers of the body; in this case, ENMs can play the role of a kind of conductor for contaminants and, thus, their penetration into the internal environment of the body increases, thereby increasing the toxicity of contaminants; - ENMs and contaminants do not interact with each other in any way, so the toxicity of each of them is determined only by its quantity and does not depend on the quantity of the other component. The authors hypothesized that the degree of adsorption of various elements on the surface of ENMs may be a unique characteristic of their action, allowing a more accurate understanding of the processes occurring in a living organism. Keywords: absorption, cadmium, engineered nanomaterials, lead
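The abstract does not state how the degree of adsorption was quantified from the in vitro data; one common way to express the adsorption capacity of a nanomaterial from equilibrium measurements is a Langmuir isotherm fit, sketched below with invented numbers purely for illustration.

```python
# Hypothetical example of quantifying adsorption capacity from equilibrium data
# with a linearized Langmuir isotherm: Ce/qe = Ce/qmax + 1/(KL*qmax).
# The values below are invented for illustration; the study's own data and
# chosen isotherm model are not given in the abstract.
ce = [0.5, 1.0, 2.0, 4.0, 8.0]    # equilibrium Pb concentration [mg/L]
qe = [4.5, 7.8, 12.1, 16.4, 19.6] # adsorbed amount [mg per g of NPs]

# Least-squares fit of y = Ce/qe against x = Ce
x = ce
y = [c / q for c, q in zip(ce, qe)]
n = len(x)
mean_x, mean_y = sum(x) / n, sum(y) / n
sxx = sum((xi - mean_x) ** 2 for xi in x)
sxy = sum((xi - mean_x) * (yi - mean_y) for xi, yi in zip(x, y))
slope = sxy / sxx                 # = 1 / qmax
intercept = mean_y - slope * mean_x  # = 1 / (KL * qmax)
qmax = 1.0 / slope
kl = 1.0 / (intercept * qmax)
print(f"qmax = {qmax:.1f} mg/g, KL = {kl:.2f} L/mg")
```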
Procedia PDF Downloads 91
282 A Principal’s Role in Creating and Sustaining an Inclusive Environment
Authors: Yazmin Pineda Zapata
Abstract:
Leading a complete school and culture transformation can be a daunting task for any administrator. This is especially true when change agents are advocating for inclusive reform in their schools. As leaders embark on this journey, they must recognize that an inclusive environment is not a place, a classroom, or a resource setting; it is a place of acceptance nurtured by supportive and meaningful learning opportunities where all students can thrive. A qualitative approach, phenomenology, was used to investigate principals’ actions and behaviors that supported inclusive schooling for students with disabilities. Specifically, this study sought to answer the following research question: How do leaders develop and maintain inclusive education? Fourteen K-12 principals, purposefully selected from various sources (e.g., School Wide Integrated Framework for Transformation (SWIFT), The Maryland Coalition for Inclusive Education (MCIE), The Arc of Texas Inclusion Works organization, The Association for Persons with Severe Handicaps (TASH), the CAL State Summer Institute in San Marcos, and the PEAK Parent Center) and/or other recognitions, were interviewed individually using a semi-structured protocol. Upon completion of data collection, all interviews were transcribed and marked using a priori coding to analyze the responses and establish a correlation among Villa and Thousand’s five organizational supports to achieve inclusive educational reform: Vision, Skills, Incentives, Resources, and Action Plan. The findings of this study reveal the insights of principals who met specific criteria and whose schools had been highlighted as exemplary inclusive schools. Results show that by implementing the five organizational supports, principals were able to develop and sustain successful inclusive environments where both teachers and students were motivated, made capable, and supported through the redefinition and restructuring of systems within the school. Various key details of the five variables for change depict essential components within these systems, which include quality professional development, coaching and modeling of co-teaching strategies, collaborative co-planning, teacher leadership, and continuous stakeholder (e.g., teachers, students, support staff, and parents) involvement. The administrators in this study demonstrated the valuable benefits of inclusive education for students with disabilities and their typically developing peers. Together with their teaching staff and school community, school leaders became capable stakeholders who promoted the vision of inclusion, planned a structured approach, and took action to make it a reality. Keywords: inclusive education, leaders, principals, shared decision-making, shared leadership, special education, sustainable change
Procedia PDF Downloads 78
281 Modeling of Foundation-Soil Interaction Problem by Using Reduced Soil Shear Modulus
Authors: Yesim Tumsek, Erkan Celebi
Abstract:
In order to simulate the infinite soil medium for the soil-foundation interaction problem, the essential geotechnical parameter on which the foundation stiffness depends is the value of the soil shear modulus. This parameter directly affects the site and structural response of the considered model under earthquake ground motions. The strain dependence of the shear modulus under cyclic loads makes it difficult to estimate an accurate value for the computation of foundation stiffness in a successful dynamic soil-structure interaction analysis. The aim of this study is to discuss in detail how to use an appropriate value of the soil shear modulus in computational analyses and to evaluate the effect of the variation of shear modulus with strain on the impedance functions used in the sub-structure method for idealizing the soil-foundation interaction problem. Herein, the impedance functions are composed of springs and dashpots to represent the frequency-dependent stiffness and damping characteristics at the soil-foundation interface. Earthquake-induced vibration energy is dissipated into the soil by both radiation and hysteretic damping. Therefore, flexible-base system damping, as well as the variability in shear strengths, should be considered in the calculation of impedance functions to achieve a more realistic dynamic soil-foundation interaction model. In this study, a MATLAB code was written for these purposes. The case-study example chosen for the analysis is a 4-story reinforced concrete building located in Istanbul, consisting of shear walls and moment-resisting frames, with a total height of 12 m from the basement level. The foundation system consists of two strip footings of different sizes on clayey soils of different plasticity (herein, PI = 13 and 16). In the first stage of this study, the shear modulus reduction factor was not considered in the MATLAB algorithm. The static stiffnesses, dynamic stiffness modifiers, and embedment correction factors of two rigid rectangular foundations, measuring 2 m wide by 17 m long below the moment frames and 7 m wide by 17 m long below the shear walls, were obtained for the translational and rocking vibrational modes. Afterwards, their dynamic impedance functions were calculated for the reduced shear modulus through the developed MATLAB code. The embedment effect of the foundation is also considered in these analyses. It can easily be seen from the analysis results that the strain induced in the soil depends on the extent of the earthquake demand. It is clearly observed that when the strain range increases, the dynamic stiffness of the foundation medium decreases dramatically. The overall response of the structure can be affected considerably because of the degradation in soil stiffness, even for a moderate earthquake. Therefore, it is very important to arrive at a corrected dynamic shear modulus for earthquake analysis including soil-structure interaction. Keywords: clay soil, impedance functions, soil-foundation interaction, sub-structure approach, reduced shear modulus
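The stiffness side of such impedance functions, and its sensitivity to the strain-compatible shear modulus, can be illustrated with the classic rigid-footing expressions for an elastic half-space. The sketch below uses the equivalent-circular-footing formulas (Richart/Lysmer type) as a simplified stand-in for the paper's rectangular-footing formulation; the soil properties and reduction factors are assumed, illustrative values, not the case study's data.

```python
import math

def foundation_impedance(B, L, G_max, reduction_factor, nu, rho):
    """Simplified frequency-independent impedance of a rigid surface footing,
    using classic equivalent-circular-footing expressions and a
    strain-compatible shear modulus G = reduction_factor * G_max.
    A stand-in sketch, not the paper's rectangular-footing formulation."""
    G = reduction_factor * G_max            # strain-compatible shear modulus [Pa]
    Vs = math.sqrt(G / rho)                 # shear wave velocity [m/s]
    R = math.sqrt(B * L / math.pi)          # equivalent radius for translation [m]
    Kv = 4 * G * R / (1 - nu)               # vertical static stiffness [N/m]
    Kh = 8 * G * R / (2 - nu)               # horizontal static stiffness [N/m]
    Cv = 3.4 * R**2 / (1 - nu) * rho * Vs   # Lysmer's analog radiation dashpot [N*s/m]
    return Kv, Kh, Cv

# Illustrative clayey-soil values (assumed, not the case study's data)
for rf in (1.0, 0.5, 0.2):  # shear modulus reduction with increasing strain
    Kv, Kh, Cv = foundation_impedance(B=2.0, L=17.0, G_max=60e6,
                                      reduction_factor=rf, nu=0.45, rho=1800.0)
    print(f"G/Gmax={rf:.1f}: Kv={Kv/1e6:.0f} MN/m, Kh={Kh/1e6:.0f} MN/m, "
          f"Cv={Cv/1e6:.2f} MN*s/m")
```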
Procedia PDF Downloads 274
280 Flexible Design Solutions for Complex Free form Geometries Aimed to Optimize Performances and Resources Consumption
Authors: Vlad Andrei Raducanu, Mariana Lucia Angelescu, Ion Cinca, Vasile Danut Cojocaru, Doina Raducanu
Abstract:
By using smart digital tools, such as generative design (GD) and digital fabrication (DF), highly topical problems concerning resource optimization (materials, energy, time) can be solved, and applications or products of the free-form type can be created. In the new digital technology, materials are active, designed in response to a set of performance requirements, which imposes a total rethinking of old material practices. The article presents the key steps of the design procedure for a free-form architectural object - a column-type one with connections forming an adaptive 3D surface - by using the parametric design methodology and by exploiting the properties of conventional metallic materials. In parametric design, the form of the created object or space is shaped by varying the parameter values, and the relationships between the forms are described by mathematical equations. Digital parametric design is based on specific procedures, such as shape grammars, Lindenmayer systems, cellular automata, genetic algorithms or swarm intelligence, each of these procedures having limitations which make them applicable only in certain cases. In the paper, the design process stages and the shape-grammar-type algorithm are presented. The generative design process relies on two basic principles: the modeling principle and the generative principle. The generative method is based on a form-finding process, creating many 3D spatial forms using an algorithm conceived in order to apply its generating logic onto different input geometry. Once the algorithm is realized, it can be applied repeatedly to generate the geometry for a number of different input surfaces. The generated configurations are then analyzed through a technical or aesthetic selection criterion, and finally the optimal solution is selected. The endless generative capacity of the codes and algorithms used in digital design offers various conceptual possibilities and optimal solutions for both the increasing technical and environmental demands of the building industry and architecture. Constructions or spaces generated by parametric design can be specifically tuned in order to meet certain technical or aesthetic requirements. The proposed approach has direct applicability in sustainable architecture, offering important potential economic advantages, a flexible design (which can be changed until the end of the design process) and unique geometric models of high performance. Keywords: parametric design, algorithmic procedures, free-form architectural object, sustainable architecture
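Of the algorithmic procedures listed above, the Lindenmayer-system family is the easiest to show in a few lines: a string of symbols is rewritten in parallel by production rules and then interpreted geometrically. The sketch below illustrates that rewriting step only; the grammar is hypothetical and is not the shape-grammar algorithm developed in the paper.

```python
def expand_lsystem(axiom, rules, iterations):
    """Minimal Lindenmayer-system rewriting: each pass replaces every symbol
    by its production (symbols without a rule are kept unchanged)."""
    s = axiom
    for _ in range(iterations):
        s = "".join(rules.get(ch, ch) for ch in s)
    return s

# Hypothetical grammar for branching a column-type support into sub-members:
# F = structural member, [ ] = push/pop a branching point, +/- = rotations.
rules = {"F": "F[+F]F[-F]"}
print(expand_lsystem("F", rules, iterations=2))
# -> F[+F]F[-F][+F[+F]F[-F]]F[+F]F[-F][-F[+F]F[-F]]
```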
Procedia PDF Downloads 380
279 Mesoporous BiVO4 Thin Films as Efficient Visible Light Driven Photocatalyst
Authors: Karolina Ordon, Sandrine Coste, Malgorzata Makowska-Janusik, Abdelhadi Kassiba
Abstract:
Photocatalytic processes play a key role in the production of new sources of energy (such as hydrogen), the design of self-cleaning surfaces, and the preservation of the environment. The most challenging task concerns the purification of water with high efficiency. In this process, organic pollutants in solution are decomposed into simple, non-toxic compounds such as H2O and CO2. The best-known photocatalytic materials are the ZnO, CdS and TiO2 semiconductors, with TiO2 being particularly widely used as an efficient photocatalyst even though its high band gap of 3.2 eV exploits only the UV part of the solar spectrum. However, a promising material with visible-light-induced photoactivity was sought in the monoclinic polytype of BiVO4, which has an energy gap of about 2.4 eV. As in all heterogeneous photocatalysis, a high contact surface is required; thus, BiVO4 as a photocatalyst can be optimized by increasing its surface area through the synthesis of a mesoporous structure. The main goal of the present work consists in the synthesis and characterization of a BiVO4 mesoporous thin film. The sol-gel-based synthesis was carried out using standard surfactants such as P123 and F127. The thin film was deposited by spin- and dip-coating methods. Then, the structural analysis of the obtained material was performed by X-ray diffraction (XRD) and Raman spectroscopy. The surface of the resulting structure was investigated using scanning electron microscopy (SEM). Computer simulations modeling the optical and electronic properties of bulk BiVO4 were carried out using the DFT (density functional theory) methodology. The semiempirical parameterized method PM6 was used to compute the physical properties of BiVO4 nanostructures. The Raman and IR absorption spectra were also measured for the synthesized mesoporous material, and the results were compared with the theoretical predictions. The simulations of nanostructured BiVO4 pointed out the occurrence of quantum confinement for nanosized clusters, leading to a widening of the band gap. This result limits the relevance of nanosized objects for harvesting a wide part of the solar spectrum. Therefore, a balance was sought experimentally through the mesoporous nature of the films, devoted to enhancing the contact surface required for heterogeneous catalysis without lowering the nanocrystallite size below the critical size that induces an increased band gap. The present contribution discusses the relevant features of the mesoporous films with respect to their photocatalytic responses. Keywords: bismuth vanadate, photocatalysis, thin film, quantum-chemical calculations
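The band-gap widening reported for the nanosized clusters is the usual quantum-confinement effect, often estimated with the effective-mass (Brus) expression: the bulk gap plus a 1/R² confinement term minus a screened Coulomb term. The sketch below evaluates that expression; the effective masses and dielectric constant used for BiVO4 are assumed, illustrative values, not parameters taken from the study.

```python
import math

HBAR = 1.054571817e-34   # J*s
ME   = 9.1093837015e-31  # kg
E    = 1.602176634e-19   # C
EPS0 = 8.8541878128e-12  # F/m

def brus_band_gap(radius_nm, eg_bulk_ev, me_eff, mh_eff, eps_r):
    """Effective-mass (Brus) estimate of the size-dependent band gap of a
    nanocluster: bulk gap + confinement term - screened Coulomb term."""
    r = radius_nm * 1e-9
    confinement = (HBAR**2 * math.pi**2 / (2 * r**2)) * (1/(me_eff*ME) + 1/(mh_eff*ME))
    coulomb = 1.8 * E**2 / (4 * math.pi * EPS0 * eps_r * r)
    return eg_bulk_ev + (confinement - coulomb) / E

# Illustrative parameters only; effective masses and permittivity are assumptions.
for r in (1.0, 2.0, 5.0):
    print(f"R = {r:.0f} nm -> Eg approx. {brus_band_gap(r, 2.4, 0.9, 0.7, 68):.2f} eV")
```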
Procedia PDF Downloads 331
278 Impact of Primary Care Telemedicine Consultations On Health Care Resource Utilisation: A Systematic Review
Authors: Anastasia Constantinou, Stephen Morris
Abstract:
Background: The adoption of synchronous and asynchronous telemedicine modalities for primary care consultations has increased exponentially since the COVID-19 pandemic. However, there is limited understanding of how virtual consultations influence healthcare resource utilization and other quality measures, including safety, timeliness, efficiency, patient and provider satisfaction, cost-effectiveness and environmental impact. Aim: To quantify the rate of follow-up visits, emergency department visits, hospitalizations, and requests for investigations and prescriptions, and to comment on the effect on different quality measures associated with the different telemedicine modalities used for primary care services and primary care referrals to secondary care. Design and setting: Systematic review in primary care. Methods: A systematic search was carried out across three databases (Medline, PubMed and Scopus) between August and November 2023, using terms related to telemedicine, general practice, electronic referrals, follow-up, use and efficiency, and supported by citation searching. This was followed by screening according to pre-defined criteria, data extraction and critical appraisal. Narrative synthesis and meta-analysis of quantitative data were used to summarize findings. Results: The search identified 2230 studies; 50 studies are included in this review. Asynchronous modalities were prevalent in both primary care services (68%) and referrals from primary care to secondary care (83%), and most of the study participants were female (63.3%), with a mean age of 48.2. The average follow-up rate for virtual consultations in primary care was 28.4% (eVisits: 36.8%, secure messages: 18.7%, videoconference: 23.5%), with no significant difference between these modalities or face-to-face consultations. There was an average annual reduction in primary care visits of 0.09/patient, an increase in telephone visits of 0.20/patient, an increase in ED encounters of 0.011/patient, an increase in hospitalizations of 0.02/patient and an increase in out-of-hours visits of 0.019/patient. Laboratory testing was requested on average for 10.9% of telemedicine patients, imaging or procedures for 5.6% and prescriptions for 58.7% of patients. For referrals to secondary care, on average 36.7% of virtual referrals required a follow-up visit, with the average rate of follow-up for electronic referrals being higher than for videoconferencing (39.2% vs 23%, p=0.167). Technical failures were reported on average for 1.4% of virtual consultations in primary care. Using carbon footprint estimates, we calculate that the use of telemedicine in primary care services can potentially provide a net decrease in carbon footprint of 0.592 kgCO2/patient/year. When follow-up rates are taken into account, we estimate that virtual consultations reduce the carbon footprint of primary care services by 2.3 times, and of secondary care referrals by 2.2 times. No major concerns regarding quality of care or patient satisfaction were identified. Five of the seven studies that addressed cost-effectiveness reported increased savings. Conclusions: Telemedicine provides quality, cost-effective, and environmentally sustainable care for patients in primary care, with inconclusive evidence regarding the rates of subsequent healthcare utilization. The evidence is limited by heterogeneous, small-scale studies and a lack of prospective comparative studies. Further research to identify the most appropriate telemedicine modality for different patient populations, clinical presentations and service provision (e.g. used to follow up patients instead of for initial diagnosis), as well as further education for patients and providers alike on how to make the best use of this service, is expected to improve outcomes and influence practice. Keywords: telemedicine, healthcare utilisation, digital interventions, environmental impact, sustainable healthcare
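The way follow-up rates feed into such carbon estimates can be made explicit: a virtual consultation "costs" its own footprint plus the probability-weighted footprint of the face-to-face visit it may still trigger. The sketch below shows this accounting; the per-consultation footprint values are illustrative assumptions, not figures taken from the review (only the 28.4% follow-up rate is quoted from it).

```python
def net_footprint_saving(f2f_kg, virtual_kg, follow_up_rate):
    """Carbon footprint of one face-to-face consultation vs. one virtual
    consultation that triggers a follow-up F2F visit with some probability.
    Returns (net saving per consultation, reduction factor)."""
    effective_virtual = virtual_kg + follow_up_rate * f2f_kg
    return f2f_kg - effective_virtual, f2f_kg / effective_virtual

# Illustrative inputs: patient travel dominates the F2F footprint, while the
# virtual visit carries only device/network energy (assumed values).
saving, factor = net_footprint_saving(f2f_kg=1.0, virtual_kg=0.15, follow_up_rate=0.284)
print(f"net saving {saving:.2f} kgCO2 per consultation, reduction factor {factor:.1f}x")
```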
Procedia PDF Downloads 62
277 Life Cycle Assessment of Today's and Future Electricity Grid Mixes of EU27
Authors: Johannes Gantner, Michael Held, Rafael Horn, Matthias Fischer
Abstract:
At the United Nations Climate Change Conference 2015, a global agreement on the reduction of climate change was achieved, stating CO₂ reduction targets for all countries. For instance, the EU targets a reduction of 40 percent in emissions by 2030 compared to 1990. In order to achieve this ambitious goal, the environmental performance of the different European electricity grid mixes is crucial. First, electricity is directly needed for everyone’s daily life (e.g. heating, plug loads, mobility), and therefore a reduction of the environmental impacts of the electricity grid mix reduces the overall environmental impacts of a country. Secondly, the manufacturing of every product depends on electricity, so a reduction of the environmental impacts of the electricity mix results in a further decrease in the environmental impacts of every product. As a result, the implementation of the two-degree goal depends strongly on the decarbonization of the European electricity mixes. Currently, the production of electricity in the EU27 is based on fossil fuels and therefore bears a high GWP impact per kWh. Due to the importance of the environmental impacts of the electricity mix, not only today but also in the future, time-dynamic Life Cycle Assessment models for all EU27 countries were set up within the European research projects CommONEnergy and Senskin. Methodologically, a combination of scenario modeling and life cycle assessment according to ISO 14040 and ISO 14044 was applied. Based on EU27 trends regarding energy, transport, and buildings, the different national electricity mixes were investigated, taking into account future changes such as the amount of electricity generated in each country, changes in electricity carriers, the COP of the power plants, distribution losses, and imports and exports. As results, time-dynamic environmental profiles for the electricity mixes of each country and for Europe overall were set up. Thereby, for each European country, the decarbonization strategies of the electricity mix are critically investigated in order to identify decisions that can lead to negative environmental effects, for instance on the reduction of the global warming potential of the electricity mix. For example, the withdrawal of the nuclear energy program in Germany, with the missing energy compensated at the same time by non-renewable energy carriers such as lignite and natural gas, results in an increase in the global warming potential of the electricity grid mix. After just two years, this increase is countervailed by the higher share of renewable energy carriers such as wind power and photovoltaics. Finally, as an outlook, a first qualitative picture is provided, illustrating from an environmental perspective which country has the highest potential for low-carbon electricity production and therefore how investments in a connected European electricity grid could decrease the environmental impacts of the electricity mix in Europe. Keywords: electricity grid mixes, EU27 countries, environmental impacts, future trends, life cycle assessment, scenario analysis
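At its core, the GWP of a national grid mix in such a model is the generation-share-weighted average of the emission factors of the individual carriers, re-evaluated for every scenario year. The sketch below shows that aggregation step; the emission factors and the two scenario mixes are illustrative assumptions, not data from the CommONEnergy or Senskin projects.

```python
# Emission factors in kg CO2-eq per kWh (illustrative literature-range values,
# not the projects' LCI data) and two hypothetical generation-share scenarios.
EMISSION_FACTORS = {"lignite": 1.1, "hard_coal": 0.9, "natural_gas": 0.45,
                    "nuclear": 0.012, "wind": 0.011, "pv": 0.045, "hydro": 0.005}

def grid_mix_gwp(shares):
    """GWP of one kWh of grid electricity as the share-weighted average of the
    generation technologies (shares must sum to 1)."""
    return sum(share * EMISSION_FACTORS[tech] for tech, share in shares.items())

mix_2015 = {"lignite": 0.25, "hard_coal": 0.18, "natural_gas": 0.12,
            "nuclear": 0.15, "wind": 0.12, "pv": 0.06, "hydro": 0.12}
mix_2030 = {"lignite": 0.08, "hard_coal": 0.06, "natural_gas": 0.16,
            "nuclear": 0.10, "wind": 0.32, "pv": 0.16, "hydro": 0.12}
for year, mix in (("2015", mix_2015), ("2030", mix_2030)):
    print(f"{year}: {grid_mix_gwp(mix):.3f} kg CO2-eq/kWh")
```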
Procedia PDF Downloads 187
276 Retrofitting Insulation to Historic Masonry Buildings: Improving Thermal Performance and Maintaining Moisture Movement to Minimize Condensation Risk
Authors: Moses Jenkins
Abstract:
Much of the focus when improving energy efficiency in buildings falls on the raising of standards within new-build dwellings. However, as a significant proportion of the building stock across Europe is of historic or traditional construction, there is also a pressing need to improve the thermal performance of structures of this sort. On average, around twenty percent of buildings across Europe are of historic masonry construction. In order to meet carbon reduction targets, these buildings will need to be retrofitted with insulation to improve their thermal performance. At the same time, there is also a need to balance this with maintaining the ability of historic masonry construction to allow moisture movement through the building fabric. This moisture transfer, often referred to as 'breathable construction', is critical to the success, or otherwise, of retrofit projects. The significance of this paper is to demonstrate that substantial thermal improvements can be made to historic buildings whilst avoiding damage to the building fabric through surface or interstitial condensation. The paper will analyze the results of a wide range of retrofit measures installed in twenty buildings as part of Historic Environment Scotland's technical research program. This program has been active for fourteen years and has seen interventions across a wide range of building types, using over thirty different methods and materials to improve the thermal performance of historic buildings. The first part of the paper will present the range of interventions which have been made. This includes insulating mass masonry walls both internally and externally, warm and cold roof insulation, and improvements to floors. The second part of the paper will present the results of monitoring work which has taken place in these buildings after being retrofitted. This will cover both the thermal improvement, expressed as a U-value as defined in BS EN ISO 7345:1987, and also, crucially, the results of moisture monitoring, both on the surface of masonry walls following retrofit and within the masonry itself. The aim of this moisture monitoring is to establish whether there are any problems with interstitial condensation. The monitoring utilizes Interstitial Hygrothermal Gradient Monitoring (IHGM) and similar methods to establish relative humidity on the surface of and within the masonry. The results of the testing are clear and significant for retrofit projects across Europe. Where a building is of historic construction, the use of wall, roof and floor insulation materials which are permeable to moisture vapor provides significant thermal improvements (achieving a U-value as low as 0.2 W/m²K) whilst avoiding problems of both surface and interstitial condensation. As the evidence presented in the paper comes from monitoring work in buildings rather than theoretical modeling, there are many important lessons which can be learned and which can inform retrofit projects to historic buildings throughout Europe. Keywords: insulation, condensation, masonry, historic
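The U-values quoted above are the standard steady-state quantity U = 1 / (Rsi + sum(d/lambda) + Rse), so the effect of adding a vapour-permeable insulation layer to a solid masonry wall is easy to illustrate. The sketch below does exactly that; the wall build-up and material conductivities are illustrative assumptions, not the monitored walls from the research programme.

```python
# U-value of a retrofitted solid masonry wall from layer resistances
# (U = 1 / (Rsi + sum(d/lambda) + Rse), the usual steady-state calculation).
# Layer thicknesses and conductivities are assumed, illustrative values.
RSI, RSE = 0.13, 0.04  # internal/external surface resistances [m2K/W]

def u_value(layers):
    """layers: list of (thickness_m, conductivity_W_per_mK) from inside out."""
    r_total = RSI + sum(d / lam for d, lam in layers) + RSE
    return 1.0 / r_total

solid_wall = [(0.60, 1.2)]                  # 600 mm sandstone
retrofitted = [(0.15, 0.038), (0.60, 1.2)]  # + 150 mm vapour-open wood-fibre board
print(f"uninsulated wall: U = {u_value(solid_wall):.2f} W/m2K")
print(f"retrofitted wall: U = {u_value(retrofitted):.2f} W/m2K")
```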
Procedia PDF Downloads 179