Search results for: mechanical modeling
743 Structural Design of a Relief Valve Considering Strength
Authors: Nam-Hee Kim, Jang-Hoon Ko, Kwon-Hee Lee
Abstract:
A relief valve is a mechanical element that maintains safety by controlling high pressure. Usually, the high pressure is relieved by using the spring force and letting the fluid flow out of the system through another path. When normal pressure is restored, the relief valve returns to its initial state. The relief valve in this study is applied to pressure vessels, evaporators, piping lines, etc. The relief valve should be designed for smooth operation and should satisfy the structural safety requirements under operating conditions. In general, the structural analysis is performed after the fluid flow analysis. In this process, FSI (Fluid-Structure Interaction) is required to transfer the force obtained as output of the flow analysis. Firstly, this study predicts the velocity profile and the pressure distribution in the given system. The assumptions for the flow analysis are as follows: • The flow is steady-state and three-dimensional. • The fluid is Newtonian and incompressible. • The walls of the pipe and valve are smooth. The flow characteristics in this relief valve do not induce any problem. The commercial software ANSYS/CFX is utilized for the flow analysis. On the other hand, very high pressure may cause structural problems due to severe stress. The relief valve consists of a body, bonnet, guide, piston, and nozzle, and its material is stainless steel. To investigate its structural safety, the worst-case loading is taken as a pressure of 700 bar. This load, which is greater than the load obtained from the FSI, is applied to the inside of the valve. The maximum stress is calculated as 378 MPa by finite element analysis. This value is greater than the allowable value, so an alternative design is suggested to improve the structural performance through a case study. We found that the design variable most sensitive to the strength is the shape of the nozzle; the case study therefore varies the size of the nozzle. Finally, it can be seen that the suggested design satisfies the structural design requirement. The FE analysis is performed using the commercial software ANSYS/Workbench.
Keywords: relief valve, structural analysis, structural design, strength, safety factor
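As a hedged illustration of the strength check described above: the 378 MPa stress comes from the abstract, while the yield strength and required safety factor below are generic stainless-steel assumptions, not the authors' values.

```python
# Strength check sketch: compare the FE maximum stress against an allowable
# stress derived from yield strength and a required safety factor.
# Yield strength and required factor are assumed values for illustration.
max_stress_mpa = 378.0          # FE result reported in the abstract
yield_strength_mpa = 520.0      # assumed stainless-steel yield strength
required_safety_factor = 1.5    # assumed design requirement

allowable_mpa = yield_strength_mpa / required_safety_factor
print(f"allowable = {allowable_mpa:.0f} MPa")
print("PASS" if max_stress_mpa <= allowable_mpa else "FAIL: redesign nozzle")
```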
Procedia PDF Downloads 303
742 Using the Structural Equation Model to Explain the Effect of Supervisory Practices on Regulatory Density
Authors: Jill Round
Abstract:
In the economic system, the financial sector plays a crucial role as an intermediary between market participants, other financial institutions, and customers. Financial institutions such as banks have to make decisions that satisfy the demands of all participants by keeping abreast of regulatory change. In recent years, progress has been made on frameworks and on the development of rules, standards, and processes to manage risks in the banking sector. The increasing focus of regulators and policymakers on risk management, corporate governance, and organizational culture is of special interest, as it requires a well-resourced risk controlling function, compliance function, and internal audit function. In the past years, the relevance of these functions, which make up the so-called Three Lines of Defense, has moved from the backroom to the boardroom. The implementation of the model can vary based on organizational characteristics. Due to intense regulatory requirements, organizations operating in the financial sector have more mature models; in less regulated industries there is more cloudiness about which tasks are allocated where. All parties strive to achieve their objectives through the effective management of risks and serve the same stakeholders. Today, the Three Lines of Defense model is used throughout the world. The research looks at trends and emerging issues in the professions of the Three Lines of Defense within the banking sector. The answers are believed to help explain the increasing regulatory requirements for the banking sector. As the number of supervisory practices increases, risk management requirements intensify and demand more regulatory compliance at the same time. Structural Equation Modeling (SEM) is applied to surveys conducted in the research field. It aims to describe (i) the theoretical model regarding the applicable linear relationships, (ii) the causal relationships between multiple predictors (exogenous variables) and multiple dependent variables (endogenous variables), (iii) the unobservable latent variables, and (iv) the measurement errors. The surveys conducted in the research field suggest that the observable variables are caused by various latent variables. The SEM consists of (1) the measurement model and (2) the structural model. There is a detectable correlation in the cause-effect relationship between the performed supervisory practices and the increasing scope of regulation: supervisory practices reinforce regulatory density. In the past, controls were put in place after supervisory practices were conducted or incidents occurred. In further research, it is of interest to examine whether risk management is proactive, reactive to incidents and supervisory practices, or both at the same time.
Keywords: risk management, structural equation model, supervisory practice, three lines of defense
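As a hedged illustration of the two-part SEM described above (a measurement model plus a structural model), the sketch below uses the Python package semopy with its lavaan-style syntax. The survey items (q1–q6), the latent factors, and the data file name are illustrative assumptions, not the study's actual instrument.

```python
# Minimal SEM sketch with semopy; latent factors and items are assumed.
import pandas as pd
from semopy import Model

DESC = """
# Measurement model: latent factors measured by observed survey items
SupervisoryPractice =~ q1 + q2 + q3
RegulatoryDensity   =~ q4 + q5 + q6
# Structural model: supervisory practices predict regulatory density
RegulatoryDensity ~ SupervisoryPractice
"""

data = pd.read_csv("survey_responses.csv")  # hypothetical survey data
model = Model(DESC)
model.fit(data)
print(model.inspect())  # path estimates, standard errors, p-values
```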
Procedia PDF Downloads 224
741 Adsorption of Chlorinated Pesticides in Drinking Water by Carbon Nanotubes
Authors: Hacer Sule Gonul, Vedat Uyak
Abstract:
Intensive use of pesticides in agricultural activity causes these compounds to mix into water sources with surface flow. Especially after the 1970s, a number of limitations were imposed on the use of chlorinated pesticides, which carry a carcinogenic risk potential, and regulatory limits were established. The discharge of these chlorinated pesticides into water resources, their transport in the water and land environment, and their accumulation in the human body through the food chain raise serious health concerns. Carbon nanotubes (CNTs) have attracted considerable attention because of their excellent mechanical, electrical, and environmental characteristics. Due to their highly hydrophobic surfaces, CNT particles play a critical role in the removal of water contaminants such as natural organic matter, pesticides, and phenolic compounds from water sources. The health concerns associated with chlorinated pesticides require the removal of such contaminants from the aquatic environment. Although the use of aldrin and atrazine was restricted in our country, illegal entry and the widespread use of such chemicals in agricultural areas increase the concentrations of these chemicals in the water supply. In this study, the removal of the chlorinated pesticides aldrin and atrazine from drinking water by carbon nanotube adsorption was investigated. Two different types of CNT were used: single-wall (SWCNT) and multi-wall (MWCNT) carbon nanotubes. Adsorption isotherms were determined within the scope of the work, and the parameters affecting the adsorption of chlorinated pesticides in water were considered: pH, contact time, CNT type, CNT dose, and initial pesticide concentration. As a result, under neutral pH conditions, the adsorption capacities of MWCNT for atrazine and aldrin were determined as 2.24 µg/mg and 3.84 µg/mg, respectively. On the other hand, the adsorption capacities of SWCNT for aldrin and atrazine were determined as 3.91 µg/mg and 3.92 µg/mg, respectively. Overall, SWCNT particles provided the superior removal performance for each type of pesticide.
Keywords: pesticide, drinking water, carbon nanotube, adsorption
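Adsorption capacities like those above are commonly obtained by fitting an isotherm to batch equilibrium data. The sketch below fits a Langmuir isotherm with SciPy; the concentration and uptake arrays are made-up placeholder values, not the study's measurements.

```python
# Fit a Langmuir isotherm q = qmax*K*Ce/(1 + K*Ce) to batch adsorption data.
# Ce: equilibrium pesticide concentration (µg/L); q: uptake (µg/mg CNT).
import numpy as np
from scipy.optimize import curve_fit

def langmuir(Ce, qmax, K):
    return qmax * K * Ce / (1.0 + K * Ce)

Ce = np.array([5.0, 10.0, 20.0, 40.0, 80.0])  # placeholder data
q = np.array([0.9, 1.6, 2.4, 3.1, 3.6])       # placeholder data

(qmax, K), _ = curve_fit(langmuir, Ce, q, p0=[4.0, 0.05])
print(f"qmax = {qmax:.2f} µg/mg, K = {K:.3f} L/µg")
```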
Procedia PDF Downloads 171
740 Solid State Drive End to End Reliability Prediction, Characterization and Control
Authors: Mohd Azman Abdul Latif, Erwan Basiron
Abstract:
A flaw or drift from expected operational performance in one component (NAND, PMIC, controller, DRAM, etc.) may affect the reliability of the entire Solid State Drive (SSD) system. Therefore, it is important to ensure the required quality of each individual component through qualification testing specified using standards or user requirements. Qualification testing is time-consuming and comes at a substantial cost for product manufacturers. A highly technical team drawn from all the key stakeholders embarks on reliability prediction from the beginning of new product development, identifies critical-to-reliability parameters, performs full-blown characterization to embed margin into product reliability, and establishes controls to ensure that the product reliability is sustainable in mass production. The paper discusses a comprehensive development framework, covering the SSD end to end from design to assembly, in-line inspection, and in-line testing, that is able to predict and validate product reliability at the early stage of new product development. During the design stage, the SSD goes through an intense reliability margin investigation focused on assembly process attributes, process equipment control, and in-process metrology, while also comprehending the forward-looking product roadmap. Once these pillars are completed, the next step is to perform process characterization and build up a reliability prediction model. Next, for the design validation process, the reliability prediction tool, specifically a solder joint simulator, is established. The SSDs are stratified into non-operating and operating tests focused on solder joint reliability and connectivity/component latent failures, with prevention through design intervention and containment through the Temperature Cycle Test (TCT). Some of the SSDs are subjected to physical solder joint analysis, namely Dye and Pry (DP) and cross-section analysis. The results are fed back to the simulation team for any corrective actions required to further improve the design. Once the SSD is validated and proven working, it is subjected to the monitoring phase, whereby the Design for Assembly (DFA) rules are updated. At this stage, the design changes, process, and equipment parameters are under control. Predictable product reliability at early product development enables on-time qualification sample delivery to the customer, optimizes product development validation and development resources, and avoids forced late investment to bandage end-of-life product failures. Understanding the critical-to-reliability parameters earlier allows focus on increasing the product margin, which increases customer confidence in product reliability.
Keywords: e2e reliability prediction, SSD, TCT, solder joint reliability, NUDD, connectivity issues, qualifications, characterization and control
Procedia PDF Downloads 174
739 The Electrophysiology Study Results in Patients with Guillain Barre Syndrome (GBS): A Retrospective Study in a Tertiary Hospital in Cebu City, Philippines
Authors: Dyna Ann C. Sevilles, Noel J. Belonguel, Jarungchai Anton S. Vatanagul, Mary Jeanne O. Flordelis, Grace G. Anota
Abstract:
Guillain Barre syndrome (GBS) is an acute inflammatory polyradiculoneuropathy causing progressive symmetrical weakness, which can be debilitating to the patient. Early diagnosis is important, especially in the acute phase, when treatment favors a good outcome and reduces the need for mechanical ventilation. Electrodiagnostic studies aid in the evaluation of patients suspected of GBS. However, the characteristic electrical changes may not be evident until after several weeks; thus, studies performed early in the course may give unclear results. The aim of this study is to relate the symptom onset of patients diagnosed with GBS to the EMG NCV results and determine the earliest time at which there are evident findings supporting the diagnosis. This is a retrospective descriptive chart review study involving patients ≥18 years of age with a chart diagnosis of GBS in a tertiary hospital in Cebu City, Philippines, from January 2000 to July 2014. Twenty patients showed electrodiagnostic findings suggestive of GBS. The mean day of illness on which EMG NCV was carried out was day 7. The earliest study with suggestive findings was done on day 2 (10%) of illness. Moreover, the highest frequency of positive results occurred on day 3 (20%) of illness. Based on the Dutch Guillain Barre Study Group criteria, the most frequent variables noted were prolonged distal motor latency in both the median and ulnar nerves (65%) and in both the peroneal and tibial nerves (71%), and reduced CMAP in both the median and ulnar nerves (65%) and in both the tibial and peroneal nerves (71%). The EMG NCV findings showed a majority of the demyelinating type (59%). Electrodiagnostic studies help the physician diagnose and treat the disease in its early stage. Based on this study, neurophysiologic evidence of GBS can be seen as early as day 2 of clinical illness.
Keywords: acute inflammatory demyelinating polyneuropathy, electrophysiologic study, EMG NCV, Guillain Barre syndrome
Procedia PDF Downloads 287
738 Exploring Tweeters’ Concerns and Opinions about FIFA Arab Cup 2021: An Investigation Study
Authors: Md. Rafiul Biswas, Uzair Shah, Mohammad Alkayal, Zubair Shah, Othman Althawadi, Kamila Swart
Abstract:
Background: Social media platforms play a significant role in the mediated consumption of sport, especially so for sport mega-events. The characteristics of Twitter data (e.g., user mentions, retweets, likes, #hashtags) bring users together in one arena and spread information widely and quickly. Analysis of Twitter data can reflect public attitudes, behavior, and sentiment toward a specific event on a larger scale than traditional surveys. Qatar is going to be the first Arab country to host the mega sports event FIFA World Cup 2022 (Q22), and it hosted the FIFA Arab Cup 2021 (FAC21) as a preparation for the mega-event. Objectives: This study investigates public sentiments and experiences about FAC21 and provides insight to enhance the public experience for the upcoming Q22. Method: FAC21-related tweets were downloaded using the Twitter Academic Research API between 01 October 2021 and 18 February 2022. Tweets were divided into three periods: before FAC21, T1 (01 Oct 2021 to 29 Nov 2021); during FAC21, T2 (30 Nov 2021 to 18 Dec 2021); and after FAC21, T3 (19 Dec 2021 to 18 Feb 2022). The collected tweets were preprocessed in several steps to prepare them for analysis: (1) duplicates and retweets were removed; (2) emojis, punctuation, and stop words were removed; (3) tweets were normalized using word lemmatization. Then, rule-based classification was applied to remove irrelevant tweets. Next, the twitter-XLM-roBERTa-base model from Hugging Face was applied to identify the sentiment in the tweets. Further, state-of-the-art BERTopic modeling will be applied to identify trending topics over the different periods. Results: We downloaded 8,669,875 tweets posted by 2,728,220 unique users in different languages. Of those, 819,813 unique English tweets were selected for this study. After splitting into the three periods, 541,630, 138,876, and 139,307 tweets were from T1, T2, and T3, respectively. Most of the sentiments were neutral, around 60% in each period. However, the rate of negative sentiment (23%) was high compared to positive sentiment (18%). The analysis indicates negative concerns about FAC21; therefore, we will apply BERTopic to identify public concerns. This study will permit the investigation of people's expectations before FAC21 (e.g., stadiums, transportation, accommodation, visas, tickets, travel, and other facilities) and ascertain whether these were met. Moreover, it will highlight public expectations and concerns. The findings of this study can assist the event organizers in enhancing implementation plans for Q22. Furthermore, this study can support policymakers in aligning strategies and plans to leverage outstanding outcomes.
Keywords: FIFA Arab Cup, FIFA, Twitter, machine learning
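The sentiment step described above can be reproduced, in outline, with the Hugging Face transformers pipeline. The checkpoint name below is the public cardiffnlp release of twitter-XLM-roBERTa-base fine-tuned for sentiment; whether it is the exact checkpoint used in the study is an assumption, and the example tweets are invented.

```python
# Multilingual tweet sentiment sketch with twitter-XLM-roBERTa-base.
from transformers import pipeline

classifier = pipeline(
    "sentiment-analysis",
    model="cardiffnlp/twitter-xlm-roberta-base-sentiment",
)

tweets = [
    "Great atmosphere at the FIFA Arab Cup tonight!",   # invented examples
    "Queues for stadium transport were far too long.",
]
for tweet, result in zip(tweets, classifier(tweets)):
    print(f"{result['label']:>8}  {result['score']:.2f}  {tweet}")
```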
Procedia PDF Downloads 100
737 New Hardy Type Inequalities of Two-Dimensional on Time Scales via Steklov Operator
Authors: Wedad Albalawi
Abstract:
Mathematical inequalities are at the core of mathematical analysis and are used in almost all branches of mathematics, as well as in various areas of science and engineering. The inequalities of Hardy, Littlewood, and Polya formed the first significant systematic treatment of the subject; that work presented fundamental ideas, results, and techniques, and it has had much influence on research in various branches of analysis. Since 1934, various inequalities have been produced and studied in the literature. Furthermore, some inequalities have been formulated for particular operators: in 1989, weighted Hardy inequalities were obtained for integration operators. Weighted estimates were then obtained for Steklov operators, which were used in the solution of the Cauchy problem for the wave equation. These were improved upon in 2011 to include the boundedness of integral operators from the weighted Sobolev space to the weighted Lebesgue space. Some inequalities have been demonstrated and improved using the Hardy-Steklov operator. Recently, many integral inequalities have been improved by differential operators. The Hardy inequality has been one of the tools used to study the integrability of solutions of differential equations. Dynamic inequalities of Hardy and Copson type have since been extended and improved by various integral operators. These inequalities are interesting to apply in different fields of mathematics (function spaces, partial differential equations, mathematical modeling). Some inequalities involving the Copson and Hardy inequalities have appeared on time scales, yielding new special versions of them. A time scale is an arbitrary nonempty closed subset of the real numbers. Dynamic inequalities on time scales have received a lot of attention in the literature and have become a major field in pure and applied mathematics. There are many applications of dynamic equations on time scales to quantum mechanics, electrical engineering, neural networks, heat transfer, combinatorics, and population dynamics. This study focuses on Hardy and Copson inequalities, using the Steklov operator on time scales in double integrals to obtain special cases of time-scale inequalities of Hardy and Copson in higher dimensions. The advantage of this study is that it uses the one-dimensional classical Hardy inequality to obtain higher-dimensional time-scale versions that will be applied in the solution of the Cauchy problem for the wave equation. In addition, the obtained inequalities have various applications involving discontinuous domains, such as bug populations, phytoremediation of metals, wound healing, and maximization problems. The proofs can be carried out by introducing restrictions on the operator in several cases. Concepts from time-scale calculus will be used, which allow many problems from the theories of differential and difference equations to be unified and extended. In addition, the chain rule, some properties of multiple integrals on time scales, Fubini-type theorems, and Hölder's inequality will be used.
Keywords: time scales, inequality of Hardy, inequality of Copson, Steklov operator
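For reference, the one-dimensional classical Hardy inequality that the study builds on can be stated as follows; this is the standard textbook formulation, not the weighted time-scale version developed in the paper.

```latex
\[
\int_{0}^{\infty} \left( \frac{1}{x} \int_{0}^{x} f(t)\,dt \right)^{\!p} dx
\;\le\; \left( \frac{p}{p-1} \right)^{\!p} \int_{0}^{\infty} f(x)^{p}\,dx,
\qquad p > 1,\; f \ge 0,
\]
where the constant $\left(p/(p-1)\right)^{p}$ is sharp.
```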
Procedia PDF Downloads 95
736 The Evaluation of the Cognitive Training Program for Older Adults with Mild Cognitive Impairment: Protocol of a Randomized Controlled Study
Authors: Hui-Ling Yang, Kuei-Ru Chou
Abstract:
Background: Studies show that cognitive training can effectively delay cognitive decline. However, there are several gaps in the previous studies of cognitive training in mild cognitive impairment: (1) previous studies enrolled mostly healthy older adults, with few recruiting older adults with cognitive impairment; (2) they had limited generalizability and lacked long-term follow-up data and measurements of the impact on activities of daily living; moreover, only 37% were randomized controlled trials (RCTs); (3) little cognitive training has been developed specifically for mild cognitive impairment. Objective: This study sought to investigate the changes in cognitive function, activities of daily living, and degree of depressive symptoms in older adults with mild cognitive impairment after cognitive training. Methods: This double-blind randomized controlled study has a two-arm parallel-group design. Study subjects are older adults diagnosed with mild cognitive impairment in residential care facilities. 124 subjects will be randomized by permuted block randomization into an intervention group (cognitive training, CT) or an active control group (passive information activities, PIA). Therapeutic adherence, sample attrition rate, medication compliance, and adverse events will be monitored during the study period, and missing data analyzed using intention-to-treat (ITT) analysis. Results: Training sessions of the CT group are 45 minutes/day, 3 days/week, for 12 weeks (36 sessions in total). The training of the active control group follows the same schedule (45 min/day, 3 days/week, for 12 weeks, for a total of 36 sessions). The primary outcome is cognitive function, measured using the Mini-Mental Status Examination (MMSE); the secondary outcome indicators are (1) activities of daily living, using Lawton's Instrumental Activities of Daily Living (IADL) scale, and (2) degree of depressive symptoms, using the Geriatric Depression Scale-Short Form (GDS-SF). Latent growth curve modeling will be used in the repeated-measures statistical analysis to estimate the trajectory of improvement, by examining the rate and pattern of change in cognitive function, activities of daily living, and degree of depressive symptoms, and intervention efficacy over time will be evaluated immediately post-test and at 3 months, 6 months, and one year after the last session. Conclusions: We constructed a rigorous CT program adhering to the Consolidated Standards of Reporting Trials (CONSORT) reporting guidelines. We expect to determine the improvement in cognitive function, activities of daily living, and degree of depressive symptoms of older adults with mild cognitive impairment after using the CT.
Keywords: mild cognitive impairment, cognitive training, randomized controlled study
Procedia PDF Downloads 448
735 Effects of Temperature and the Use of Bacteriocins on Cross-Contamination from Animal Source Food Processing: A Mathematical Model
Authors: Benjamin Castillo, Luis Pastenes, Fernando Cerdova
Abstract:
The contamination of food by microbial agents is a common problem in the industry, especially in the elaboration of animal source products. Incorrect manipulation of the machinery or of the raw materials can cause a decrease in production or an epidemiological outbreak due to intoxication. In order to improve food product quality, different methods have been used to reduce or, at least, to slow down the growth of pathogens, especially spoilage, infectious, or toxigenic bacteria. These methods are usually carried out at low temperatures and short processing times (abiotic agents), along with the application of antibacterial substances such as bacteriocins (biotic agents), in a controlled and efficient way that fulfills the purpose of bacterial control without damaging the final product. Therefore, the objective of the present study is to design a secondary mathematical model that allows the prediction of the impact of both the biotic and abiotic factors associated with animal source food processing. In order to accomplish this objective, the authors propose a three-dimensional differential equation model whose components are bacterial growth; the release, production, and artificial incorporation of bacteriocins; and changes in the pH level of the medium. These three dimensions are constantly influenced by the temperature of the medium. Secondly, this model is adapted to an idealized situation of cross-contamination in animal source food processing, with the study agents being both the animal product and the contact surface. Thirdly, the stochastic simulations and the parametric sensitivity analysis are compared with reference data. The main results obtained from the analysis and simulations of the mathematical model were that, although bacterial growth can be stopped at lower temperatures, even lower ones are needed to eradicate it; however, this can be not only expensive but also counterproductive in terms of the quality of the raw materials, while, on the other hand, higher temperatures accelerate bacterial growth. In other respects, the use of bacteriocins is an effective alternative in the short and medium term. Moreover, a low pH level is an indicator of bacterial growth, since many spoilage bacteria are lactic acid bacteria. Lastly, the processing times are a secondary agent of concern when the rest of the aforementioned agents are under control. Our main conclusion is that adapting a mathematical model to the context of the industrial process can generate new tools that predict bacterial contamination, the impact of bacterial inhibition, and processing method times. In addition, the proposed mathematical modeling is a logistic input of broad application, which can be replicated for non-meat food products, other pathogens, or even contamination by cross-contact of allergenic foods.
Keywords: bacteriocins, cross-contamination, mathematical model, temperature
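The abstract does not state the equations explicitly, so the sketch below is only a hedged illustration of how a three-state model of this kind (bacteria, bacteriocin, pH, with temperature-dependent rates) might be integrated numerically; every functional form and parameter value is an assumption for demonstration.

```python
# Illustrative 3-state model: bacteria N, bacteriocin B, pH of the medium.
# All functional forms and parameter values are assumptions.
import numpy as np
from scipy.integrate import solve_ivp

def rhs(t, y, T):
    N, B, pH = y
    mu = 0.5 * np.exp(0.08 * (T - 20.0))        # temperature-dependent growth
    dN = mu * N * (1 - N / 1e8) - 0.9 * B * N   # logistic growth minus kill term
    dB = 1e-9 * N - 0.05 * B                    # bacteriocin production and decay
    dpH = -2e-9 * N * (pH - 4.5)                # acidification toward ~4.5
    return [dN, dB, dpH]

sol = solve_ivp(rhs, (0.0, 48.0), [1e3, 0.0, 6.5], args=(15.0,))
print(f"N(48 h) = {sol.y[0, -1]:.3g} CFU/mL, pH = {sol.y[2, -1]:.2f}")
```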
Procedia PDF Downloads 144
734 A Comprehensive Finite Element Model for Incremental Launching of Bridges: Optimizing Construction and Design
Authors: Mohammad Bagher Anvari, Arman Shojaei
Abstract:
Incremental launching, a widely adopted bridge erection technique, offers numerous advantages for bridge designers. However, accurately simulating and modeling the behavior of the bridge during each step of the launching process proves tedious and time-consuming. The perpetual variation of internal forces within the deck during construction stages adds complexity, exacerbated further by other load cases such as support settlements and temperature effects. As a result, there is an urgent need for a reliable, simple, economical, and fast algorithmic solution to model bridge construction stages effectively. This paper presents a novel Finite Element (FE) model that focuses on the static behavior of bridges during the launching process. Additionally, a simple method is introduced to normalize all quantities in the problem. The new FE model overcomes the limitations of previous models, enabling the simulation of all stages of launching, which conventional models fail to achieve due to their underlying assumptions. By leveraging the results obtained from the new FE model, this study proposes solutions to improve the accuracy of conventional models, particularly for the initial stages of bridge construction, which have been neglected in previous research. The research highlights the critical role played by the first span of the bridge during the initial stages, a factor often overlooked in existing studies. To address this oversight, a new and simplified model, termed the "semi-infinite beam" model, is developed. By utilizing this model alongside a simple optimization approach, optimal values for the launching nose specifications are derived. The practical applications of this study extend to optimizing the nose-deck system of incrementally launched bridges, providing valuable insights for practical use. In conclusion, this paper introduces a comprehensive FE model for the static behavior of bridges during incremental launching, addresses the limitations found in previous approaches, and offers practical solutions to enhance accuracy. Through the developed "semi-infinite beam" model and the optimization approach, optimal specifications for launching nose configurations are determined, benefiting both the construction industry and bridge designers.
Keywords: incremental launching, bridge construction, finite element model, optimization
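As background to why the launching nose matters (a standard result for incrementally launched bridges, stated here for context rather than taken from the paper): before the nose reaches the next pier, the front cantilever of deck length $\ell$ with self-weight $q$ per unit length produces a hogging moment at the front support of approximately

```latex
\[
M \;\approx\; \frac{q\,\ell^{2}}{2},
\]
```

which grows quadratically with the launched length. A light, stiff steel nose replaces part of the concrete cantilever and caps this moment, which is why the nose length, weight, and stiffness are natural optimization variables for the nose-deck system.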
Procedia PDF Downloads 102
733 Dynamic Behavior of the Nanostructure of Load-Bearing Biological Materials
Authors: Mahan Qwamizadeh, Kun Zhou, Zuoqi Zhang, Yong Wei Zhang
Abstract:
Typical load-bearing biological materials, like bone, mineralized tendon, and shell, are biocomposites made from both organic (collagen) and inorganic (biomineral) materials. This amazing class of materials, with intrinsic, internally designed hierarchical structures, shows mechanical properties superior to those of the weak components from which it is formed. Extensive investigations concentrating on static loading conditions have been carried out to study the failure of biological materials. However, most of the damage and failure mechanisms in load-bearing biological materials occur when their structures are exposed to dynamic loading conditions. The main question to be answered here is: what is the relation between the layout and architecture of load-bearing biological materials and their dynamic behavior? In this work, a staggered model has been developed based on the structure of natural materials at the nanoscale, and Finite Element Analysis (FEA) has been used to study the dynamic behavior of the structure of load-bearing biological materials and to answer why the staggered arrangement has been selected by nature for the nanocomposite structure of most biological materials. The results showed that staggered structures attenuate stress waves more efficiently than layered structures. Furthermore, such a staggered architecture effectively utilizes the capacity of the biostructure to resist both normal and shear loads. In this work, the geometrical parameters of the model, such as the thickness and aspect ratio of the mineral inclusions, were selected from the typical range of experimentally observed feature sizes and layout dimensions of biological materials such as bone and mineralized tendon. Furthermore, the numerical results were validated against existing theoretical solutions. The findings of the present work emphasize the significant effects of dynamic behavior on the natural evolution of load-bearing biological materials and can help scientists design bioinspired materials in the laboratory.
Keywords: load-bearing biological materials, nanostructure, staggered structure, stress wave decay
Procedia PDF Downloads 458
732 A Mixed Method Approach for Modeling Entry Capacity at Rotary Intersections
Authors: Antonio Pratelli, Lorenzo Brocchini, Reginald Roy Souleyrette
Abstract:
A rotary is a traffic circle intersection where vehicles entering from the branches give priority to the circulating flow. Vehicles entering the intersection from converging roads move around the central island and weave out of the circle into their desired exiting branch. This creates merging and diverging conflicts between any entry and the successive exit, i.e., a section. Therefore, rotary capacity models are usually based on the weaving of the different movements in each section of the circle, and a maximum flow rate value is then related to each weaving section of the rotary. Nevertheless, the single-section capacity value does not yield the typical performance characteristics of the intersection, such as the average entry delay, which is directly linked to its level of service. From another point of view, modern roundabout capacity models are based on the limitation of the flow entering a single entrance by the amount of flow circulating in front of that entrance. Modern roundabout capacity models generally also lead to a performance evaluation. This paper aims to incorporate a modern roundabout capacity model into an old rotary capacity method to obtain from the latter the capacity of each single entry and, ultimately, the related performance indicators. Put simply, the main objective is to calculate the average delay of each single rotary entrance in order to apply the most common Highway Capacity Manual (HCM) criteria. The paper is organized as follows: firstly, the rotary and roundabout capacity models are sketched, and a brief introduction to the model combination technique is given with some practical instances. The next section summarizes the old TRRL rotary capacity model and the most recent HCM 7th edition modern roundabout capacity model. Then, the two models are combined through an iteration-based algorithm, specially set up and linked to the concept of roundabout total capacity, i.e., the value reached under a traffic flow pattern that leads to the simultaneous congestion of all roundabout entrances. The solution is the average delay for each entrance of the rotary, from which its respective level of service is estimated. In view of further experimental applications, at this research stage a collection of existing rotary intersections operating with the priority-to-circle rule has already started, both in the US and in Italy. The rotaries have been selected by direct inspection of aerial photos through a map viewer, namely Google Earth. Each instance has been recorded by location, setting (urban or rural), and its main geometrical patterns. Finally, concluding remarks are drawn, and a discussion of some further research developments is opened.
Keywords: mixed methods, old rotary and modern roundabout capacity models, total capacity algorithm, level of service estimation
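The abstract names an iteration-based algorithm built around the total-capacity concept (all entrances congested simultaneously) without giving its details, so the following is only a hedged sketch of one plausible fixed-point scheme; the linear capacity function, the demand pattern, and the circulating-share coefficients are illustrative assumptions, not the TRRL or HCM-7 formulas.

```python
# Sketch: scale a fixed demand pattern by a common multiplier m until every
# entry just reaches its circulating-flow-dependent capacity.
def entry_capacity(circulating: float) -> float:
    # Placeholder linear capacity law in veh/h (assumed, not the HCM-7 form).
    return max(1200.0 - 0.7 * circulating, 0.0)

def total_capacity_multiplier(demand, circ_share, tol=1e-9, max_iter=10_000):
    """circ_share[i]: assumed circulating flow in front of entry i per unit
    of total entering flow (it depends on the O/D pattern)."""
    m = 1.0
    for _ in range(max_iter):
        total = m * sum(demand)
        # the most restrictive entry governs the feasible common multiplier
        m_next = min(entry_capacity(s * total) / d
                     for d, s in zip(demand, circ_share))
        if abs(m_next - m) < tol:
            break
        m = 0.5 * (m + m_next)   # under-relaxation keeps the iteration stable
    return m

print(total_capacity_multiplier([400, 300, 500, 350], [0.5, 0.6, 0.4, 0.55]))
```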
Procedia PDF Downloads 86
731 Effects of Different Thermal Processing Routes and Their Parameters on the Formation of Voids in PA6 Bonded Aluminum Joints
Authors: Muhammad Irfan, Guillermo Requena, Jan Haubrich
Abstract:
Adhesively bonded aluminum joints are common in the automotive and aircraft industries and are one of the enablers of lightweight construction to minimize carbon emissions during transportation for a sustainable future. This study is focused on the effects of two thermal processing routes, i.e., direct and induction heating, and their parameters on void formation in PA6 bonded aluminum EN-AW6082 joints. The joints were characterized microanalytically as well as by lap shear experiments. The aging resistance of the joints was studied by accelerated aging tests in 80°C hot water. It was found that processing of single lap joints by direct heating in a convection oven causes the formation of a large number of voids in the bond line. The formation of voids in the convection oven was due to longer processing times and was independent of any surface pretreatment of the metal as well as of the processing temperature. However, when processing at low temperatures, a large number of small voids were observed under the optical microscope; at higher temperatures, they were larger in size but fewer in number. An induction heating process was developed, which not only successfully reduced or eliminated the voids in PA6 bonded joints but also significantly reduced the processing times for joining. Consistent with the trend in direct heating, longer processing times and higher temperatures in induction heating also led to an increased formation of voids in the bond line. Subsequent single lap shear tests revealed that the increasing void content led to a 21% reduction in lap shear strength (i.e., from ~47 MPa for induction heating to ~37 MPa for direct heating). Also, there was a 17% reduction in lap shear strength when the consolidation temperature was raised from 220˚C to 300˚C during induction heating. However, below a certain threshold of void content, there was no observable effect on the lap shear strength or on the hydrothermal aging resistance of the joints consolidated by the induction heating process.
Keywords: adhesive, aluminium, convection oven, induction heating, mechanical properties, nylon6 (PA6), pretreatment, void
Procedia PDF Downloads 122
730 Characterization of Lahar Sands for Reclamation Projects in the Manila Bay, Philippines
Authors: Julian Sandoval, Philipp Schober
Abstract:
Lahar sand (lahars) is a material that originates from volcanic debris flows. During and after a volcanic eruption, lahars can move at speeds of up to 22 meters per hour or more, so they can easily cover extensive areas and destroy any structure in their path. The Mount Pinatubo eruption (1991) brought lahars to its vicinity, and their use has been a matter of research ever since. Lahars are often disposed of in land reclamation projects in the Manila Bay, Philippines. After reclamation, some deep loose deposits may still be present, and they are prone to liquefaction. To mitigate the risk of liquefaction of such deposits, vibro compaction has been proposed and used as a ground improvement technique. Cone penetration testing (CPT) campaigns are usually initiated to monitor the effectiveness of the ground improvement works by vibro compaction. The CPT cone resistance is used to analyze the in-situ relative density of the reclaimed sand before and after compaction. Available correlations between the CPT cone resistance and the relative density are only valid for non-crushable sands. Due to the partially crushable nature of lahars, the CPT data require adjustment to allow a correct interpretation. The objective of this paper is to characterize the chemical and mechanical properties of the lahar sands used for an ongoing project in the Port of Manila, which comprises reclamation activities using lahars from the east of Mount Pinatubo, and to investigate their effect on the proposed correction factor. Additionally, numerous CPTs were carried out in a test trial and during the execution of the project. Based on these data, the influence of the grid spacing, the compaction steps, and the holding time on the compaction results is analyzed. Moreover, the so-called "aging effect" of the lahars is studied by comparing the results of the CPT testing campaign at different times after the vibro compaction activities. A considerable increase in the CPT tip resistance was observed over time.
Keywords: vibro compaction, CPT, lahar sands, correction factor, chemical composition
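For context on the correlation being corrected: a widely used relative-density correlation for clean, non-crushable silica sand is that of Baldi et al. (1986). The sketch below applies it with a multiplicative correction on the cone resistance to represent grain crushability; the coefficient values are the published Ticino-sand ones, but their applicability to lahars, the use of vertical rather than mean effective stress, and the correction value itself are all assumptions.

```python
# Relative density from CPT cone resistance, Baldi et al. (1986) form:
# Dr = (1/C2) * ln(qc / (C0 * sigma'^C1)); Ticino-sand coefficients.
import math

def relative_density(qc_kpa: float, sigma_eff_kpa: float,
                     crush_correction: float = 1.0) -> float:
    C0, C1, C2 = 157.0, 0.55, 2.41
    qc_adj = crush_correction * qc_kpa  # assumed adjustment for crushable grains
    return (1.0 / C2) * math.log(qc_adj / (C0 * sigma_eff_kpa ** C1))

# Example: qc = 8 MPa at 60 kPa effective stress, assumed correction 1.15
print(f"Dr ≈ {relative_density(8000.0, 60.0, 1.15):.2f}")  # fraction of 1.0
```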
Procedia PDF Downloads 232
729 Building an Opinion Dynamics Model from Experimental Data
Authors: Dino Carpentras, Paul J. Maher, Caoimhe O'Reilly, Michael Quayle
Abstract:
Opinion dynamics is a sub-field of agent-based modeling that focuses on people's opinions and their evolution over time. Despite the rapid increase in the number of publications in this field, it is still not clear how to apply these models to real-world scenarios. Indeed, there is no agreement on how people update their opinions while interacting. Furthermore, it is not clear whether different topics show the same dynamics (e.g., more polarized topics may behave differently). These problems are mostly due to the lack of experimental validation of the models. Some previous studies started bridging this gap in the literature by directly measuring people's opinions before and after an interaction. However, these experiments force people to express their opinion as a number, instead of using natural language (and then, eventually, encoding it as a number). This is not the way people normally interact, and it may strongly alter the measured dynamics. Another limitation of these studies is that they usually average all topics together, without checking whether different topics show different dynamics. In our work, we collected data from 200 participants on 5 unpolarized topics. Participants expressed their opinions in natural language ("agree" or "disagree"). We also measured the certainty of their answer, expressed as a number between 1 and 10; however, this value was not shown to other participants, to keep the interaction based on natural language. We then showed each participant the opinion (and not the certainty) of another participant and, after a distraction task, repeated the measurement. To make the data compatible with opinion dynamics models, we multiplied opinion and certainty to obtain a new parameter (here called "continuous opinion") ranging from -10 to +10 (using agree = 1 and disagree = -1). We first checked the 5 topics individually, finding that all of them behaved in a similar way despite having different initial opinion distributions. This suggests that the same model can be applied to different unpolarized topics. We also observed that people tend to maintain similar levels of certainty, even when they change their opinion. This is a strong violation of what is suggested by common models, where people starting at, for example, +8 will first move towards 0 instead of jumping directly to -8. We also observed social influence, meaning that people exposed to "agree" were more likely to move to higher levels of continuous opinion, while people exposed to "disagree" were more likely to move to lower levels. However, we also observed that the effect of influence was smaller than the effect of random fluctuations. This configuration, too, is different from standard models, where noise, when present, is usually much smaller than the effect of social influence. Starting from this, we built an opinion dynamics model that explains more than 80% of the data variance. This model was also able to show the natural emergence of polarization from unpolarized states. This experimental approach offers a new way to build models grounded in experimental data. Furthermore, the model offers new insight into the fundamental terms of opinion dynamics models.
Keywords: experimental validation, micro-dynamics rule, opinion dynamics, update rule
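The abstract does not spell out the fitted update rule, so the following is only a hedged sketch of a rule consistent with the reported findings (a weak pull toward the displayed opinion plus a dominant random fluctuation, with the continuous opinion clipped to [-10, +10]); the parameter values are assumptions.

```python
# Noise-dominated update-rule sketch: small social pull, larger random jitter.
import random

def update(continuous_opinion: float, partner_agrees: bool,
           influence: float = 0.4, noise_sd: float = 1.2) -> float:
    pull = influence if partner_agrees else -influence
    noise = random.gauss(0.0, noise_sd)      # fluctuations dominate influence
    return max(-10.0, min(10.0, continuous_opinion + pull + noise))

random.seed(1)
o = 6.0   # e.g., "agree" expressed with certainty 6
for step in range(5):
    o = update(o, partner_agrees=False)
    print(f"step {step + 1}: {o:+.2f}")
```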
Procedia PDF Downloads 109
728 Effect of Foot Reflexology Treatment on Arterial Blood Gases among Mechanically Ventilated Patients
Authors: Maha Salah Abdullah Ismail, Manal S. Ismail, Amir M. Saleh
Abstract:
Reflexology treatment is a method for enhancing body relaxation. It is widely recognized as an alternative therapy, effective for many health conditions. This study aimed to evaluate the effect of reflexology treatment on arterial blood gases among mechanically ventilated patients. A quasi-experimental (pre- and post-test) research design was used. The research hypothesis was that mechanically ventilated patients who receive the reflexology treatment will show greater improvement in their arterial blood gases than those who do not. The current study was carried out in different intensive care units at the Cairo University Hospitals. A purposive sample of 100 adult mechanically ventilated patients was recruited over a period of three months of data collection. The participants were divided into two equally matched groups: (1) the study group, who received the routine care plus two reflexology sessions on the feet, and (2) the control group, who received only the routine care. One tool was utilized to collect data pertinent to the study: a mechanically ventilated patients' data sheet consisting of demographic and medical data. Result: The majority (58% of the study group and 82% of the control group) were males, with a mean age of 50.9 years in both groups. Patients who received the reflexology treatment showed a significant increase in oxygen saturation before the second session (t=5.15, p=.000), immediately post-session (t=4.4, p=.000), and two hours post-session (t=4.7, p=.000). The study group was more likely to have lower PaO2 (F=5.025, p=.015) and PaCO2 (F=4.952, p=.025) and higher HCO3 (F=15.211, p=.000) than the control group. Conclusion: These study results support the positive effect of reflexology treatment, alongside conventional therapy, in improving some arterial blood gases among mechanically ventilated patients, as the study group showed an increase in oxygen saturation. Between the groups, there was a decrease in PaO2 and PaCO2 and an increase in HCO3 in the study group. Recommendation: Nurses should be trained in how to perform foot reflexology for mechanically ventilated patients.
Keywords: arterial blood gases, foot, mechanically ventilated patient, reflexology
Procedia PDF Downloads 208
727 Portuguese Guitar Strings Characterization and Comparison
Authors: P. Serrão, E. Costa, A. Ribeiro, V. Infante
Abstract:
The characteristic sonority of the Portuguese guitar is in great part what makes Fado so distinguishable from other traditional song styles. The Portuguese guitar is a pear-shaped plucked chordophone with six courses of double strings. This study compares the two types of plain strings available for the Portuguese guitar and used by musicians. One is stainless-steel spring wire; the other is high-carbon spring steel (music wire). Some musicians mention noticeable differences in sound quality between these two string materials, such as a little more brightness and sustain in the steel strings. Experimental tests were performed to characterize string tension at pitch, mechanical strength, and tuning stability using a universal testing machine, and dimensional control and chemical composition analysis were performed using a scanning electron microscope. The string dynamical behaviour characterization experiments, including frequency response, inharmonicity, transient response, and damping phenomena, were made in a monochord test set-up designed and built in-house. The damping factor was determined for the fundamental frequency. As musicians are able to detect very small damping differences, an accurate characterization of the damping phenomena for all harmonics was necessary. For that purpose, another, improved monochord was set up and a new system identification methodology applied. Due to the complexity of this task, several adjustments were necessary before obtaining good experimental data. In a few cases, dynamical tests were repeated to detect any evolution in damping parameters after the break-in period, when, according to players' experience, a new string gradually sounds less dull until reaching the typically brilliant timbre. Finally, each set of strings was played on one guitar by a distinguished player and recorded. The recordings, which include individual notes, scales, chords, and a study piece, will be analysed to characterize potential timbre variations.
Keywords: damping factor, music wire, Portuguese guitar, string dynamics
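One common way to obtain a damping factor like the one determined here for the fundamental is the logarithmic decrement of the plucked string's decay envelope. The sketch below illustrates the method; the synthetic peak series and the light-damping assumption are illustrative, not the study's identification procedure.

```python
# Estimate the damping ratio of a decaying fundamental from successive
# peak amplitudes via the logarithmic decrement (assumes light damping).
import numpy as np

def damping_ratio(peaks: np.ndarray) -> float:
    n = len(peaks) - 1
    delta = np.log(peaks[0] / peaks[-1]) / n      # log decrement per cycle
    return delta / np.sqrt(4.0 * np.pi**2 + delta**2)

# Synthetic check: an envelope generated with zeta = 0.002 is recovered.
zeta_true, cycles = 0.002, np.arange(50)
peaks = np.exp(-2.0 * np.pi * zeta_true * cycles)
print(f"zeta ≈ {damping_ratio(peaks):.4f}")
```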
Procedia PDF Downloads 553
726 Assessing the Material Determinants of Cavity Polariton Relaxation using Angle-Resolved Photoluminescence Excitation Spectroscopy
Authors: Elizabeth O. Odewale, Sachithra T. Wanasinghe, Aaron S. Rury
Abstract:
Cavity polaritons form when molecular excitons couple strongly to photons in carefully constructed optical cavities. These polaritons, which are hybrid light-matter states possessing a unique combination of photonic and excitonic properties, present the opportunity to manipulate the properties of various semiconductor materials. The systematic manipulation of materials through polariton formation could potentially improve the functionality of many optoelectronic devices, such as lasers, light-emitting diodes, photon-based quantum computers, and solar cells. However, the prospect of leveraging polariton formation for novel devices and device operation depends on more complete connections between the properties of molecular chromophores and the hybrid light-matter states they form, which remains an outstanding scientific goal. Specifically, for most optoelectronic applications, it is paramount to understand how polariton formation affects the spectra of light absorbed by molecules coupled strongly to cavity photons. An essential feature of a polariton state is its dispersive energy, which arises from the enhanced spatial delocalization of polaritons relative to bare molecules. To leverage the spatial delocalization of cavity polaritons, angle-resolved photoluminescence excitation spectroscopy was employed to characterize light emission from the polaritonic states. Using lasers of appropriate energies, the polariton branches were resonantly excited to understand how molecular light absorption changes under different strong light-matter coupling conditions. Since an excited state has a finite lifetime, the photon absorbed by the polariton decays non-radiatively into lower-lying molecular states, from which radiative relaxation to the ground state occurs. The resulting fluorescence is collected across several angles of excitation incidence. By modeling the behavior of the light emission observed from the lower-lying molecular state and combining this result with the output of angle-resolved transmission measurements, inferences are drawn about how the behavior of molecules changes when they form polaritons. These results show how intrinsic molecular properties, such as the excitonic lifetime, affect the rate at which the polaritonic states relax. While it is true that the photon lifetime mediates the rate of relaxation in a cavity, the results from this study provide evidence that the lifetime of the molecular exciton also limits the rate of polariton relaxation.
Keywords: fluorescence, molecules in cavities, optical cavity, photoluminescence excitation spectroscopy, strong coupling
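For context, the angle-dependent polariton energies probed by such measurements are conventionally described by the two-coupled-oscillator model; this is a textbook expression, stated here for reference rather than taken from the abstract.

```latex
\[
E_{\pm}(\theta) \;=\; \frac{E_{c}(\theta) + E_{x}}{2}
\;\pm\; \frac{1}{2}\sqrt{\bigl(E_{c}(\theta) - E_{x}\bigr)^{2} + \bigl(\hbar\Omega_{R}\bigr)^{2}},
\qquad
E_{c}(\theta) \;=\; \frac{E_{c,0}}{\sqrt{1 - \sin^{2}\theta / n_{\mathrm{eff}}^{2}}},
\]
where $E_{x}$ is the exciton energy, $\hbar\Omega_{R}$ the vacuum Rabi splitting,
and $n_{\mathrm{eff}}$ the effective intracavity refractive index.
```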
Procedia PDF Downloads 73
725 Acoustic Emission for Tool-Chip Interface Monitoring during Orthogonal Cutting
Authors: D. O. Ramadan, R. S. Dwyer-Joyce
Abstract:
The measurement of the interface conditions in a cutting tool contact provides essential information for performance monitoring and control. This interface provides the path for the heat flux into the cutting tool. The resulting rise in cutting tool temperature activates the mechanisms of tool wear, thus affecting the life of the cutting tool and productivity. This zone is represented by the tool-chip interface. Therefore, understanding and monitoring this interface is considered an important issue in machining. In this paper, an acoustic emission (AE) technique was used to find the correlation between AE parameters and the tool-chip interface. For this reason, a response surface design (RSD) was used to analyse and optimize the machining parameters. The experimental design was based on the face-centered central composite design (CCD) in the Minitab environment. According to this design, a series of orthogonal cutting experiments for different cutting conditions was conducted on a Triumph 2500 lathe to study the sensitivity of the acoustic emission (AE) signal to changes in tool-chip contact length. The cutting parameters investigated were the cutting speed, depth of cut, and feed, and the experiments were performed on 6082-T6 aluminium tube. All the orthogonal cutting experiments were conducted unlubricated. The tool-chip contact area was investigated using a scanning electron microscope (SEM). The results obtained in this paper indicate a strong dependence of the root mean square (RMS) on the cutting speed: the RMS increases with increasing cutting speed. A dependence on the tool-chip contact length was also observed; however, no effect of changing the cutting depth and feed on the RMS was observed. These dependencies have been explained in terms of the strain and temperature in the primary and secondary shear zones, the tool-chip sticking and sliding phenomena, and the effect of these mechanical variables on dislocation activity at high strain rates. In conclusion, the acoustic emission technique has the potential to monitor the tool-chip interface in situ during turning and consequently could indicate the approaching end of life of a cutting tool.
Keywords: acoustic emission, tool-chip interface, orthogonal cutting, monitoring
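A minimal sketch of the windowed RMS feature commonly extracted from raw AE waveforms is shown below; the sampling rate, window length, and synthetic signal are assumptions, not the study's acquisition settings.

```python
# Windowed RMS of a raw AE waveform: the standard scalar feature used to
# track changes at the tool-chip interface. The signal below is synthetic.
import numpy as np

def windowed_rms(signal: np.ndarray, window: int) -> np.ndarray:
    n = len(signal) // window
    chunks = signal[: n * window].reshape(n, window)
    return np.sqrt(np.mean(chunks**2, axis=1))

rng = np.random.default_rng(0)
fs = 1_000_000                                  # 1 MHz sampling, assumed
t = np.arange(fs) / fs                          # one second of signal
ae = (0.1 + 0.4 * t) * rng.standard_normal(fs)  # rising AE activity
print(windowed_rms(ae, window=100_000).round(3))
```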
Procedia PDF Downloads 487
724 Numerical Investigation on Transient Heat Conduction through Brine-Spongy Ice
Authors: S. R. Dehghani, Y. S. Muzychka, G. F. Naterer
Abstract:
The ice accretion of salt water on cold substrates creates brine-spongy ice. This type of ice is a mixture of pure ice and liquid brine. A real case of the creation of this type of ice is superstructure icing, which occurs on marine vessels and offshore structures in cold and harsh conditions. Transient heat transfer through this medium causes phase changes between brine pockets and pure ice. Salt rejection during transient heat conduction increases the salinity of the brine pockets until they reach a local equilibrium state. In this process, passing heat through the medium does not merely change the sensible heat of the ice and brine pockets; latent heat plays an important role and affects the mechanism of heat transfer. In this study, a new analytical model for evaluating heat transfer through brine-spongy ice is suggested. This model considers heat transfer together with partial solidification and melting. The properties of brine-spongy ice are obtained from the properties of liquid brine and pure ice. A numerical solution using the Method of Lines discretizes the medium into a set of ordinary differential equations. Boundary conditions are chosen using one of the applicable cases of this type of ice: one side is considered a thermally insulated surface, and the other side is assumed to be suddenly exposed to a constant-temperature boundary. All cases are evaluated at temperatures between -20 °C and the freezing point of brine-spongy ice. Solutions are conducted using salinities from 5 to 60 ppt. Time steps and space intervals are chosen to maintain the most stable and fast solution. The variations of temperature, volume fraction of brine, and brine salinity versus time are the most important outputs of this study. Results show that transient heat conduction through brine-spongy ice can create a wide range of brine pocket salinities, from the initial salinity up to 180 ppt. The rate of variation of temperature is found to be slower for high-salinity cases. The maximum rate of heat transfer occurs at the start of the simulation and decreases as time passes. Brine pockets are smaller in portions closer to the colder side than in those closer to the warmer side. At the start of the solution, the numerical scheme tends to develop instabilities because of the sharp variation of temperature at the start of the process; adjusting the intervals resolves this. The analytical model, combined with a numerical scheme, is capable of predicting the thermal behavior of brine-spongy ice. This model and its numerical solutions are important for modeling the freezing of salt water and ice accretion on cold structures.
Keywords: method of lines, brine-spongy ice, heat conduction, salt water
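A hedged sketch of the Method of Lines setup described above is given below: a 1-D conduction equation with one insulated end and one end suddenly stepped to a fixed cold temperature. Constant properties are assumed, and the latent-heat and brine-salinity coupling of the actual model is omitted; the slab size and diffusivity are illustrative.

```python
# Method of Lines sketch: 1-D heat conduction, insulated at x=0,
# fixed cold temperature at x=L. Constant diffusivity is assumed.
import numpy as np
from scipy.integrate import solve_ivp

L, N, alpha = 0.05, 51, 1.0e-6      # 5 cm slab, 51 nodes, assumed diffusivity
dx = L / (N - 1)

def rhs(t, T):
    dT = np.empty_like(T)
    dT[0] = 2.0 * alpha * (T[1] - T[0]) / dx**2          # insulated (dT/dx = 0)
    dT[1:-1] = alpha * (T[2:] - 2.0 * T[1:-1] + T[:-2]) / dx**2
    dT[-1] = 0.0                                         # fixed boundary node
    return dT

T0 = np.full(N, -2.0)
T0[-1] = -20.0                                           # sudden cold boundary
sol = solve_ivp(rhs, (0.0, 3600.0), T0, method="BDF")
print(sol.y[0, -1])  # temperature at the insulated face after one hour
```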
Procedia PDF Downloads 217
723 An Investigation on the Pulse Electrodeposition of Ni-TiO2/TiO2 Multilayer Structures
Authors: S. Mohajeri
Abstract:
Electrocodeposition of Ni-TiO2 nanocomposite single layers and Ni-TiO2/TiO2 multilayers from a Watts bath containing TiO2 sol was carried out on copper substrates. Pulse plating and pulse reverse plating techniques were applied to facilitate higher incorporation of TiO2 nanoparticles in the Ni-TiO2 nanocomposite single layers, and the results revealed that prolonging the current-off durations and the anodic cycles produced deposits containing 11.58 wt.% and 13.16 wt.% TiO2, respectively. Multilayer coatings consisting of Ni-TiO2 and TiO2-rich layers were deposited by pulse potential deposition, limiting the nickel deposition through a diffusion-controlled mechanism. The thickness of the TiO2-rich layers and, accordingly, the content of TiO2 reinforcement reached 104 nm and 18.47 wt.%, respectively, under the optimum conditions. The phase structure and surface morphology of the nanocomposite coatings were characterized by X-ray diffraction (XRD) and scanning electron microscopy (SEM). The cross-sectional morphology and line scans of the layers were studied by field emission scanning electron microscopy (FESEM). It was confirmed that the preferred orientations and the crystallite sizes of the nickel matrix were influenced by the deposition technique parameters, and higher contents of codeposited TiO2 nanoparticles refined the microstructure. The corrosion behavior of the coatings in 1 M NaCl and 0.5 M H2SO4 electrolytes was compared by means of potentiodynamic polarization and electrochemical impedance spectroscopy (EIS). The corrosion resistance and the passivation tendency were improved by TiO2 incorporation, while the degree of passivation declined where embedded particles disturbed the continuity of the passive layer. The role of TiO2 incorporation in the improvement of mechanical properties, including hardness, elasticity, scratch resistance, and friction coefficient, was investigated by means of atomic force microscopy (AFM). The hydrophilicity and wettability of the composite coatings were investigated under UV illumination, and the water contact angle of the multilayer was reduced to 7.23° after 1 hour of UV irradiation.
Keywords: electrodeposition, hydrophilicity, multilayer, pulse-plating
Procedia PDF Downloads 249722 Experimental and Computational Fluid Dynamic Modeling of a Progressing Cavity Pump Handling Newtonian Fluids
Authors: Deisy Becerra, Edwar Perez, Nicolas Rios, Miguel Asuaje
Abstract:
A Progressing Cavity Pump (PCP) is a type of positive displacement pump that is gaining importance as artificial lift equipment in the heavy oil field. The most commonly used PCP is the single-lobe pump, which consists of a single external helical rotor turning eccentrically inside a double internal helical stator. This type of pump was analyzed by experimental and Computational Fluid Dynamics (CFD) approaches using the DCAB031 model installed in a closed-loop arrangement. Experimental measurements were taken to determine the pressure rise and flow rate, with a flow control valve installed at the outlet of the pump. The flow rate handled was measured by a FLOMEC-OM025 oval gear flowmeter. For each flow rate considered, the pump's rotational speed and power input were controlled using an Invertek Optidrive E3 frequency drive. Once steady-state operation was attained, pressure rise measurements were taken with a Sper Scientific wide-range digital pressure meter. In this study, water and three Newtonian oils of different viscosities were tested at different rotational speeds. The CFD model was implemented in Star-CCM+ using an overset mesh that includes the relative motion between rotor and stator, which is one of the main contributions of the present work. The simulations provide detailed information about the pressure and velocity fields inside the device in laminar and unsteady regimes. The simulations show good agreement with the experimental data, with a Mean Squared Error (MSE) under 21%; the Grid Convergence Index (GCI) was calculated to validate the mesh, yielding a value of 2.5%. Three rotational speeds were evaluated (200, 300, 400 rpm), showing a directly proportional relationship between the rotational speed of the rotor and the calculated flow rate. The maximum production rates at the different speeds were 3.8 GPM, 4.3 GPM, and 6.1 GPM for water, and 1.8 GPM, 2.5 GPM, and 3.8 GPM for the oils tested, respectively. Likewise, an inversely proportional relationship between fluid viscosity and pump performance was observed: the viscous oils showed the lowest pressure rise and the lowest volumetric flow pumped, with a degradation of around 30% in pressure rise between performance curves. Finally, the Productivity Index (PI) remained approximately constant for the different speeds evaluated; however, it diminished between fluids owing to viscosity.Keywords: computational fluid dynamic, CFD, Newtonian fluids, overset mesh, PCP pressure rise
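The Grid Convergence Index cited above follows Roache's standard formulation. A minimal sketch of the calculation is given below; the grid values, refinement ratio, and observed order of accuracy are assumed for illustration and are not the study's actual data.

def gci_fine(f_fine, f_coarse, r, p, Fs=1.25):
    """Roache's Grid Convergence Index for the fine-grid solution.
    r is the grid refinement ratio, p the observed order of accuracy,
    Fs a safety factor (1.25 is customary for three-grid studies)."""
    eps = (f_coarse - f_fine) / f_fine   # relative difference between grids
    return Fs * abs(eps) / (r**p - 1.0)

# Illustrative values (assumed): pressure rise computed on two grids,
# refinement ratio r = 2, observed order of accuracy p = 2.
print(100 * gci_fine(f_fine=1.020, f_coarse=1.055, r=2.0, p=2.0))  # GCI in %

A GCI of a few percent, as reported here, indicates that further mesh refinement would change the computed pressure rise only marginally.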
Procedia PDF Downloads 128721 A Way to Recognize Origin of Soil Conditioners
Authors: Laura Santagostini, Vittoria Guglielmi
Abstract:
The meaning of the word 'Nature' (literally 'that which is about to be born') has accompanied researchers throughout their study of the environment and has led to the design of technical means to improve the properties of the soil, modifying its structure and/or consistency and thus favouring the emergence and growth of plants. These include soil improvers, i.e. any substance, natural or synthetic, mineral or organic, capable of modifying and improving the chemical, physical, biological and mechanical properties and characteristics of the soil. In particular, GCSCs (Green Composted Soil Conditioners) are soil conditioners produced through a controlled process of transforming selected organic green waste materials, such as clippings from the maintenance of ornamental greenery, crop residues and other plant waste. The use of GCSCs in horticulture, fruit growing, industrial cultivation and nursery gardening is an active way to return organic carbon to the soil, thus limiting CO2 emissions and the production of greenhouse gases, and also to limit the environmental impact of extracting the peat normally used in these areas of application. With a view to distinguishing between GCSCs and peats, and to assessing what further contributions GCSCs can provide to the soil and growing plants, we studied the behaviour of the two substrates by chromatographic techniques. After treating the individual soil improvers with different solvents, used individually or in a polarity gradient, the extracts obtained were analysed by HPLC and LC-MS in order to assess their composition, mainly from a qualitative point of view. The data obtained show, in the GCSCs, the presence of polyphenolic derivatives attributable to the degradation of plant material and potentially useful for the development and growth of young plants, while commercial peat-based products only sporadically showed the presence of recognisable molecules, confirming the lower complexity of the matrix under analysis. These results allowed us to distinguish the two different types of soil conditioner based on their chromatographic profiles.Keywords: chromatographic profile, HPLC, polyphenols, soil conditioners
Procedia PDF Downloads 124720 Shear Strength Parameters of an Unsaturated Lateritic Soil
Authors: Jeferson Brito Fernades, Breno Padovezi Rocha, Roger Augusto Rodrigues, Heraldo Luiz Giacheti
Abstract:
Geotechnical projects demand appropriate knowledge of soil characteristics and parameters. Geotechnical soil parameters can be determined by means of laboratory or in situ tests. In countries with tropical weather, like Brazil, unsaturated soils are very common. In these soils, soil suction has been recognized as an important stress state variable, which governs the geo-mechanical behavior. Triaxial and direct shear tests on saturated soil samples yield only the minimum soil shear strength, in other words, with no suction contribution. This paper briefly describes the triaxial test with controlled suction and discusses the influence of suction on the shear strength parameters of a lateritic tropical sandy soil from a Brazilian research site. At this site, a sample pit was excavated to retrieve disturbed and undisturbed soil blocks. The samples extracted from these blocks were tested in the laboratory to represent the soil at 1.5, 3.0 and 5.0 m depth. The stress curves and shear strength envelopes determined by triaxial tests with varying suction and confining pressure are presented and discussed. The water retention characteristics of this soil complement the analysis. In situ CPT tests were also carried out at this site in different seasons of the year. In this case, the soil suction profile was determined by means of the soil water retention curve. This extra information allowed assessing how soil suction also affected the CPT data and the shear strength parameters estimated via correlation. The major conclusions of this paper are: the undisturbed soil samples contracted before shearing, and the soil shear strength increased hyperbolically with suction; and it was possible to assess how soil suction influenced CPT test data based on the water content profile of the soil as well as the water retention curve. This study contributes to a better understanding of the shear strength parameters and soil variability of a typical unsaturated tropical soil.Keywords: site characterization, triaxial test, CPT, suction, variability
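The suction contribution discussed here is commonly expressed through an extended Mohr-Coulomb criterion of the Fredlund type. The abstract does not state which failure criterion the authors adopted, so the following is given only as the usual reference formulation, in LaTeX notation:

\tau = c' + (\sigma_n - u_a)\tan\phi' + (u_a - u_w)\tan\phi^b

% c'                 effective cohesion
% (\sigma_n - u_a)   net normal stress on the failure plane
% (u_a - u_w)        matric suction
% \phi'              effective friction angle
% \phi^b             friction angle with respect to matric suction

The hyperbolic increase of shear strength with suction reported in the conclusions is consistent with \phi^b diminishing as suction grows, which is the behavior this formulation is typically fitted to.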
Procedia PDF Downloads 416719 Synthesis and Characterization of Graphene Composites with Application for Sustainable Energy
Authors: Daniel F. Sava, Anton Ficai, Bogdan S. Vasile, Georgeta Voicu, Ecaterina Andronescu
Abstract:
The energy crisis and environmental contamination are very serious problems; therefore, searching for better and sustainable renewable energy is a must. It is predicted that global energy demand will double by 2050. Solar water splitting and photocatalysis are considered among the solutions to these issues. The use of oxide semiconductors for solar water splitting and photocatalysis started in 1972 with the experiments of Fujishima and Honda on TiO2 electrodes. Since then, the evolution of nanoscience and characterization methods has led to better control of the size, shape and properties of materials. Although the advancements of the past decade are astonishing, for these applications the properties have to be controlled at a much finer level, allowing control of charge-carrier lifetimes, energy level positions, charge trapping centers, etc. Graphene has attracted a lot of attention since its discovery in 2004 due to its excellent electrical, optical, mechanical and thermal properties. These properties make it an ideal support for photocatalysts; thus, graphene composites with oxide semiconductors are of great interest. We present in this work the synthesis and characterization of graphene-related materials, oxide semiconductors and their composites. These materials can be used in devices for different applications (batteries, water splitting devices, solar cells, etc.), demonstrating their application flexibility. The synthesized materials are TiO2, ZnO and Fe2O3 of different morphologies and sizes, obtained through hydrothermal and sol-gel methods, and graphene oxide, synthesized through a modified Hummers method and reduced with different agents. Graphene oxide and its reduced form could also be used as single materials for transparent conductive films. The obtained single materials and composites were characterized through several methods: XRD, SEM, TEM, IR spectroscopy, Raman spectroscopy, XPS and BET adsorption/desorption isotherms. The results show the variation of the properties with the synthesis parameters and with the size and morphology of the particles.Keywords: composites, graphene, hydrothermal, renewable energy
Procedia PDF Downloads 498718 Corrosion Protection and Failure Mechanism of ZrO₂ Coating on Zirconium Alloy Zry-4 under Varied LiOH Concentrations in Lithiated Water at 360°C and 18.5 MPa
Authors: Guanyu Jiang, Donghai Xu, Huanteng Liu
Abstract:
After the Fukushima-Daiichi accident, the development of accident-tolerant fuel cladding materials to improve reactor safety has become a hot topic in the nuclear industry. ZrO₂ has a satisfactory neutron economy and can sustain the fission chain reaction process, which makes it a promising coating for zirconium alloy cladding. Maintaining good corrosion resistance in the primary coolant loop during normal operation of Pressurized Water Reactors is a prerequisite for ZrO₂ as a protective coating on zirconium alloy cladding. Research on the corrosion performance of ZrO₂ coatings in nuclear water chemistry is relatively scarce, and existing reports have failed to provide an in-depth explanation of the causes of ZrO₂ coating failure. Herein, a detailed corrosion process of a ZrO₂ coating in lithiated water at 360 °C and 18.5 MPa is proposed based on experimental research and molecular dynamics simulation. The lithiated water used in the present work was deaerated and had a dissolved oxygen concentration of < 10 ppb. The concentration of Li (as LiOH) was set to 2.3 ppm, 70 ppm, and 500 ppm. Corrosion tests were conducted in a static autoclave. Modeling and the corresponding calculations were performed in Materials Studio. The calculations of adsorption energy and dynamics parameters were undertaken with the Energy and Dynamics tasks of the Forcite module, respectively. The protective effect and failure mechanism of the ZrO₂ coating on Zry-4 under varied LiOH concentrations were further revealed by comparison with the coating's corrosion performance in pure water (namely 0 ppm Li). The ZrO₂ coating provided favorable corrosion protection, with localized corrosion occurring at low LiOH concentrations. Factors influencing corrosion resistance mainly include pitting corrosion extension, enhanced Li+ permeation, short-circuit diffusion of O²⁻, and ZrO₂ phase transformation. In highly concentrated LiOH solutions, intergranular corrosion, internal oxidation, and perforation resulted in coating failure. Zr ions were released to the coating surface to form flocculent ZrO₂ and ZrO₂ clusters due to the strong diffusion and dissolution tendency of α-Zr in the Zry-4 substrate. Considering that the primary water of Pressurized Water Reactors usually contains 2.3 ppm Li, the stability of ZrO₂ makes it a candidate fuel cladding coating material. Under unfavorable conditions with high Li concentrations, more boric acid should be added to alleviate caustic corrosion of the ZrO₂ coating once it is used. This work can serve as a reference for understanding the service behavior of nuclear coatings under variable water chemistry conditions and promote the in-pile application of ZrO₂ coatings.Keywords: ZrO₂ coating, Zry-4, corrosion behavior, failure mechanism, LiOH concentration
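To put the tested lithium levels in context, the ppm values can be converted to molar LiOH concentrations with a back-of-envelope calculation. The Python sketch below is not from the paper; it assumes a dilute solution with density close to 1 kg/L so that mg/kg and mg/L are interchangeable.

# Convert reported lithium concentrations (ppm Li, i.e. mg Li per kg water)
# into the equivalent LiOH molarity and LiOH mass concentration.
M_LI = 6.94     # g/mol, lithium
M_LIOH = 23.95  # g/mol, lithium hydroxide

for ppm_li in (2.3, 70.0, 500.0):
    mol_per_l = (ppm_li / 1000.0) / M_LI   # mg/L -> g/L -> mol/L
    liOH_mg_per_l = mol_per_l * M_LIOH * 1000.0
    print(f"{ppm_li:6.1f} ppm Li -> {mol_per_l:.2e} mol/L "
          f"({liOH_mg_per_l:.1f} mg/L LiOH)")

Even the 500 ppm case thus corresponds to a LiOH molarity on the order of 0.07 mol/L, which illustrates how sensitive the coating is to relatively dilute caustic conditions.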
Procedia PDF Downloads 85717 Novel EGFR Ectodomain Mutations and Resistance to Anti-EGFR and Radiation Therapy in H&N Cancer
Authors: Markus Bredel, Sindhu Nair, Hoa Q. Trummell, Rajani Rajbhandari, Christopher D. Willey, Lewis Z. Shi, Zhuo Zhang, William J. Placzek, James A. Bonner
Abstract:
Purpose: EGFR-targeted monoclonal antibodies (mAbs) provide clinical benefit in some patients with H&N squamous cell carcinoma (HNSCC), but others progress with minimal response. Missense mutations in the EGFR ectodomain (ECD) can be acquired under mAb therapy, mimicking the effect of large deletions on receptor untethering and activation. Little is known about the contribution of EGFR ECD mutations to EGFR activation and anti-EGFR response in HNSCC. Methods: We selected patient-derived HNSCC cells (UM-SCC-1) for resistance to the mAb Cetuximab (CTX) by repeated, stepwise exposure to mimic what may occur clinically, and identified two concurrent EGFR ECD mutations (UM-SCC-1R). We examined the competence of the mutants to bind EGF ligand or CTX. We assessed the potential impact of the mutations through visual analysis of space-filling models of the native side chains in the original structures vs. their respective side-chain mutations. We performed CRISPR in combination with site-directed mutagenesis to test the effect of the mutants on ligand-independent EGFR activation and sorting. We determined the effects on receptor internalization, endocytosis, downstream signaling, and radiation sensitivity. Results: UM-SCC-1R cells carried two non-synonymous missense mutations (G33S and N56K) mapping to domain I in or near the EGF binding pocket of the EGFR ECD. Structural modeling predicted that these mutants restrict the adoption of a tethered, inactive EGFR conformation while not permitting association of EGFR with the EGF ligand or CTX. Binding studies confirmed that the mutant, untethered receptor displayed reduced affinity for both EGF and CTX but demonstrated sustained activation and presence at the cell surface, with diminished internalization and sorting for endosomal degradation. Single- and double-mutant models demonstrated that the G33S mutant is dominant over the N56K mutant in its effect on EGFR activation and EGF binding. CTX-resistant UM-SCC-1R cells demonstrated cross-resistance to the mAb Panitumumab but, paradoxically, remained sensitive to the reversible receptor tyrosine kinase inhibitor Erlotinib. Conclusions: HNSCC cells can select for EGFR ECD mutations under EGFR mAb exposure that converge to trap the receptor in an open, constitutively activated state. These mutants impede the receptor's competence to bind mAbs and EGF ligand and alter its endosomal trafficking, possibly explaining certain cases of clinical mAb and radiation resistance.Keywords: head and neck cancer, EGFR mutation, resistance, cetuximab
Procedia PDF Downloads 92716 Optimizing Sustainable Graphene Production: Extraction of Graphite from Spent Primary and Secondary Batteries for Advanced Material Synthesis
Authors: Pratima Kumari, Sukha Ranjan Samadder
Abstract:
This research aims to contribute to the sustainable production of graphene materials by exploring the extraction of graphite from spent primary and secondary batteries. The increasing demand for graphene, a versatile and high-performance material, necessitates environmentally friendly methods for its synthesis. The process follows a well-planned methodology, beginning with the gathering and categorization of batteries, followed by disassembly and careful removal of graphite from the anode structures. The use of environmentally friendly solvents and mechanical techniques ensures efficient and eco-friendly extraction of the graphite. Advanced approaches such as the modified Hummers' method and a chemical reduction process are utilized for the synthesis of graphene materials, with a focus on optimizing parameters. Various analytical techniques, such as Fourier-transform infrared spectroscopy, X-ray diffraction, scanning electron microscopy, thermogravimetric analysis, and Raman spectroscopy, were employed to validate the quality and structure of the produced graphene materials. The major findings of this study reveal the successful implementation of the methodology, leading to the production of high-quality graphene materials suitable for advanced material applications. Thorough characterization using these techniques validates the structural integrity and purity of the graphene. The economic viability of the process is demonstrated through a comprehensive economic analysis, highlighting the potential for large-scale production. This research contributes to the field of sustainable graphene production by offering a systematic methodology that efficiently transforms spent batteries into valuable graphene resources. Furthermore, the findings not only showcase the potential for upcycling electronic waste but also address the pressing need for environmentally conscious processes in advanced material synthesis.Keywords: spent primary batteries, spent secondary batteries, graphite extraction, advanced material synthesis, circular economy approach
Procedia PDF Downloads 54715 The French Ekang Ethnographic Dictionary. The Quantum Approach
Authors: Henda Gnakate Biba, Ndassa Mouafon Issa
Abstract:
Dictionaries modeled on the Western pattern [designed for languages with a tonic accent] are not suitable for tonal languages and do not account for them phonologically, which is why this [prosodic and phonological] ethnographic dictionary was designed. It is a glossary that expresses the tones and the rhythm of words. It recreates exactly the speaking or singing of a tonal language and allows a non-speaker to pronounce words as if they were a native. It is a dictionary adapted to tonal languages, built from ethnomusicological theorems and phonological processes, following Jean-Jacques Rousseau's 1776 hypothesis that 'to say and to sing were once the same thing'. Each word in the French dictionary finds its correspondent in the ekaη language, and each ekaη word is written on a musical staff. This ethnographic dictionary is also an inventive, original and innovative research thesis: a contribution to the theoretical, musicological, ethnomusicological and linguistic conceptualization of languages, giving rise to interlocution between the social and cognitive sciences, the activities of artistic creation, and the question of modeling in the human sciences: mathematics, computer science, translation automation and artificial intelligence. When this theory is applied to the text of any folk song in a tonal language, one not only reconstructs the exact melody, rhythm and harmonies of that song, as if it were known in advance, but also the exact speech of that language. The author believes that the issue of the disappearance of tonal languages and their preservation has been structurally resolved, as has one of the greatest cultural equations related to the composition and creation of tonal, polytonal and random music. The experimentation confirming the theory led to the design of a semi-digital, semi-analog application which translates the tonal languages of Africa (about 2,100 languages) into blues, jazz, world music, polyphonic music, tonal and atonal music, and deterministic and random music. To test this application, I use music reading and writing software that allows me to collect the data extracted from my mother tongue, already modeled in the musical staves saved in the ethnographic (semiotic) dictionary for automatic translation (volume 2 of the book). Translation is done from writing to writing, from writing to speech, and from writing to music. Mode of operation: you type a structured song text (chorus-verse) on your computer and command from the machine a melody of blues, jazz, world music, variety, etc. The software runs, giving you the option to choose harmonies, and then you select your melody.Keywords: music, language, entanglement, science, research
Procedia PDF Downloads 69714 Modeling and Simulating Productivity Loss Due to Project Changes
Authors: Robert Pellerin, Michel Gamache, Remi Trudeau, Nathalie Perrier
Abstract:
The context of large engineering projects is particularly favorable to the appearance of engineering changes and contractual modifications. These elements are potential causes of claims. In this paper, we investigate one of the critical components of the claim management process: the calculation of the impacts of changes in terms of productivity losses due to the need to accelerate some project activities. When project changes are initiated, delays can arise. Indeed, project activities are often executed in fast-tracking mode in an attempt to respect the completion date. But the acceleration of project execution and the resulting rework can entail important costs as well as induce productivity losses. In the past, numerous methods have been proposed to quantify the duration of delays, the gains achieved by project acceleration, and the loss of productivity. The calculations related to those changes can be divided into two categories: direct costs and indirect costs. Direct costs are easily quantifiable, as opposed to indirect costs, which are rarely taken into account when calculating the cost of an engineering change or contract modification, even though several research projects have addressed this subject. However, the proposed models have not yet been accepted by companies, nor have they been accepted in court. Those models require extensive data and are often seen as too specific to be used for all projects. These techniques also ignore resource constraints and the interdependencies between the causes of delays and the delays themselves. To resolve this issue, this research proposes a simulation model that mimics how major engineering changes or contract modifications are handled in large construction projects. The model replicates the use of overtime in a reactive scheduling mode in order to simulate the productivity loss present when a project change occurs. Multiple tests were conducted to compare the results of the proposed simulation model with statistical analyses conducted by other researchers. Different scenarios were also run to determine the impact of the number of activities, the time of occurrence of the change, the availability of resources, and the type of project change on productivity loss. Our results demonstrate that the number of activities in the project is a critical variable influencing the productivity of a project. When changes occur, a large number of activities leads to a much lower productivity loss than a small number of activities. Productivity declines about 25 percent faster in 30-job projects than in 120-job projects. The moment of occurrence of a change also has a significant impact on productivity: the sooner the change occurs, the lower the productivity of the labor force. The availability of resources also impacts the productivity of a project when a change is implemented; there is a higher loss of productivity when the amount of resources is restricted.Keywords: engineering changes, indirect costs, overtime, productivity, scheduling, simulation
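The project-size effect reported above can be rationalized with a deliberately simple toy calculation. The Python sketch below is not the paper's calibrated reactive-scheduling simulation; the mechanism and all parameter values are illustrative assumptions. The intuition it encodes: if a change forces a roughly fixed number of activities into overtime rework, that degraded work is a smaller fraction of a large project than of a small one, so project-level productivity falls less.

def project_productivity(n_jobs, jobs_affected=10, base_dur=10.0,
                         efficiency_in_overtime=0.8):
    """Toy estimate of project-level productivity when a change forces a
    fixed number of activities to be reworked in overtime (illustrative
    mechanism only; parameters are assumptions, not the paper's data)."""
    normal_jobs = n_jobs - jobs_affected
    # Affected jobs consume more labor hours per unit of useful work
    # because overtime labor is less efficient.
    normal_hours = normal_jobs * base_dur
    affected_hours = jobs_affected * base_dur / efficiency_in_overtime
    total_work = n_jobs * base_dur   # useful work content of the project
    return total_work / (normal_hours + affected_hours)

for n in (30, 120):
    print(f"{n:4d} jobs: project productivity {project_productivity(n):.3f}")

With these assumed numbers, the 30-job project retains about 92% of its nominal productivity while the 120-job project retains about 98%, qualitatively matching the direction of the paper's finding, though not its magnitudes.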
Procedia PDF Downloads 238