Search results for: model-based systems engineering (MBSE)
192 Investigation of Polypropylene Composite Films With Carbon Nanotubes and the Role of β Nucleating Agents for the Improvement of Their Water Vapor Permeability
Authors: Glykeria A. Visvini, George N. Mathioudakis, Amaia Soto Beobide, Aris E. Giannakas, George A. Voyiatzis
Abstract:
Polymeric nanocomposites have generated considerable interest in both academic research and industry because their properties can be tailored by adjusting the type and concentration of nano-inclusions, resulting in complementary and adaptable characteristics. The exceptional and/or unique properties of the nanocomposites, including high mechanical strength and stiffness, ease of processing, and their lightweight nature, are attributed to the high surface area and the electrical and/or thermal conductivity of the nano-fillers, which make them appealing materials for a wide range of engineering applications. Polymeric "breathable" membranes enabling water vapor permeability (WVP) can be designed either by using micro/nano-fillers with the ability to interrupt the continuity of the polymer phase, generating micro/nano-porous structures, and/or by creating micro/nano-pores in the composite material by uniaxial/biaxial stretching. Among the nanofillers, carbon nanotubes (CNTs) exhibit particularly high WVP, and for this reason, they have already been proposed for gas separation membranes. In a similar context, they could prove to be promising alternative/complementary filler nano-materials for the development of "breathable" products. Polypropylene (PP) is a commonly utilized thermoplastic polymer matrix in the development of composite films, due to its easy processability and low price, combined with its good chemical and physical properties. PP is known to present several crystalline phases (α, β and γ), depending on the applied treatment process, which have a significant impact on its final properties, particularly in terms of WVP. Specifically, the development of the β-phase in PP, in combination with stretching, is anticipated to modify the crystalline behavior and extend the microporosity of the polymer matrix, exhibiting enhanced WVP. The primary objective of this study is to develop breathable nano-carbon-based (functionalized MWCNTs) PP composite membranes, potentially also avoiding the stretching process. This proposed alternative is expected to have a better performance/cost ratio than current stretched PP/CaCO3 composite benchmark membranes. The focus is to investigate the impact of both β-nucleator(s) and nano-carbon fillers on the water vapor transmission rate properties of the relevant PP nanocomposites.
Keywords: carbon nanotubes, nanocomposites, nucleating agents, polypropylene, water vapor permeability
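For orientation, the water vapor permeability discussed above is conventionally derived from the measured water vapor transmission rate (WVTR); the following is the standard textbook relation rather than a formula taken from this study:

```latex
\mathrm{WVTR} = \frac{\Delta m}{A\,\Delta t},
\qquad
P_{\mathrm{WV}} = \frac{\mathrm{WVTR}\cdot \ell}{\Delta p}
```

Here Δm is the mass of vapor crossing a film of area A during time Δt, ℓ is the film thickness, and Δp is the water-vapor partial-pressure difference across the film.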
Procedia PDF Downloads 74
191 Frequency Interpretation of a Wave Function, and a Vertical Waveform Treated as A 'Quantum Leap'
Authors: Anthony Coogan
Abstract:
Born’s probability interpretation of wave functions would have led to nearly identical results had he chosen a frequency interpretation instead. Logically, Born may have assumed that only one electron was under consideration, making it nonsensical to propose a frequency wave. Author’s suggestion: the actual experimental results were not of a single electron; rather, they were groups of reflected x-ray photons. The vertical waveform used by Schrödinger in his Particle in the Box theory makes sense if it was intended to represent a quantum leap. The author extended the single vertical panel to form a bar chart: separate panels would represent different energy levels. The proposed bar chart would be populated by reflected photons. Expansion of basic ideas: part of Schrödinger’s 'Particle in the Box' theory may be valid despite negative criticism. The waveform used in the diagram is vertical, which may seem absurd because real waves decay at a measurable rate, rather than instantaneously. However, there may be one notable exception. Supposedly, following from the theory, the Uncertainty Principle was derived – may a quantum leap not be represented as an instantaneous waveform? The great Schrödinger must have had some reason to suggest a vertical waveform if the prevalent belief was that they did not exist. Complex waveforms representing a particle are usually assumed to be continuous. The actual observations made were x-ray photons, some of which had struck an electron, been reflected, and then moved toward a detector. From Born’s perspective, doing similar work in the years in question, 1926-7, he would also have considered a single electron – leading him to choose a probability distribution. Probability distributions appear very similar to frequency distributions, but the former are considered to represent the likelihood of future events. Born’s interpretation of the results of quantum experiments led (or perhaps misled) many researchers into claiming that humans can influence events just by looking at them, e.g. collapsing complex wave functions by 'looking at the electron to see which slit it emerged from', while in reality light reflected from the electron moved in the observer’s direction after the electron had moved away. Astronomers may say that they 'look out into the universe' but are actually using logic opposed to the views of Newton and Hooke and many observers such as Romer, in that light carries information from a source or reflector to an observer, rather than the reverse. Conclusion: due to the controversial nature of these ideas, especially their implications about the nature of complex numbers used in applications in science and engineering, some time may pass before any consensus is reached.
Keywords: complex wave functions not necessary, frequency distributions instead of wave functions, information carried by light, sketch graph of uncertainty principle
Procedia PDF Downloads 200
190 Hybrid Model: An Integration of Machine Learning with Traditional Scorecards
Authors: Golnush Masghati-Amoli, Paul Chin
Abstract:
Over recent years, with the rapid increases in data availability and computing power, Machine Learning (ML) techniques have been called on in a range of different industries for their strong predictive capability. However, the use of Machine Learning in commercial banking has been limited due to a special challenge imposed by numerous regulations that require lenders to be able to explain their analytic models, not only to regulators but often to consumers. In other words, although Machine Learning techniques enable better prediction with a higher level of accuracy, in comparison with other industries, they are adopted less frequently in commercial banking, especially for scoring purposes. This is due to the fact that Machine Learning techniques are often considered a black box and fail to provide information on why a certain risk score is given to a customer. In order to bridge this gap between the explainability and performance of Machine Learning techniques, a Hybrid Model was developed at Dun & Bradstreet that is focused on blending Machine Learning algorithms with traditional approaches such as scorecards. The Hybrid Model maximizes the efficiency of traditional scorecards by merging their practical benefits, such as explainability and the ability to input domain knowledge, with the deep insights of Machine Learning techniques, which can uncover patterns scorecard approaches cannot. First, through the development of Machine Learning models, engineered features, latent variables, and feature interactions that demonstrate high information value in the prediction of customer risk are identified. Then, these features are employed to introduce observed non-linear relationships between the explanatory and dependent variables into traditional scorecards. Moreover, instead of directly computing the Weight of Evidence (WoE) from good and bad data points, the Hybrid Model tries to match the score distribution generated by a Machine Learning algorithm, which ends up providing an estimate of the WoE for each bin. This capability helps to build powerful scorecards with sparse cases that cannot be achieved with traditional approaches. The proposed Hybrid Model is tested on different portfolios where a significant gap is observed between the performance of traditional scorecards and Machine Learning models. The results of the analysis show that the Hybrid Model can improve the performance of traditional scorecards by introducing non-linear relationships between explanatory and target variables from Machine Learning models into traditional scorecards. Also, it is observed that in some scenarios the Hybrid Model can be almost as predictive as the Machine Learning techniques while being as transparent as traditional scorecards. Therefore, it is concluded that, with the use of the Hybrid Model, Machine Learning algorithms can be used in the commercial banking industry without concerns about the difficulty of explaining the models for regulatory purposes.
Keywords: machine learning algorithms, scorecard, commercial banking, consumer risk, feature engineering
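To make the WoE idea above concrete, the sketch below contrasts the traditional count-based WoE with a WoE estimated from an ML model's per-bin score distribution; the function names and the exact matching rule are illustrative assumptions, not Dun & Bradstreet's implementation:

```python
import numpy as np

def woe_from_counts(goods, bads):
    # Traditional WoE per bin: ln(share of all goods / share of all bads).
    pct_good = goods / goods.sum()
    pct_bad = bads / bads.sum()
    return np.log(pct_good / pct_bad)

def woe_from_ml_scores(p_bad, bin_ids, n_bins):
    # Hybrid-style estimate (assumed form): use the mean ML-predicted
    # bad probability in each bin as a smoothed log-odds, so sparse
    # bins inherit the ML model's estimate instead of noisy raw counts.
    woe = np.empty(n_bins)
    for b in range(n_bins):
        p = p_bad[bin_ids == b].mean()   # mean P(bad) in bin b
        woe[b] = np.log((1.0 - p) / p)   # log-odds of "good"
    return woe

# Example: four bins of increasing risk.
goods = np.array([400.0, 300.0, 200.0, 100.0])
bads = np.array([10.0, 20.0, 40.0, 80.0])
print(woe_from_counts(goods, bads))
```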
Procedia PDF Downloads 136
189 Study on Adding Story and Seismic Strengthening of Old Masonry Buildings
Authors: Youlu Huang, Huanjun Jiang
Abstract:
A large number of old masonry buildings built in the last century still remain in the city. They present problems of poor safety, obsolescence, and non-habitability. In recent years, many old buildings have been reconstructed through renovating façades, strengthening, and adding floors. However, most projects only provide a solution for a single problem. It is difficult to comprehensively solve the problems of poor safety and lack of building functions. Therefore, a comprehensive functional renovation program of adding a reinforced concrete frame story at the bottom, via integrally lifting the building and then strengthening it, was put forward. Based on field measurement and the YJK calculation software, the seismic performance of an actual three-story masonry structure in Shanghai was identified. The results show that the material strength of the masonry is low, and the bearing capacity of some masonry walls could not meet the code requirements. The elastoplastic time history analysis of the structure was carried out using the SAP2000 software. The results show that under the 7-degree rare earthquake, the structure reaches the 'serious damage' performance level. Based on the code requirements for the stiffness ratio of the bottom frame (the lateral stiffness ratio of the transition masonry story and the frame story), the bottom frame story was designed. The integral lifting process of the masonry building is introduced based on many engineering examples. Reinforcement methods for the bottom frame structure, using a steel-reinforced mesh mortar surface layer (SRMM) and base isolators, respectively, were proposed. The time history analysis of the two kinds of structures, under the frequent earthquake, the fortification earthquake, and the rare earthquake, was conducted with SAP2000. For the bottom frame structure, the results show that the seismic response of the masonry floors is significantly reduced after strengthening by the two methods, compared to the original masonry structure. Previous earthquake disasters indicated that the bottom frame is vulnerable to serious damage under a strong earthquake. The analysis results show that under the rare earthquake, the inter-story displacement angle of the bottom frame floor meets the 1/100 limit value of the seismic code. The inter-story drift of the masonry floors for the base-isolated structure under different levels of earthquakes is similar to that of the structure with SRMM, while the base-isolated scheme better protects the bottom frame. Both reinforcement methods could significantly improve the seismic performance of the bottom frame structure.
Keywords: old buildings, adding story, seismic strengthening, seismic performance
Procedia PDF Downloads 123
188 Micromechanism of Ionization Effects on Metal/Gas Mixing Instability at Extreme Shock Compressing Conditions
Authors: Shenghong Huang, Weirong Wang, Xisheng Luo, Xinzhu Li, Xinwen Zhao
Abstract:
Understanding material mixing induced by the Richtmyer-Meshkov instability (RMI) at extreme shock compressing conditions (high-energy-density environment: P >> 100 GPa, T >> 10,000 K) is of great significance in engineering and science, for example in inertial confinement fusion (ICF), supersonic combustion, etc. Turbulent mixing induced by RMI is a kind of complex fluid dynamics, closely related to hydrodynamic conditions, thermodynamic states, material physical properties such as compressibility, strength, surface tension and viscosity, etc., as well as the initial perturbation on the interface. For phenomena at ordinary thermodynamic conditions (low-energy-density environment), many investigations have been conducted and much progress has been reported, while for mixing at extreme thermodynamic conditions, the evolution may be very different due to ionization as well as large differences in material physical properties, which is full of scientific problems and academic interest. In this investigation, a first-principles-based molecular dynamics method is applied to study metal lithium and gaseous hydrogen (Li-H2) interface mixing in the micro/meso scale regime at different shock compressing loading speeds ranging from 3 km/s to 30 km/s. It is found that: 1) Different from low-speed shock compressing cases, in high-speed shock compressing (>9 km/s) cases, a strong acceleration of the metal/gas interface after strong shock compression is observed numerically, leading to a strong phase inversion and spike growth at a relatively large linear rate. More specifically, the spike growth rate is observed to increase with shock loading speed, presenting a large discrepancy with available empirical RMI models; 2) Ionization occurs in the shock front zone in high-speed loading cases (>9 km/s). An additional local electric field, induced by the inhomogeneous diffusion of electrons and nuclei behind the shock front, is observed near the metal/gas interface, leading to a large acceleration of nuclei in this zone; 3) In conclusion, the work of the additional electric field contributes a mechanism of RMI in the micro/meso scale regime at extreme shock compressing conditions, i.e., a Rayleigh-Taylor instability (RTI) is induced by the additional electric field during the RMI mixing process, and thus a larger linear growth rate of the interface spike.
Keywords: ionization, micro/meso scale, material mixing, shock
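For context, the "available empirical RMI models" mentioned in finding 1) are typically variants of Richtmyer's impulsive linear growth rate, quoted here as standard background rather than from this work:

```latex
\dot{a} = k \, a_0^{*} \, A^{*} \, \Delta u
```

where k is the perturbation wavenumber, a0* the post-shock amplitude, A* the post-shock Atwood number, and Δu the velocity jump imparted to the interface by the shock; the reported increase of spike growth rate with loading speed departs from this picture.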
Procedia PDF Downloads 231
187 Concrete Compressive Strengths of Major Existing Buildings in Kuwait
Authors: Zafer Sakka, Husain Al-Khaiat
Abstract:
Due to social and economic considerations, owners all over the world desire to keep and use existing structures, including aging ones. However, these structures, especially valuable ones, need accurate condition assessment and proper safety evaluation. More than half of the budget spent on construction activities in developed countries is related to the repair and maintenance of these reinforced concrete (R/C) structures. Also, periodical evaluation and assessment of relatively old concrete structures are vital and imperative. If the evaluation and assessment of the structural components of a particular aging R/C structure reveal that repairs are essential for these components, these repairs should not be delayed. Delaying the repairs has the potential of losing the serviceability of the whole structure and/or causing total failure and collapse of the structure. In addition, if repairs are delayed, the cost of maintenance will skyrocket as well. It can also be concluded from the above that the assessment of existing structures needs to receive more consideration and thought from structural engineering societies and professionals. Ten major existing structures in Kuwait City that were constructed in the 1970s were assessed for structural reliability and integrity. Numerous concrete samples were extracted from the structural systems of the investigated buildings. This paper presents the results of the compressive strength tests that were conducted on the extracted cores. The results are compared for the buildings' column and beam elements and compared with the design strengths. The collected data were statistically analyzed. The average compressive strengths of the concrete cores extracted from the ten buildings showed large variation. The lowest average compressive strength for one of the buildings was 158 kg/cm². This building was deemed unsafe and economically unfeasible to repair; accordingly, it was demolished. The other buildings had average compressive strengths falling in the range of 215-317 kg/cm². Poor construction practices were the main cause of the low strengths. Although most of the drawings and information for these buildings were lost during the invasion of Kuwait in 1990, the information gathered indicated that the design strengths of the beams and columns for most of these buildings were in the range of 280-400 kg/cm². Following the study, measures were taken to rehabilitate the buildings for safety. The mean compressive strength for all cores taken from beams and columns of the ten buildings was 256.7 kg/cm², with values ranging from 139 to 394 kg/cm². For columns, the mean was 250.4 kg/cm², and the values ranged from 137 to 394 kg/cm². The mean compressive strength for the beams was higher than that of the columns: 285.9 kg/cm², with a range of 181 to 383 kg/cm². In addition to the concrete cores extracted from the ten buildings, the 28-day compressive strengths of more than 24,660 concrete cubes were collected from a major ready-mixed concrete supplier in Kuwait. The data represented four different grades of ready-mix concrete (250, 300, 350, and 400 kg/cm²) manufactured between 2003 and 2018. The average concrete compressive strength for the different concrete grades (250, 300, 350 and 400 kg/cm²) was found to be 318, 382, 453 and 504 kg/cm², respectively, and the coefficients of variation were found to be 0.138, 0.140, 0.157 and 0.131, respectively.
Keywords: concrete compressive strength, concrete structures, existing building, statistical analysis
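As a small illustration of the statistics reported above, the snippet below shows how a mean and coefficient of variation follow from raw core-test data; the sample values are hypothetical, not the paper's dataset:

```python
import numpy as np

# Hypothetical core compressive strengths (kg/cm^2), for illustration only.
strengths = np.array([181.0, 215.0, 250.0, 285.0, 317.0, 394.0, 139.0, 256.0])

mean = strengths.mean()
std = strengths.std(ddof=1)   # sample standard deviation
cov = std / mean              # coefficient of variation
print(f"mean = {mean:.1f} kg/cm2, CoV = {cov:.3f}")
```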
Procedia PDF Downloads 116
186 Developing Gifted Students’ STEM Career Interest
Authors: Wing Mui Winnie So, Tian Luo, Zeyu Han
Abstract:
To fully explore and develop the potential of gifted students systematically and strategically by providing them with opportunities to receive education at appropriate levels, schools in Hong Kong are encouraged to adopt the "Three-Tier Implementation Model" to plan and implement school-based gifted education, with Level Three referring to the provision of learning opportunities for exceptionally gifted students in the form of specialist training outside the school setting by post-secondary institutions, non-government organisations, professional bodies, and technology enterprises. Due to the growing concern worldwide about low interest among students in pursuing STEM (Science, Technology, Engineering, and Mathematics) careers, cultivating and boosting STEM career interest has been an emerging research focus. Although numerous studies have explored its critical contributors, little research has examined the effectiveness of comprehensive interventions such as 'studying with STEM professionals'. This study aims to examine the effect on gifted students' career interest of participation in an off-school support programme designed and supervised by a team of STEM educators and STEM professionals from a university. Gifted students were provided with opportunities and tasks to experience STEM career topics that are not included in the school syllabus, and to experience how to think and work like a STEM professional in their learning. Participants were 40 primary school students joining the intervention programme outside the normal school setting. Research methods included the STEM career interest survey and drawing tasks supplemented with writing, administered before and after the programme, as well as interviews before the end of the programme. The semi-structured interviews focused on students' views regarding STEM professionals; what it is like to learn with a STEM professional; what it is like to work and think like a STEM professional; and students' STEM identity and career interest. The changes in gifted students' STEM career interest and its well-recognised significant contributors, for example, STEM stereotypes, self-efficacy for STEM activities, and STEM outcome expectations, were collectively examined from the pre- and post-surveys using a t-test. Thematic analysis was conducted on the interview records to explore how the studying-with-a-STEM-professional intervention can help students understand STEM careers, build STEM identity, and learn how to think and work like a STEM professional. Results indicated a significant difference in STEM career interest before and after the intervention. The influencing mechanism was also identified from the measurement of the related contributors and the analysis of drawings and interviews. The potential of off-school support programmes supervised by STEM educators and professionals to develop gifted students' STEM career interest merits further exploration in future research and practice.
Keywords: gifted students, STEM career, STEM education, STEM professionals
Procedia PDF Downloads 76
185 Implicit U-Net Enhanced Fourier Neural Operator for Long-Term Dynamics Prediction in Turbulence
Authors: Zhijie Li, Wenhui Peng, Zelong Yuan, Jianchun Wang
Abstract:
Turbulence is a complex phenomenon that plays a crucial role in various fields, such as engineering, atmospheric science, and fluid dynamics. Predicting and understanding its behavior over long time scales have been challenging tasks. Traditional methods, such as large-eddy simulation (LES), have provided valuable insights but are computationally expensive. In the past few years, machine learning methods have experienced rapid development, leading to significant improvements in computational speed. However, ensuring stable and accurate long-term predictions remains a challenging task for these methods. In this study, we introduce the implicit U-Net enhanced Fourier neural operator (IU-FNO) as a solution for stable and efficient long-term predictions of the nonlinear dynamics in three-dimensional (3D) turbulence. The IU-FNO model combines implicit recurrent Fourier layers to deepen the network and incorporates the U-Net architecture to accurately capture small-scale flow structures. We evaluate the performance of the IU-FNO model through extensive large-eddy simulations of three types of 3D turbulence: forced homogeneous isotropic turbulence (HIT), a temporally evolving turbulent mixing layer, and decaying homogeneous isotropic turbulence. The results demonstrate that the IU-FNO model outperforms other FNO-based models, including the vanilla FNO, the implicit FNO (IFNO), and the U-Net enhanced FNO (U-FNO), as well as the dynamic Smagorinsky model (DSM), in predicting various turbulence statistics. Specifically, the IU-FNO model exhibits improved accuracy in predicting the velocity spectrum, probability density functions (PDFs) of vorticity and velocity increments, and instantaneous spatial structures of the flow field. Furthermore, the IU-FNO model addresses the stability issues encountered in long-term predictions, which were limitations of previous FNO models. In addition to its superior performance, the IU-FNO model offers faster computational speed compared to traditional large-eddy simulations using the DSM model. It also demonstrates generalization capabilities to higher Taylor-Reynolds numbers and unseen flow regimes, such as decaying turbulence. Overall, the IU-FNO model presents a promising approach for long-term dynamics prediction in 3D turbulence, providing improved accuracy, stability, and computational efficiency compared to existing methods.
Keywords: data-driven, Fourier neural operator, large eddy simulation, fluid dynamics
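To make the "implicit recurrent Fourier layers" idea concrete, here is a minimal 1D NumPy sketch (not the authors' code): a single spectral-convolution layer is reused across iterations, deepening the network without adding parameters; the U-Net branch and all 3D details are omitted:

```python
import numpy as np

def fourier_layer(u, weights, modes):
    # Spectral convolution: transform, scale the lowest `modes`
    # Fourier coefficients by learned weights, transform back.
    u_hat = np.fft.rfft(u)
    out_hat = np.zeros_like(u_hat)
    out_hat[:modes] = weights[:modes] * u_hat[:modes]
    return np.fft.irfft(out_hat, n=u.size)

def implicit_fno(u, weights, modes, n_iters=4):
    # Implicit recurrence: the SAME layer is applied repeatedly
    # (shared weights), the mechanism IU-FNO inherits from IFNO.
    for _ in range(n_iters):
        u = u + fourier_layer(u, weights, modes)  # residual update
    return u

u0 = np.sin(np.linspace(0.0, 2.0 * np.pi, 64, endpoint=False))
w = 0.1 * np.ones(8, dtype=complex)
print(implicit_fno(u0, w, modes=8).shape)  # (64,)
```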
Procedia PDF Downloads 74
184 Numerical Simulation of Waves Interaction with a Free Floating Body by MPS Method
Authors: Guoyu Wang, Meilian Zhang, Chunhui Li, Bing Ren
Abstract:
In recent decades, a variety of floating structures have played a crucial role in ocean and marine engineering, such as ships, offshore platforms, floating breakwaters, fish farms, and floating airports. It is common for floating structures to suffer loading under waves, and the responses of structures mounted in marine environments relate significantly to wave impacts. The interaction between surface waves and floating structures is one of the important issues in ship or marine structure design for increasing performance and efficiency. With the progress of computational fluid dynamics, a number of numerical models based on the NS equations in the time domain have been developed to explore the above problem, such as the finite difference method or the finite volume method. Those traditional numerical simulation techniques for moving bodies are grid-based, which may encounter some difficulties when treating a large free surface deformation and a moving boundary. In these models, the moving structures in a Lagrangian formulation need to be appropriately described in grids, and special treatment of the moving boundary is inevitable. Nevertheless, in mesh-based models, the movement of the grid near the structure or the communication between the moving Lagrangian structure and the Eulerian meshes increases the algorithmic complexity. Fortunately, these challenges can be avoided by meshless particle methods. In the present study, a moving particle semi-implicit (MPS) model is explored for the numerical simulation of fluid-structure interaction with surface flows, especially for the coupling of fluid and a moving rigid body. The equivalent momentum transfer method is proposed and derived for the coupling of the fluid and the rigid moving body. The structure is discretized into a group of solid particles, which are assumed to be fluid particles involved in solving the NS equations together with the surrounding fluid particles. Momentum conservation is ensured by the transfer from those fluid particles to the corresponding solid particles. Then, the positions of the solid particles are updated to keep the initial shape of the structure. Using the proposed method, the motions of a free-floating body in regular waves are numerically studied. The wave surface elevation and the dynamic response of the floating body are presented. There is good agreement when the numerical results, such as the sway, heave, and roll of the floating body, are compared with experimental and other numerical data. It is demonstrated that the presented MPS model is effective for the numerical simulation of fluid-structure interaction.
Keywords: floating body, fluid structure interaction, MPS, particle method, waves
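The equivalent momentum transfer step described above can be sketched as follows; this is a minimal 2D illustration of the idea (solid particles advanced as fluid, their momentum collected into one rigid-body motion), with all array names assumed, not the authors' 3D implementation:

```python
import numpy as np

def rigid_body_update(pos, vel, mass, solid_idx, dt):
    # Gather the momentum the fluid solver gave to the solid particles...
    m = mass[solid_idx]
    r = pos[solid_idx]
    v = vel[solid_idx]
    centroid = (m[:, None] * r).sum(axis=0) / m.sum()
    v_cm = (m[:, None] * v).sum(axis=0) / m.sum()              # linear part
    rr = r - centroid
    L = (m * (rr[:, 0] * v[:, 1] - rr[:, 1] * v[:, 0])).sum()  # angular momentum
    I = (m * (rr ** 2).sum(axis=1)).sum()                      # moment of inertia
    omega = L / I
    # ...then overwrite their velocities with the rigid-body field,
    # so the structure moves as one body and keeps its initial shape.
    vel[solid_idx, 0] = v_cm[0] - omega * rr[:, 1]
    vel[solid_idx, 1] = v_cm[1] + omega * rr[:, 0]
    pos[solid_idx] = r + vel[solid_idx] * dt
    return pos, vel
```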
Procedia PDF Downloads 76
183 Mechanical and Material Characterization on the High Nitrogen Supersaturated Tool Steels for Die-Technology
Authors: Tatsuhiko Aizawa, Hiroshi Morita
Abstract:
Tool steels such as SKD11 and SKH51 have been utilized as punch and die substrates for cold stamping, forging, and fine blanking processes. Heat-treated SKD11 punches with a hardness of 700 HV worked well in the stamping of SPCC, normal steel plates, and non-ferrous alloys such as brass sheet. However, they suffered severe damage in the fine blanking of holes smaller than 1.5 mm in diameter. Under a high aspect ratio of punch length to diameter, elastoplastic buckling of slender punches occurred on the production line. The heat-treated punches were also at risk of chipping at their edges. To be free from such damage, the blanking punch must have sufficient rigidity and strength at the same time. In the present paper, a small-hole blanking punch with a dual-toughness structure is proposed to provide a solution to this engineering issue in production. A low-temperature plasma nitriding process was utilized to form a thick nitrogen-supersaturated layer in the original SKD11 punch. Through plasma nitriding at 673 K for 14.4 ks, a nitrogen-supersaturated layer, 50 μm thick and without nitride precipitates, was formed as a high nitrogen steel (HNS) layer surrounding the original SKD11 punch. In this two-zone structured SKD11 punch, the surface hardness increased from 700 HV for the heat-treated SKD11 to 1400 HV. This outer high nitrogen SKD11 (HN-SKD11) layer had a homogeneous nitrogen solute depth profile, with a nitrogen solute content plateau of 4 mass% down to the border between the outer HN-SKD11 layer and the original SKD11 matrix. When stamping a 1 mm thick brass sheet using this dually toughened SKD11 punch, the punch life was extended from 500k shots to 10,000k shots, attaining a much more stable production line for yielding brass American snaps. Furthermore, with the aid of a masking technique, the punch side-surface layer, 50 μm thick, was modified by this high nitrogen supersaturation process to have a stripe structure in which un-nitrided SKD11 and HN-SKD11 layers alternate from the punch head to the punch bottom. This flexible structuring promoted the mechanical integrity of total rigidity and toughness for a punch with an extremely small diameter.
Keywords: high nitrogen supersaturation, semi-dry cold stamping, solid solution hardening, tool steel dies, low temperature nitriding, dual toughness structure, extremely small diameter punch
Procedia PDF Downloads 89
182 Learning Instructional Management between the Problem-Based Learning and STEM Education Methods for Enhancing Students' Learning Achievements and Their Science Attitudes toward Physics at the 12th Grade Level
Authors: Achirawatt Tungsombatsanti, Toansakul Santiboon, Kamon Ponkham
Abstract:
The STEM education strategy aims to prepare an interdisciplinary and applied approach to the instruction of science, technology, engineering, and mathematics in an integrated way, enhancing students' engagement with science skills; it was compared with the Problem-Based Learning (PBL) method in Borabu School, with a sample consisting of 80 students in 2 classes at the 12th grade level, on their learning achievements on the electromagnetism topic. The study administered two different instructional model groups: the 40-student experimental group was taught with the STEM instructional design of experimental preparation and induction, while the control group, using PBL, was designed so that students identify what they already know, what they need to know, and how and where to access new information that may lead to the resolution of the problem. Learning environment perceptions were obtained using the 35-item Physics Laboratory Environment Inventory (PLEI). Students' attitudes toward physics were assessed with the Test Of Physics-Related Attitude (TOPRA); scaling was applied in an attempt to measure attitudes objectively, and the TOPRA was used to assess students' perceptions of their science attitudes toward physics. Pretest and posttest comparisons assessed students' learning achievements under each instructional model separately. The findings reveal that the efficiencies of the PBL and STEM methods, based on the set criteria, are higher than the standard level of 80/80. Students' learning achievements in the control and experimental physics class groups, under the PBL and STEM instructional designs respectively, differed significantly between groups at the .05 level. Comparisons show that the average mean scores of students' responses to their instructional activities in the STEM education method are higher than those of the PBL model. Regarding associations between students' perceptions of their physics classes and their attitudes toward physics, the predictive efficiency R² values indicate that 77% and 83% of the variance in students' attitudes on the PLEI and the TOPRA, respectively, were attributable to their perceptions of their PBL and STEM instructional design classes. An important contribution of these findings concerns student understanding of scientific concepts, attitudes, and skills, with STEM instruction producing higher responses than PBL teaching. Students' learning achievements also differed significantly between pre- and post-assessments overall for the two instructional models.
Keywords: learning instructional management, problem-based learning, STEM education method, enhancement, students' learning achievements, science attitude, physics classes
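For reference, the "predictive efficiency R²" cited above is the standard proportion of variance explained; the study's exact regression setup is not given, so this is quoted as the usual definition:

```latex
R^2 = 1 - \frac{\sum_i \left( y_i - \hat{y}_i \right)^2}{\sum_i \left( y_i - \bar{y} \right)^2}
```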
Procedia PDF Downloads 230
181 Finite Element Analysis of the Drive Shaft and Jacking Frame Interaction in Micro-Tunneling Method: Case Study of Tehran Sewerage
Authors: B. Mohammadi, A. Riazati, P. Soltan Sanjari, S. Azimbeik
Abstract:
The ever-increasing development of civic demands on one hand, and the urban constraints on newly established infrastructure on the other, force engineering committees to apply non-conflicting methods in order to optimize the results. One of these optimized procedures for establishing main sewerage networks is the pipe jacking and micro-tunneling method. The raw information and research are based on the experiments of the slurry micro-tunneling project of the Tehran main sewerage network, executed by KAYSON Co. The 4,985-meter route of the mentioned project, located near Azadi Square and the most vital arteries of Tehran, has currently reached 45% physical progress. The boring machine is made by Herrenknecht, and the diameters of the concrete-polymer pipes used are 1600 and 1800 millimeters. Placing and excavating several shafts in the ground and direct tunnel boring between the axes of these shafts are requirements of micro-tunneling. Positioning of the ground-located shafts should take into account the hydraulic circumstances, civic conditions, site geography, traffic considerations, etc. The profile length has to be converted into many short segment lines so that the angles generated between the segments are based at the manhole centers. Each segment line between two consecutive drive and receive shafts defines the jack location, driving angle, and path straightness; thus, the diversity of angles causes a variety of jack positions in the shaft. The jacking frame fixing conditions and the associated dynamic load direction produce various patterns of stress and strain distribution, creating fatigue in the shaft wall and the soil surrounding the shaft. This pattern diversification deforms the shaft wall and causes unbalanced subsidence and alteration of the pipe jacking stress contour. This research is based on the experiments of Tehran's west sewerage plan and numerical analysis of the interaction between the soil around the shaft, the shaft walls, and the jacking frame direction; finally, the suitable or unsuitable locations of the pipe jacking shaft will be determined.
Keywords: underground structure, micro-tunneling, fatigue analysis, dynamic-soil–structure interaction, underground water, finite element analysis
Procedia PDF Downloads 320
180 Nanofluidic Cell for Resolution Improvement of Liquid Transmission Electron Microscopy
Authors: Deybith Venegas-Rojas, Sercan Keskin, Svenja Riekeberg, Sana Azim, Stephanie Manz, R. J. Dwayne Miller, Hoc Khiem Trieu
Abstract:
Liquid Transmission Electron Microscopy (TEM) is a growing area with a broad range of applications from physics and chemistry to materials engineering and biology, in which it is possible to image unseen phenomena in situ. For this, a nanofluidic device is used to insert the nanoflow with the sample inside the microscope, keeping the liquid encapsulated against the high vacuum. In recent years, Si3N4 windows have been widely used because of their mechanical stability and low imaging contrast. Nevertheless, the pressure difference between the fluid inside and the vacuum outside in the TEM causes the windows to bulge. This increases the imaged fluid volume, which decreases the signal-to-noise ratio (SNR), limiting the achievable spatial resolution. With the proposed device, the membrane is fortified with a microstructure capable of withstanding higher pressure differences and almost completely eliminating the bulging. A theoretical study is presented with Finite Element Method (FEM) simulations, which provide a deep understanding of the membrane's mechanical conditions and prove the effectiveness of this novel concept. Bulging and von Mises stress were studied for different membrane dimensions, geometries, materials, and thicknesses. The microfabrication of the device was done with a thin wafer coated with thin layers of SiO2 and Si3N4. After the lithography process, these layers were etched (reactive ion etching and buffered oxide etch (BOE), respectively). After that, the microstructure was etched (deep reactive ion etching). Then the back-side SiO2 was etched (BOE), and the array of free-standing micro-windows was obtained. Additionally, a Pyrex wafer was patterned with windows and inlets/outlets and bonded (anodic bonding) to the Si side to facilitate handling of the thin wafer. Later, a thin spacer was sputtered and patterned with microchannels and trenches to guide the nanoflow with the samples. This approach considerably reduces the common bulging problem of the window, improving the SNR, contrast, and spatial resolution, and substantially increasing the mechanical stability of the windows, allowing a larger viewing area. These developments lead to a wider range of applications of liquid TEM, expanding the spectrum of possible experiments in the field.
Keywords: liquid cell, liquid transmission electron microscopy, nanofluidics, nanofluidic cell, thin films
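As a first-order check on why window span dominates bulging, classical small-deflection theory for a clamped square plate of side a and thickness t under pressure p gives approximately (a standard result, not a formula from the paper):

```latex
w_{\max} \approx 0.00126 \, \frac{p \, a^4}{D},
\qquad
D = \frac{E \, t^3}{12 \left( 1 - \nu^2 \right)}
```

The fourth-power dependence on a explains why subdividing the membrane into an array of small micro-windows suppresses the bulging so effectively.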
Procedia PDF Downloads 255
179 Sweepline Algorithm for Voronoi Diagram of Polygonal Sites
Authors: Dmitry A. Koptelov, Leonid M. Mestetskiy
Abstract:
The Voronoi Diagram (VD) of a finite set of disjoint simple polygons, called sites, is a partition of the plane into loci (one locus for each site) – regions consisting of points that are closer to a given site than to all others. A set of polygons is a universal model for many applications in engineering, geoinformatics, design, computer vision, and graphics. Construction of the VD of polygons is usually done by reduction to the task of constructing the VD of segments, for which there are efficient O(n log n) algorithms for n segments. Preprocessing – constructing segments from the polygons' sides – and postprocessing – constructing each polygon's locus by merging the loci of its sides – are also included in the reduction. This approach does not take into account two specific properties of the resulting segment sites. Firstly, all these segments are connected in pairs at the vertices of the polygons. Secondly, on one side of each segment lies the interior of the polygon. The polygon is obviously included in its locus. Using these properties in the VD construction algorithm is a resource for reducing computation. This article proposes an algorithm for the direct construction of the VD of polygonal sites. The algorithm is based on the sweepline paradigm, which makes it possible to exploit these properties effectively. The solution is performed via reduction. Preprocessing constructs the set of sites from the vertices and edges of the polygons. Each site has an orientation such that the interior of the polygon lies to the left of it. The proposed algorithm constructs the VD for the set of oriented sites with the sweepline paradigm. Postprocessing selects the edges of this VD formed by the centers of empty circles touching different polygons. The improved efficiency of the proposed sweepline algorithm in comparison with the general Fortune algorithm is achieved through the following fundamental solutions: 1. The algorithm constructs only those VD edges which lie outside the polygons. The concept of oriented sites makes it possible to avoid constructing VD edges located inside the polygons. 2. The list of events in the sweepline algorithm has a special property: the majority of events are connected with "medium" polygon vertices, where one incident polygon side lies behind the sweepline and the other in front of it. The proposed algorithm processes such events in constant time, not in logarithmic time as in the general Fortune algorithm. The proposed algorithm is fully implemented and tested on a large number of examples. The high reliability and efficiency of the algorithm are also confirmed by computational experiments with complex sets of several thousand polygons. It should be noted that, despite the considerable time that has passed since the publication of Fortune's algorithm in 1986, a full-scale implementation of this algorithm for an arbitrary set of segment sites has not been made. The proposed algorithm fills this gap for an important special case – a set of sites formed by polygons.
Keywords: Voronoi diagram, sweepline, polygon sites, Fortune's algorithm, segment sites
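A minimal sketch of the oriented-site representation described above (illustrative Python, not the authors' implementation):

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class OrientedSite:
    # A polygon side as an oriented segment p -> q: the polygon
    # interior lies to the LEFT, so VD edges that would fall inside
    # the polygon never need to be constructed.
    p: tuple
    q: tuple

def sites_from_polygon(vertices):
    # Vertices given in counter-clockwise order; consecutive sites
    # share a vertex, the property that lets "medium" vertex events
    # be processed in constant time.
    n = len(vertices)
    return [OrientedSite(vertices[i], vertices[(i + 1) % n])
            for i in range(n)]

print(sites_from_polygon([(0, 0), (4, 0), (4, 3)]))
```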
Procedia PDF Downloads 177
178 Towards Accurate Velocity Profile Models in Turbulent Open-Channel Flows: Improved Eddy Viscosity Formulation
Authors: W. Meron Mebrahtu, R. Absi
Abstract:
Velocity distribution in turbulent open-channel flows is organized in a complex manner. This is due to the large spatial and temporal variability of fluid motion resulting from the free-surface turbulent flow condition. The phenomenon is complicated further by the complex geometry of channels and the presence of transported solids. Thus, several efforts have been made to understand the phenomenon and obtain accurate mathematical models that are suitable for engineering applications. However, predictions are inaccurate because oversimplified assumptions are involved in modeling this complex phenomenon. Therefore, the aim of this work is to study velocity distribution profiles and obtain simple, more accurate, and predictive mathematical models. Particular focus is placed on acceptable simplification of the general transport equations and an accurate representation of eddy viscosity. A wide rectangular open channel seems suitable to begin the study; the other assumptions are smooth walls and sediment-free flow under steady and uniform flow conditions. These assumptions allow examining the effect of the bottom wall and the free surface only, which is a necessary step before dealing with more complex flow scenarios. For this flow condition, two ordinary differential equations are obtained for the velocity profiles: one from the Reynolds-averaged Navier-Stokes (RANS) equation and one from the equilibrium between turbulent kinetic energy (TKE) production and dissipation. Then, different analytic models for eddy viscosity, TKE, and mixing length were assessed. Computed velocity profiles were compared to experimental data for different flow conditions and to the well-known linear, log, and log-wake laws. Results show that the model based on the RANS equation provides more accurate velocity profiles. In the viscous sublayer and buffer layer, the method based on Prandtl's eddy viscosity model and the Van Driest mixing length gives a more precise result. For the log layer and outer region, a mixing length equation derived from Von Karman's similarity hypothesis provides the best agreement with measured data, except near the free surface, where an additional correction based on a damping function for eddy viscosity is used. This method allows more accurate velocity profiles with the same value of the damping coefficient, valid under different flow conditions. This work continues with the investigation of narrow channels, complex geometries, and the effect of solids transported in sewers.
Keywords: accuracy, eddy viscosity, sewers, velocity profile
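For reference, the inner-region closure named above can be written in its standard form (the constants are typical literature values, not taken from this abstract):

```latex
\nu_t = l_m^2 \left| \frac{du}{dy} \right|,
\qquad
l_m = \kappa \, y \left( 1 - e^{-y^+ / A^+} \right)
```

with the Von Karman constant κ ≈ 0.41 and the Van Driest damping constant A⁺ ≈ 26; away from the wall, this closure recovers the classical log law u⁺ = (1/κ) ln y⁺ + B with B ≈ 5.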
Procedia PDF Downloads 112
177 The 2017 Summer Campaign for Night Sky Brightness Measurements on the Tuscan Coast
Authors: Andrea Giacomelli, Luciano Massetti, Elena Maggi, Antonio Raschi
Abstract:
The presentation will report the activities managed during the summer of 2017 by a team composed of staff from a university department, a National Research Council institute, and an outreach NGO, collecting measurements of night sky brightness and other information on artificial lighting in order to characterize light pollution issues on portions of the Tuscan coast, in Central Italy. These activities combine measurements collected by the principal scientists, citizen science observations led by students, and outreach events targeting a broad audience. The campaign aggregates the efforts of three actors: the BuioMetria Partecipativa project, which started collecting light pollution data on a national scale in 2008 with an environmental engineering and free/open source GIS core team; the Institute of Biometeorology of the National Research Council, with ongoing studies on light and urban vegetation and a consolidated track record in environmental education and citizen science; and the Department of Biology of the University of Pisa, which started experiments to assess the impact of light pollution in coastal environments in 2015. While the core of the activities concerns in situ data, the campaign will also account for remote sensing data, thus considering heterogeneous data sources. The aim of the campaign is twofold: (1) to test actions of citizen and student engagement in monitoring sky brightness and (2) to collect night sky brightness data and test a protocol for applications to studies on the ecological impact of light pollution, with a special focus on marine coastal ecosystems. The collaboration of an interdisciplinary team in the study of artificial lighting issues is not a common case in Italy, and the possibility of undertaking the campaign in Tuscany has the added value of operating in one of the territories where it is possible to observe both sites with extremely high lighting levels and areas with extremely low light pollution, especially in the southern part of the region. Combining environmental monitoring and communication actions in the context of the campaign, this effort will contribute to the promotion of good-quality night skies as an important asset for the sustainability of coastal ecosystems, as well as to increasing citizen awareness through star gazing, night photography, and active participation in field campaign measurements.
Keywords: citizen science, light pollution, marine coastal biodiversity, environmental education
Procedia PDF Downloads 174
176 Self-Sensing Concrete Nanocomposites for Smart Structures
Authors: A. D'Alessandro, F. Ubertini, A. L. Materazzi
Abstract:
In the field of civil engineering, Structural Health Monitoring is a topic of growing interest. Effective monitoring instruments permit control of the working conditions of structures and infrastructure through the identification of behavioral anomalies due to incipient damage, especially in areas of high environmental hazard such as earthquake zones. While traditional sensors can be applied only at a limited number of points, providing partial information for a structural diagnosis, novel transducers may allow diffuse sensing. Thanks to the new tools and materials provided by nanotechnology, new types of multifunctional sensors are emerging in the scientific panorama. In particular, cement-matrix composite materials capable of diagnosing their own state of strain and stress can be obtained by the addition of specific conductive nanofillers. Because of the nature of the material they are made of, these new cementitious nano-modified transducers can be inserted within concrete elements, transforming the structures themselves into sets of widespread sensors. This paper presents the results of research on a new self-sensing nanocomposite and on the implementation of smart sensors for Structural Health Monitoring. The developed nanocomposite was obtained by inserting multi-walled carbon nanotubes within a cementitious matrix. The insertion of such conductive carbon nanofillers provides the base material with piezoresistive characteristics and peculiar sensitivity to mechanical modifications. The self-sensing ability is achieved by correlating the variation of the external stress or strain with the variation of some electrical properties, such as the electrical resistance or conductivity. Through the measurement of such electrical characteristics, the performance and working conditions of an element or a structure can be monitored. Among conductive carbon nanofillers, carbon nanotubes seem particularly promising for the realization of self-sensing cement-matrix materials. Some issues related to nanofiller dispersion or to the influence of the nano-inclusion content in the cement matrix need to be carefully investigated, since the strain sensitivity of the resulting sensors is influenced by such factors. This work analyzes the dispersion of the carbon nanofillers, the physical properties of the fresh dough, the electrical properties of the hardened composites, and the sensing properties of the realized sensors. The experimental campaign focuses specifically on their dynamic characterization and their applicability to the monitoring of full-scale elements. The results of the electromechanical tests with both slowly varying and dynamic loads show that the developed nanocomposite sensors can be effectively used for the health monitoring of structures.
Keywords: carbon nanotubes, self-sensing nanocomposites, smart cement-matrix sensors, structural health monitoring
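The strain sensitivity mentioned above is usually quantified by a gauge factor relating the fractional resistance change to the applied strain (standard definition):

```latex
\lambda = \frac{\Delta R / R_0}{\varepsilon}
```

where R0 is the unstrained electrical resistance, ΔR its variation under load, and ε the axial strain; a higher λ means a more sensitive self-sensing composite.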
Procedia PDF Downloads 229
175 Progress Toward More Resilient Infrastructures
Authors: Amir Golalipour
Abstract:
In recent years, resilience has emerged as an important topic in transportation infrastructure practice, planning, and design to address the myriad stressors of future climate facing the Nation. Climate change has increased the frequency of extreme weather events and also causes climate and weather patterns to diverge from historic trends, culminating in circumstances where transportation infrastructure and assets are operating outside the scope of their design. To design and maintain transportation infrastructure that can continue meeting objectives over the infrastructure's design life, these systems must be made adaptable to the changing climate by incorporating resilience wherever practically and financially feasible. This study is focused on adaptation strategies and the incorporation of resilience in infrastructure construction, maintenance, rehabilitation, and preservation processes. The study includes highlights from some of the recent FHWA activities on resilience. It describes existing resilience planning and decision-making practices related to transportation infrastructure; mechanisms to identify, analyze, and prioritize adaptation options; and the strain that future climate and extreme weather event pressures place on existing transportation assets, including the stressors these systems face for both single and combined stressor scenarios. Results of two case studies from the Transportation Engineering Approaches to Climate Resiliency (TEACR) projects, with a focus on temperature and precipitation impacts on transportation infrastructure, will be presented. These case studies looked at infrastructure performance using future temperature and precipitation compared to traditional climate design parameters. The research team used the adaptation decision-making assessment and the Coupled Model Intercomparison Project (CMIP) processing tool to determine which solution is best to pursue. The CMIP tool provided projected climate data for temperature and precipitation, which could then be incorporated into the design procedure to estimate performance. As a result, using the future climate scenarios would impact the design. These changes were noted to cause only a slight increase in costs; however, it is acknowledged that network-wide these costs could be significant. This study will also focus on what has been learned from recent storms, floods, and climate-related events that will help us be better prepared to ensure our communities have a resilient transportation network. It should be highlighted that standardized mechanisms to incorporate resilience practices are required to encourage widespread implementation, mitigate the effects of climate stressors, and ensure the continuance of transportation systems and assets in an evolving climate.
Keywords: adaptation strategies, extreme events, resilience, transportation infrastructure
Procedia PDF Downloads 9
174 Functional Surfaces and Edges for Cutting and Forming Tools Created Using Directed Energy Deposition
Authors: Michal Brazda, Miroslav Urbanek, Martina Koukolikova
Abstract:
This work focuses on the development of functional surfaces and edges for cutting and forming tools created through Directed Energy Deposition (DED) technology. In the context of the growing challenges of modern engineering, additive technologies, especially DED, present an innovative approach to manufacturing tools for forming and cutting. One of the key features of DED is its ability to precisely and efficiently deposit fully dense metals from powder feedstock, enabling the creation of complex geometries and optimized designs. It is gradually becoming an increasingly attractive choice for tool production due to its ability to achieve high precision while simultaneously minimizing waste and material costs. Tools created using DED technology gain significant durability through the utilization of high-performance materials such as nickel alloys and tool steels. For high-temperature applications, Nimonic 80A alloy is applied, while for cold applications, M2 tool steel is used. The addition of ceramic materials, such as tungsten carbide, can significantly increase the tool's wear resistance. The introduction of functionally graded materials is a significant contribution, opening up new possibilities for gradual changes in the mechanical properties of the tool and optimizing its performance in different sections according to specific requirements. This work gives an overview of individual applications and their utilization in industry. Microstructural analyses have been conducted, providing detailed insights into the structure of individual components, alongside examinations of the mechanical properties and tool life. These analyses offer a deeper understanding of the efficiency and reliability of the created tools, which is a key element for successful development in the field of cutting and forming tools. Producing functional surfaces and edges using DED technology can result in financial savings, as the entire tool does not have to be manufactured from expensive special alloys. The tool can be made from common steel, onto which a functional surface of special materials is applied. DED also allows for tool repair after wear and tear, eliminating the need to produce a new part, contributing to overall cost savings while reducing the environmental footprint. Overall, the combination of DED technology, functionally graded materials, and verified technologies sets a new standard for the innovative and efficient development of cutting and forming tools in the modern industrial environment.
Keywords: additive manufacturing, directed energy deposition, DED, laser, cutting tools, forming tools, steel, nickel alloy
Procedia PDF Downloads 51
173 Ethicality of Algorithmic Pricing and Consumers’ Resistance
Authors: Zainab Atia, Hongwei He, Panagiotis Sarantopoulos
Abstract:
Over the past few years, firms have witnessed a massive increase in sophisticated algorithmic deployment, which has become quite pervasive in today's modern society. With the wide availability of data, the ability to track consumers using algorithmic pricing has become an integral option for retailers on online platforms. As more companies transform their businesses and rely more on massive technological advancement, algorithmic pricing systems have attracted attention and seen wide adoption, with many accompanying benefits and challenges found in their usage. With the overall aim of increasing profits, algorithmic pricing is becoming a sound option for organizations by enabling suppliers to cut costs, allowing better services, improving efficiency and product availability, and enhancing overall consumer experiences. The adoption of algorithms in retail has been pioneered and widely studied in the literature across varied fields, including marketing, computer science, engineering, economics, and public policy. However, what is more pressing today is a comprehensive understanding of this technology and its associated ethical influence on consumers' perceptions and behaviours. Indeed, due to algorithmic ethical concerns, consumers are found in some instances to be reluctant to share their personal data with retailers, which reduces retention and leads to negative consumer outcomes. This, in turn, raises the question of whether firms can still achieve consumer acceptance of such technologies while minimizing the ethical transgressions accompanying their deployment. As a recent, modest piece of research within the area of marketing and consumer behavior, the current study advances the literature on algorithmic pricing, pricing ethics, consumers' perceptions, and price fairness. With its empirical focus, this paper aims to contribute to the literature by applying the distinction between the two common types of algorithmic pricing, dynamic and personalized, while measuring their relative effects on consumers' behavioural outcomes. From a managerial perspective, this research offers significant implications for providing a better human-machine interactive environment (whether online or offline) to improve both businesses' overall performance and consumers' wellbeing. By allowing more transparent pricing systems, businesses can harness well-grounded ethical strategies, which foster consumer loyalty and extend post-purchase behaviour. Thus, by defining the correct balance of pricing and the right measures, whether using dynamic or personalized pricing (or both), managers can approach consumers more ethically while taking their expectations and responses into critical consideration.
Keywords: algorithmic pricing, dynamic pricing, personalized pricing, price ethicality
Procedia PDF Downloads 92
172 Analysis of Thermal Comfort in Educational Buildings Using Computer Simulation: A Case Study in Federal University of Parana, Brazil
Authors: Ana Julia C. Kfouri
Abstract:
A prerequisite of any building design is to provide security to the users, taking the climate and its physical and physical-geometrical variables into account. It is also important to highlight the relevance of the right material elements, which stand between the person and the environment and must provide improved thermal comfort conditions and low environmental impact. Furthermore, technology is constantly advancing, as are computational simulations for projects, and they should be used to develop sustainable buildings and to provide a higher quality of life for their users. In relation to comfort, the more satisfied the building users are, the better their intellectual performance will be. Based on that, the study of thermal comfort in educational buildings is of particular relevance, since the thermal characteristics in these environments are of vital importance to all users. Moreover, educational buildings are large constructions, and when they are poorly planned and executed they have negative impacts on the surrounding environment, as well as on user satisfaction, throughout their whole life cycle. In this line of thought, to evaluate university classroom conditions, a detailed case study on the thermal comfort situation at the Federal University of Parana (UFPR) was carried out. The main goal of the study is to perform a thermal analysis in three classrooms at UFPR, in order to address the subjective and physical variables that influence thermal comfort inside the classroom. For the assessment of the subjective components, a questionnaire was applied in order to evaluate the users’ perception of the local thermal conditions. Regarding the physical variables, on-site measurements were carried out, consisting of measurements of air temperature and air humidity, both inside and outside the building, as well as meteorological variables, such as wind speed and direction, solar radiation, and rainfall, collected from a weather station. Then, a computer simulation was conducted with the EnergyPlus software to reproduce the air temperature and air humidity values of the three classrooms studied. The EnergyPlus outputs were analyzed and compared with the on-site measurement results in order to draw conclusions about the local thermal conditions. The methodological approach adopted in the study allowed a distinct perspective on an educational building to better understand the classroom thermal performance, as well as the reasons for such behavior. Finally, the study induces a reflection about the importance of thermal comfort for educational buildings and proposes thermal alternatives for future projects, as well as a discussion about the significant impact of using computer simulation in engineering solutions, in order to improve the thermal performance of UFPR’s buildings.
Keywords: computer simulation, educational buildings, EnergyPlus, humidity, temperature, thermal comfort
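The comparison between EnergyPlus outputs and on-site measurements is typically summarized with standard agreement statistics; the sketch below computes mean bias error and the coefficient of variation of the RMSE, two metrics commonly used to judge how well a building simulation tracks measured data. This is a minimal sketch assuming paired hourly series; the readings shown are placeholders, not the study's data.

```python
import math

# Minimal sketch: agreement metrics between simulated and measured air
# temperatures (paired hourly values; data below are placeholders).

def mbe_percent(measured, simulated):
    """Mean bias error as a percentage of the measured mean."""
    diff = sum(m - s for m, s in zip(measured, simulated))
    return 100.0 * diff / sum(measured)

def cv_rmse_percent(measured, simulated):
    """Coefficient of variation of the RMSE, in percent."""
    n = len(measured)
    rmse = math.sqrt(sum((m - s) ** 2 for m, s in zip(measured, simulated)) / n)
    return 100.0 * rmse / (sum(measured) / n)

measured  = [22.1, 23.4, 24.0, 23.2]   # placeholder measured values, deg C
simulated = [21.8, 23.9, 24.3, 22.9]   # placeholder EnergyPlus output, deg C

print(f"MBE: {mbe_percent(measured, simulated):.2f}%")
print(f"CV(RMSE): {cv_rmse_percent(measured, simulated):.2f}%")
```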
Procedia PDF Downloads 388
171 An Investigation on Opportunities and Obstacles on Implementation of Building Information Modelling for Pre-fabrication in Small and Medium Sized Construction Companies in Germany: A Practical Approach
Authors: Nijanthan Mohan, Rolf Gross, Fabian Theis
Abstract:
The conventional method used in the construction industry often results in significant rework, since most decisions are taken on site under the pressure of project deadlines and due to improper information flow, which results in ineffective coordination. However, today’s architecture, engineering, and construction (AEC) stakeholders demand faster and more accurate deliverables, efficient buildings, and smart processes, which turns out to be a tall order. Hence, the building information modelling (BIM) concept was developed as a solution to fulfill the above-mentioned necessities. Even though BIM is successfully implemented in most of the world, it is still in the early stages in Germany, since the stakeholders are sceptical of its reliability and efficiency. Due to the huge capital requirement, small and medium-sized construction companies are still reluctant to implement the BIM workflow in their projects. The purpose of this paper is to analyse the opportunities and obstacles to implementing BIM for prefabrication. Among all the advantages of BIM, prefabrication is chosen for this paper because it plays a vital role in creating an impact on both the time and cost factors of a construction project. The positive impact of prefabrication can be explicitly observed by the project stakeholders and participants, which enables the breakthrough of the skepticism factor among small-scale construction companies. The analysis consists of the development of a process workflow for implementing prefabrication in building construction, followed by a practical approach, which was executed with two case studies. The first case study represents on-site prefabrication, and the second was done for off-site prefabrication. It was planned in such a way that the first case study gives first-hand experience with the BIM model for the workers at the site, so that they can make much use of the created BIM model, which is a better representation compared to the traditional 2D plan. The main aim of the first case study is to create confidence in the implementation of BIM models, which was followed by the execution of off-site prefabrication in the second case study. Based on the case studies, a cost and time analysis was made, and it is inferred that the implementation of BIM for prefabrication can reduce construction time and ensure minimal or no waste, better accuracy, and less problem-solving at the construction site. It is also observed that this process requires more planning time and better communication and coordination between different disciplines such as mechanical, electrical, plumbing, and architecture, which was the major obstacle to successful implementation. This paper was written from the perspective of small and medium-sized mechanical contracting companies for the private building sector in Germany.
Keywords: building information modelling, construction wastes, pre-fabrication, small and medium sized company
Procedia PDF Downloads 113
170 Effect of Fuel Type on Design Parameters and Atomization Process for Pressure Swirl Atomizer and Dual Orifice Atomizer for High Bypass Turbofan Engine
Authors: Mohamed K. Khalil, Mohamed S. Ragab
Abstract:
Atomizers are used in many engineering applications, including diesel engines, petrol engines, and spray combustion in furnaces, as well as gas turbine engines. These atomizers are used to increase the specific surface area of the fuel, which achieves a high rate of fuel mixing and evaporation. In all combustion systems, reduction in mean drop size is a challenge that brings many advantages, since it leads to rapid and easier ignition, a higher volumetric heat release rate, a wider burning range, and lower exhaust concentrations of pollutant emissions. Pressure atomizers come in different design configurations, such as the swirl atomizer (simplex), dual orifice, spill return, plain orifice, duplex, and fan spray. Simplex pressure atomizers are the most common type of all. Among all types of atomizers, pressure swirl types represent a special category, since they differ in quality of atomization, reliability of operation, simplicity of construction, and low expenditure of energy. However, the disadvantages of these atomizers are that they require very high injection pressure and have a low discharge coefficient, owing to the fact that the air core covers the majority of the atomizer orifice. To overcome these problems, the dual orifice atomizer was designed. This paper proposes a detailed mathematical design procedure for both the pressure swirl atomizer (simplex) and the dual orifice atomizer, examines the effects of varying fuel type, and makes a clear comparison between the two types. Using five types of fuel (JP-5, JA1, JP-4, Diesel, and Bio-Diesel) as a case study, it reveals the effect of changing fuel type and its properties on atomizer design and spray characteristics, which in turn affect combustion process parameters (Sauter mean diameter (SMD), spray cone angle, and sheet thickness), with the discharge coefficient varying from 0.27 to 0.35 during takeoff for high bypass turbofan engines. The spray performance of the pressure swirl fuel injector was compared to that of the dual orifice fuel injector at the same differential pressure and discharge coefficient using Excel. The results were analyzed and processed to form the final reliability results for fuel injectors in high bypass turbofan engines. The results show that the Sauter mean diameter (SMD) of the dual orifice atomizer is larger than that of the pressure swirl atomizer, the film thickness (h) of the dual orifice atomizer is less than that of the pressure swirl atomizer, and the spray cone angle (α) of the pressure swirl atomizer is larger than that of the dual orifice atomizer.
Keywords: gas turbine engines, atomization process, Sauter mean diameter, JP-5
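For the pressure-swirl case, one widely cited empirical correlation (Lefebvre) estimates the Sauter mean diameter from the liquid properties, the injection pressure differential, and the ambient air density, which is how fuel type enters the design calculation. The sketch below evaluates it for kerosene-like property values; the values are illustrative, and the paper's own design procedure may use different correlations.

```python
# Minimal sketch: Sauter mean diameter (SMD) of a pressure-swirl atomizer
# via the Lefebvre empirical correlation (SI units throughout):
#   SMD = 2.25 * sigma^0.25 * mu_L^0.25 * mdot_L^0.25 * dP^-0.5 * rho_A^-0.25
# Fuel properties below are illustrative kerosene-like values.

def smd_pressure_swirl(sigma, mu_l, mdot_l, delta_p, rho_a):
    """Estimate SMD in metres from surface tension (N/m), liquid viscosity
    (Pa.s), liquid mass flow (kg/s), injection pressure differential (Pa),
    and ambient air density (kg/m^3)."""
    return (2.25 * sigma**0.25 * mu_l**0.25 * mdot_l**0.25
            * delta_p**-0.5 * rho_a**-0.25)

smd = smd_pressure_swirl(sigma=0.027, mu_l=1.3e-3, mdot_l=0.02,
                         delta_p=1.0e6, rho_a=10.0)
print(f"SMD = {smd * 1e6:.1f} micrometres")  # tens of microns, typical for swirl atomizers
```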
Procedia PDF Downloads 167
169 Risk Based Inspection and Proactive Maintenance for Civil and Structural Assets in Oil and Gas Plants
Authors: Mohammad Nazri Mustafa, Sh Norliza Sy Salim, Pedram Hatami Abdullah
Abstract:
Civil and structural assets normally have an average design life of more than 30 years. Adding to this advantage, these assets are normally subject to a slow degradation process. Since repair and strengthening work for these assets is normally not dependent on plant shutdown, their maintenance and integrity restoration are mostly done on an “as required” and “run to failure” basis. However, unlike other industries, the exposure in the oil and gas environment is harsher as a result of corrosive soil and groundwater, chemical spills, frequent wetting and drying, icing and de-icing, steam and heat, etc. Due to this type of exposure and the increasing level of structural defects and rectification in line with the increasing age of plants, asset integrity assessment requires a more defined scope and procedures based on risk and asset criticality. This leads to the establishment of a risk-based inspection and proactive maintenance procedure for civil and structural assets. To date, there is hardly any procedure or guideline as far as integrity assessment and systematic inspection and maintenance of civil and structural assets (onshore) are concerned. Group Technical Solutions has developed a procedure and guideline that take into consideration credible failure scenarios, asset risk and criticality from a process safety and structural engineering perspective, structural importance, and modeling and analysis, among others. Detailed inspection that includes destructive and non-destructive tests (DT & NDT) and structural monitoring is also performed to quantify defects, assess their severity and impact on integrity, and identify the timeline for integrity restoration. Each defect and its credible failure scenario are assessed against the risk to people, the environment, reputation, and production loss. This technical paper is intended to share the established procedure and guideline and their execution in oil and gas plants. In line with the overall roadmap, the procedure and guideline will form part of specialized solutions to increase production and to meet the “Operational Excellence” target while extending the service life of civil and structural assets. As a result of implementation, the management of civil and structural assets is now done more systematically, and the “fire-fighting” mode of maintenance is gradually being phased out and replaced by a proactive and preventive approach. This technical paper will also set the criteria and pose the challenge to the industry for innovative repair and strengthening methods for civil and structural assets in the oil and gas environment, in line with safety, constructability, and the continuous modification and revamp of plant facilities to meet production demand.
Keywords: assets criticality, credible failure scenario, proactive and preventive maintenance, risk based inspection
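Although the paper does not publish its scoring scheme, risk-based inspection prioritization is commonly reduced to a likelihood-times-consequence product per credible failure scenario, scored across impact areas like those named above. The sketch below shows one hypothetical way such a ranking could work; the scales, asset names, and scores are illustrative assumptions, not the authors' actual procedure.

```python
# Hypothetical sketch of risk-based inspection ranking: each credible
# failure scenario is scored on likelihood (1-5) and consequence (1-5)
# across the four impact areas named in the paper. All values illustrative.

IMPACT_AREAS = ("people", "environment", "reputation", "production")

def risk_score(likelihood: int, consequences: dict) -> int:
    """Risk = likelihood x worst-case consequence across impact areas."""
    return likelihood * max(consequences[a] for a in IMPACT_AREAS)

assets = {
    "pipe rack PR-101": risk_score(4, {"people": 3, "environment": 2,
                                       "reputation": 2, "production": 5}),
    "cooling tower basin": risk_score(2, {"people": 1, "environment": 3,
                                          "reputation": 2, "production": 2}),
}

# Higher scores are inspected first; in an illustrative banding, a score
# of 15 or more might mean "inspect within one year".
for name, score in sorted(assets.items(), key=lambda kv: -kv[1]):
    print(f"{name}: risk score {score}")
```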
Procedia PDF Downloads 405
168 The Effectiveness of Multiphase Flow in Well-Control Operations
Authors: Ahmed Borg, Elsa Aristodemou, Attia Attia
Abstract:
Well control involves managing the circulating drilling fluid within the well and avoiding kicks and blowouts, as these can lead to losses of human life and drilling facilities. Current practices for well control incorporate predictions of pressure losses through computational models. Developing a realistic hydraulic model for a well control problem is a very complicated process due to the existence of a complex multiphase region, which usually contains a non-Newtonian drilling fluid, and the miscibility of formation gas in the drilling fluid. The current approaches assume an inaccurate fluid flow model within the well, which leads to incorrect pressure loss calculations. To overcome this problem, researchers have been considering more complex two-phase fluid flow models. However, even these more sophisticated two-phase models are unsuitable for applications where pressure dynamics are important, such as in managed pressure drilling. This study aims to develop and implement new fluid flow models that take into consideration the miscibility of fluids as well as their non-Newtonian properties to enable realistic kick treatment; a corresponding numerical solution method is built with an enriched data bank. The research work considers and implements models that take into consideration the effect of two phases in kick treatment for well control in conventional drilling. The software STAR-CCM+ was used for the computational studies of the important parameters describing wellbore multiphase flow: the mass flow rate, volumetric fraction, and velocity of each phase. Based on the analysis of these simulation studies, a coarser full-scale model of the wellbore, including chemical modeling, was established. The focus of the investigations was put on the near-drill-bit section. This inflow area shows certain characteristics that are dominated by the inflow conditions of the gas as well as by the configuration of the mud stream entering the annulus. Without considering the gas solubility effect, the bottom-hole pressure could be underestimated by 4.2%, while the bottom-hole temperature is overestimated by 3.2%; without considering the heat transfer effect, the bottom-hole pressure could be overestimated by 11.4% under steady flow conditions. Besides, a larger reservoir pressure leads to a larger gas fraction in the wellbore; however, reservoir pressure has a minor effect on the steady wellbore temperature. Also, as choke pressure increases, less gas will exist in the annulus in the form of free gas.
Keywords: multiphase flow, well control, STAR-CCM+, petroleum engineering and gas technology, computational fluid dynamics
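The sensitivity of bottom-hole pressure to gas in the annulus can be illustrated with a deliberately simplified homogeneous (no-slip) hydrostatic estimate: free gas lowers the mixture density and therefore the hydrostatic head. The sketch below is a single-cell illustration under that assumption, not the STAR-CCM+ model used in the study; all values are placeholders.

```python
# Simplified sketch: effect of a free-gas volume fraction on the
# hydrostatic contribution to bottom-hole pressure, assuming a
# homogeneous (no-slip) two-phase mixture. Values are illustrative.

G = 9.81  # gravitational acceleration, m/s^2

def mixture_density(rho_mud, rho_gas, gas_fraction):
    """Volume-weighted (no-slip) two-phase density in kg/m^3."""
    return (1.0 - gas_fraction) * rho_mud + gas_fraction * rho_gas

def hydrostatic_bhp(depth_m, rho_mix):
    """Hydrostatic pressure at depth, in MPa."""
    return rho_mix * G * depth_m / 1e6

depth = 3000.0  # m, illustrative well depth
no_gas   = hydrostatic_bhp(depth, mixture_density(1400.0, 50.0, 0.00))
with_gas = hydrostatic_bhp(depth, mixture_density(1400.0, 50.0, 0.05))

print(f"no free gas: {no_gas:.1f} MPa")
print(f"5% free gas: {with_gas:.1f} MPa")  # lighter column, lower BHP
```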
Procedia PDF Downloads 119
167 Reconstruction of Alveolar Bone Defects Using Bone Morphogenetic Protein 2 Mediated Rabbit Dental Pulp Stem Cells Seeded on Nano-Hydroxyapatite/Collagen/Poly(L-Lactide)
Authors: Ling-Ling E., Hong-Chen Liu, Dong-Sheng Wang, Fang Su, Xia Wu, Zhan-Ping Shi, Yan Lv, Jia-Zhu Wang
Abstract:
Objective: The objective of the present study is to evaluate the capacity of a tissue-engineered bone complex of recombinant human bone morphogenetic protein 2 (rhBMP-2) mediated dental pulp stem cells (DPSCs) and nano-hydroxyapatite/collagen/poly(L-lactide) (nHAC/PLA) to reconstruct critical-size alveolar bone defects in New Zealand rabbits. Methods: Autologous DPSCs were isolated from rabbit dental pulp tissue and expanded ex vivo to enrich DPSC numbers, and then their attachment and differentiation capability were evaluated when cultured on the culture plate or on nHAC/PLA. The alveolar bone defects were treated with nHAC/PLA, nHAC/PLA+rhBMP-2, nHAC/PLA+DPSCs, nHAC/PLA+DPSCs+rhBMP-2, or autogenous bone (AB) obtained from the iliac bone, or were left untreated as a control. X-ray imaging and polychrome sequential fluorescent labeling were performed post-operatively, and the animals were sacrificed 12 weeks after operation for histological observation and histomorphometric analysis. Results: Our results showed that DPSCs expressed STRO-1 and vimentin, and favoured osteogenesis and adipogenesis in conditioned media. DPSCs attached and spread well, and retained their osteogenic phenotypes, on nHAC/PLA. The rhBMP-2 could significantly increase the protein content, alkaline phosphatase (ALP) activity/protein, osteocalcin (OCN) content, and mineral formation of DPSCs cultured on nHAC/PLA. The X-ray images, fluorescent and histological observations, and histomorphometric analysis showed that the nHAC/PLA+DPSCs+rhBMP-2 tissue-engineered bone complex had earlier mineralization and more bone formation inside the scaffold than nHAC/PLA, nHAC/PLA+rhBMP-2, and nHAC/PLA+DPSCs, or even autologous bone. The contribution of the implanted DPSCs to new bone was detected through transfected eGFP genes. Conclusions: Our findings indicated that stem cells exist in adult rabbit dental pulp tissue. The rhBMP-2 promoted the osteogenic capability of DPSCs as a potential cell source for periodontal bone regeneration. The nHAC/PLA could serve as a good scaffold for autologous DPSC seeding, proliferation, and differentiation. The tissue-engineered bone complex with nHAC/PLA, rhBMP-2, and autologous DPSCs might be a better alternative to autologous bone for the clinical reconstruction of periodontal bone defects.
Keywords: nano-hydroxyapatite/collagen/poly (L-lactide), dental pulp stem cell, recombinant human bone morphogenetic protein, bone tissue engineering, alveolar bone
Procedia PDF Downloads 402
166 Upward Spread Forced Smoldering Phenomenon: Effects and Applications
Authors: Akshita Swaminathan, Vinayak Malhotra
Abstract:
Smoldering is one of the most persistent types of combustion, which can take place for very long periods (hours, days, months) if there is an abundance of fuel. It causes quite a notable number of accidents and is one of the prime suspects for fire and safety hazards. It can be initiated by a weaker ignition source and is more difficult to suppress than flaming combustion. Upward spread smoldering is the case in which the air flow is parallel to the direction of the smoldering front. This type of smoldering is quite uncontrollable, and hence, there is a need to study this phenomenon. As compared to flaming combustion, the smoldering phenomenon often goes unrecognised and hence is a cause of various fire accidents. A simplified experimental setup was built to study upward spread smoldering, its behaviour under varying forced flow, and its behaviour in the presence of external heat sources and alternative energy sources such as acoustic energy. Linear configurations were studied for the effects of varying forced flow on upward spread smoldering. The effect of varying forced flow on upward spread smoldering was observed and studied (i) in the presence of an external heat source and (ii) in the presence of external alternative energy sources (acoustic energy). The role of ash removal was also observed and studied. Results indicate that upward spread forced smoldering was affected by various key controlling parameters, such as the speed of the forced flow, the surface orientation, and the interspace distance (distance between the forced flow and the pilot fuel). When an external heat source was placed on either side of the pilot fuel, it was observed that the smoldering phenomenon was affected; the surface orientation and the interspace distance between the external heat sources and the pilot fuel were found to play a huge role in altering the regression rate. Lastly, by impinging an alternative energy source in the form of acoustic energy on the smoldering front, it was observed that varying frequencies affected the smoldering phenomenon in different ways, with the surface orientation again playing an important role. This project highlights the importance of fire and safety hazards and the means of better combustion for all kinds of scientific research and practical applications. The knowledge acquired from this work can be applied to various engineering systems, ranging from aircraft and spacecraft to building fires and wildfires, and can help us better understand and hence avoid such widespread fires. Various fire disasters have been recorded in aircraft due to small electric short circuits which led to smoldering fires; these eventually caused the engine to catch fire, at the cost of damage to life and property. Studying this phenomenon can help us to control, if not prevent, such disasters.
Keywords: alternative energy sources, flaming combustion, ignition, regression rate, smoldering
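The regression rate referred to throughout is simply the speed of the smolder front along the fuel sample. A minimal sketch of how it could be estimated from timed front-position readings is shown below, using a least-squares slope; the sample data are placeholders, assuming readings of the kind such an experiment would produce.

```python
# Minimal sketch: regression (smolder front spread) rate from timed
# front-position readings, via a least-squares slope. Data are placeholders.

def regression_rate(times_s, positions_mm):
    """Least-squares slope of front position vs. time, in mm/s."""
    n = len(times_s)
    t_mean = sum(times_s) / n
    x_mean = sum(positions_mm) / n
    num = sum((t - t_mean) * (x - x_mean) for t, x in zip(times_s, positions_mm))
    den = sum((t - t_mean) ** 2 for t in times_s)
    return num / den

times     = [0, 60, 120, 180, 240]        # s
positions = [0.0, 4.8, 9.9, 15.1, 19.8]   # mm, placeholder front locations

print(f"regression rate = {regression_rate(times, positions):.3f} mm/s")
```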
Procedia PDF Downloads 145
165 Interface Designer as Cultural Producer: A Dialectic Materialist Approach to the Role of Visual Designer in the Present Digital Era
Authors: Cagri Baris Kasap
Abstract:
In this study, how interface designers can be viewed as producers of culture in the current era will be interrogated from a critical theory perspective. Walter Benjamin was a German Jewish literary critical theorist who, during the 1930s, was engaged in opposing and criticizing the Nazi use of art and media. ‘The Author as Producer’ is an essay that Benjamin read at the Communist Institute for the Study of Fascism in Paris. In this essay, Benjamin relates directly to the dialectics between base and superstructure and argues that authors, normally placed within the superstructure, should consider how writing and publishing constitute production and are directly related to the base. Through it, he discusses what it could mean to see the author as a producer of his own text, as a producer of writing, understood as an ideological construct that rests on the apparatus of production and distribution. Benjamin concludes that the author must write in ways that relate to the conditions of production: he must do so in order to prepare his readers to become writers, even making this possible for them by engineering an ‘improved apparatus’, and must work toward turning consumers into producers and collaborators. In today’s world, it has become a leading business model within Web 2.0 services of multinational Internet technologies and culture industries like Amazon, Apple, and Google to transform readers, spectators, consumers, or users into collaborators and co-producers through platforms such as Facebook, YouTube, and Amazon’s CreateSpace Kindle Direct Publishing print-on-demand, e-book, and publishing platforms. However, the way this transformation happens is tightly controlled and monitored by combinations of software and hardware. In these global market monopolies, it has become increasingly difficult to gain insight into how one’s writing and collaboration are used, captured, and capitalized as a user of Facebook or Google. Through the lens of this study, it could be argued that this criticism could very well be considered by digital producers, or even by the mass of collaborators, in contemporary social networking software. How do software and design incorporate users and their collaboration? Are they truly empowered; are they put in a position where they are able to understand the apparatus and how their collaboration is part of it? Or has the apparatus become a means against the producers? Thus, when using corporate systems like Google and Facebook, iPhone and Kindle, without any control over the means of production, which is closed off by opaque interfaces and licenses that limit our rights of use and ownership, we are already the collaborators that Benjamin calls for. For example, the iPhone and the Kindle combine a specific use of technology to distribute the relations between the ‘authors’ and the ‘prodUsers’ in ways that secure their monopolistic business models by limiting the potential of the technology.
Keywords: interface designer, cultural producer, Walter Benjamin, materialist aesthetics, dialectical thinking
Procedia PDF Downloads 144
164 Eggshell Waste Bioprocessing for Sustainable Acid Phosphatase Production and Minimizing Environmental Hazards
Authors: Soad Abubakr Abdelgalil, Gaber Attia Abo-Zaid, Mohamed Mohamed Yousri Kaddah
Abstract:
Background: The Environmental Protection Agency has listed eggshell waste as the 15th most significant food industry pollution hazard. The utilization of eggshell waste as a source of renewable energy has been a hot topic in recent years. Therefore, finding a sustainable solution for the recycling and valorization of eggshell waste by investigating its potential to produce acid phosphatase (ACP) and organic acids with the newly discovered B. sonorensis was the target of the current investigation. Results: The most potent ACP-producing B. sonorensis strain, ACP2, was identified as a local bacterial strain obtained from the effluent of paper and pulp industries on the basis of molecular and morphological characterization. The use of consecutive statistical experimental approaches, Plackett-Burman design (PBD) and orthogonal central composite design (OCCD), followed by pH-uncontrolled cultivation conditions in a 7 L bench-top bioreactor, revealed an innovative medium formulation that substantially improved ACP production, reaching 216 U L⁻¹ with an ACP yield coefficient Yp/x of 18.2 and a specific growth rate (µ) of 0.1 h⁻¹. The metals Ag⁺, Sn⁺, and Cr⁺ were the most efficiently released from eggshells during the solubilization process by B. sonorensis. The uncontrolled-pH culture condition is the most suitable and favored setting for improving ACP and organic acid production simultaneously. Quantitative and qualitative analyses of the produced organic acids were carried out using liquid chromatography-tandem mass spectrometry (LC-MS/MS). Lactic acid, citric acid, and a hydroxybenzoic acid isomer were the most common organic acids produced throughout the cultivation process. The findings of thermogravimetric analysis (TGA), differential scanning calorimetry (DSC), scanning electron microscopy (SEM), energy-dispersive spectroscopy (EDS), Fourier-transform infrared spectroscopy (FTIR), and X-ray diffraction (XRD) analysis emphasize the significant influence of organic acids and ACP activity on the solubilization of eggshell particles. Conclusions: This study emphasized robust microbial engineering approaches for the large-scale production of a newly discovered acid phosphatase, accompanied by organic acid production, from B. sonorensis. The biovalorization of eggshell waste and the production of cost-effective ACP and organic acids were integrated in the current study through the implementation of a unique and innovative medium formulation design for eggshell waste management, as well as the scaling up of ACP production to bench-top scale.
Keywords: chicken eggshells waste, bioremediation, statistical experimental design, batch fermentation
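The reported specific growth rate follows from exponential-phase biomass readings via µ = ln(X₂/X₁)/(t₂ − t₁). The sketch below shows the calculation on placeholder readings chosen to land near the 0.1 h⁻¹ reported; the data points are illustrative, not the study's measurements.

```python
import math

# Minimal sketch: specific growth rate from two exponential-phase biomass
# readings, mu = ln(X2/X1) / (t2 - t1). Data points are placeholders.

def specific_growth_rate(x1, x2, t1_h, t2_h):
    """Growth rate in 1/h from biomass (or OD600) at two time points."""
    return math.log(x2 / x1) / (t2_h - t1_h)

mu = specific_growth_rate(x1=0.8, x2=1.8, t1_h=4.0, t2_h=12.0)
print(f"mu = {mu:.3f} 1/h")  # about 0.1 1/h, the order of the reported value
```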
Procedia PDF Downloads 376
163 Effect of Fresh Concrete Curing Methods on Its Compressive Strength
Authors: Xianghe Dai, Dennis Lam, Therese Sheehan, Naveed Rehman, Jie Yang
Abstract:
Concrete is one of the most used construction materials; it may be mixed on site as fresh concrete and then placed in formwork to produce the desired shapes of structures. It has been recognized that the raw materials and mix proportions of concrete dominate the mechanical characteristics of hardened concrete, and that the curing method and environment applied to the concrete in the early stages of hardening significantly influence concrete properties such as compressive strength, durability, permeability, etc. In construction practice, there are various curing methods to maintain the presence of mixing water throughout the early stages of concrete hardening. They are also beneficial to concrete in hot weather conditions, as they provide cooling and prevent the evaporation of water. Such methods include ponding or immersion, spraying or fogging, saturated wet covering, etc. There are also various curing methods that may be implemented to decrease the amount of water lost from the concrete surface, such as covering the concrete with a layer of impervious paper, plastic sheeting, or membrane. In the concrete materials laboratory, accelerated strength gain methods supply the concrete with heat and additional moisture by applying live steam, heating coils, or electrically warmed pads. Currently, when determining the mechanical parameters of a concrete, the concrete is usually sampled from fresh concrete on site and then cured and tested in laboratories where standardized curing procedures are adopted. However, in engineering practice, curing procedures on construction sites after the placing of concrete might be very different from the laboratory criteria, and some standard laboratory curing procedures cannot be applied on site. Sometimes the contractor compromises the curing methods in order to reduce construction costs. Obviously, the difference between curing procedures adopted in the laboratory and those used on construction sites might over- or under-estimate the real concrete quality. This paper presents the effect of three typical curing methods (air curing, water immersion curing, plastic film curing) and of maintaining concrete in steel moulds on the compressive strength development of normal concrete. In this study, Portland cement with 30% fly ash was used, and different curing periods of 7 days, 28 days, and 60 days were applied. It was found that the highest compressive strength was observed in concrete samples to which 7-day water immersion curing was applied and in samples maintained in steel moulds up to the testing date. The research results imply that concrete used as infill in steel tubular members might develop a higher strength than predicted by design assumptions based on air curing methods. Wrapping concrete with plastic film as a curing method might delay concrete strength development in the early stages, while water immersion curing for 7 days might significantly increase the concrete compressive strength.
Keywords: compressive strength, air curing, water immersion curing, plastic film curing, maintaining in steel mould, comparison
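The compressive strength compared across these curing regimes is obtained from the crushing load over the loaded cross-section, f_c = P/A. A minimal sketch for a standard cube specimen follows; the specimen size and failure load are placeholder values, not results from the study.

```python
# Minimal sketch: cube compressive strength from the crushing load,
# f_c = P / A. Specimen size and load below are placeholder values.

def cube_strength_mpa(failure_load_kn: float, side_mm: float) -> float:
    """Compressive strength in MPa (N/mm^2) for a cube specimen."""
    area_mm2 = side_mm * side_mm
    return failure_load_kn * 1e3 / area_mm2

print(f"f_c = {cube_strength_mpa(failure_load_kn=780.0, side_mm=150.0):.1f} MPa")
```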
Procedia PDF Downloads 294