Search results for: earth resources engineering
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 8972

272 Increment of Panel Flutter Margin Using Adaptive Stiffeners

Authors: S. Raja, K. M. Parammasivam, V. Aghilesh

Abstract:

Fluid-structure interaction is a crucial consideration in the design of many engineering systems, such as flight vehicles and bridges. Aircraft lifting surfaces and turbine blades can fail due to oscillations caused by fluid-structure interaction, which is therefore the focus of the present research. First, the free vibration behaviour of the panel is studied. It is well known that the deformation of a panel and the flow-induced forces affect one another. The selected panel has a span of 300 mm, a chord of 300 mm, and a thickness of 2 mm. For this panel, the effects of stiffener cross-sectional area and stiffener location are studied. The stiffener spacing is varied along both the chordwise and spanwise directions, and for the optimal location the ideal stiffener length is then identified. The effect of stiffener cross-section shape (T, I, Hat, Z) on flutter velocity is also examined. The flutter velocities of the selected panel with two rectangular stiffeners in a cantilever configuration are estimated using the MSC NASTRAN software package. As the flow passes over the panel, deformation takes place, which in turn changes the flow structure over it. With increasing velocity the deformation grows, but the stiffness of the system tries to damp the excitation and maintain equilibrium. Beyond a critical velocity, however, the system damping suddenly becomes ineffective and equilibrium is lost. This critical velocity is estimated in NASTRAN using the PK method. The first 10 modal frequencies of a simple panel and a stiffened panel are estimated numerically and validated against the open literature. A grid independence study is also carried out; the modal frequency values remain unchanged for element lengths below 20 mm. The current investigation concludes that spanwise stiffener placement is more effective than chordwise placement. 
The maximum flutter velocity achieved for chordwise placement is 204 m/s, while for a spanwise arrangement it increases to 963 m/s when the stiffeners are located at 1/4 and 3/4 of the chord from the panel edge (50% of the chord on either side of the mid-chord line). The flutter velocity is directly proportional to the stiffener cross-sectional area. A significant increase in flutter velocity, from 218 m/s to 1024 m/s, is observed as the stiffener length varies from 50% to 60% of the span; the maximum flutter velocity achieved exceeds Mach 3. It is also observed that, for a stiffened panel, the full effect of the stiffener is realized only when the stiffener end is clamped. Stiffeners with a Z cross-section increased the flutter velocity from 142 m/s (panel with no stiffener) to 328 m/s, i.e., 2.3 times that of the simple panel.
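The reported gains can be confirmed with a line of arithmetic; the sketch below only checks the ratios quoted in the abstract (velocities in m/s), it does not recompute any aerodynamics:

```python
# Flutter velocities quoted in the abstract (m/s).
v_plain = 142.0        # simple panel, no stiffener
v_z = 328.0            # panel with two Z-section stiffeners
print(round(v_z / v_plain, 1))   # 2.3, the factor quoted in the abstract

v_50, v_60 = 218.0, 1024.0       # stiffener length at 50% vs. 60% of span
print(round(v_60 / v_50, 1))     # 4.7, roughly a 4.7-fold increase
```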

Keywords: stiffener placement, stiffener cross-sectional area, stiffener length, stiffener cross-sectional shape

Procedia PDF Downloads 294
271 An Investigation into Enablers and Barriers of Reverse Technology Transfer

Authors: Nirmal Kundu, Chandan Bhar, Visveswaran Pandurangan

Abstract:

Technology is among the most valued possessions of a country or an organization. Economic development depends not on the stock of technology but on the capability to exploit it. Technology transfer is the principal way in which developing countries gain access to state-of-the-art technology. Traditional technology transfer is a unidirectional phenomenon in which technology moves from developed to developing countries, but the wind is now changing. There is general agreement that a global shift of economic power from West to East is under way. As China and India make the transition from users to producers, and from producers to innovators, the implications for the economy, technology, and policy of global trade are increasingly important. As a result, reverse technology transfer has become both a phenomenon and a field of study in technology management. The term 'reverse technology transfer' is not well defined. Initially, the concept was associated with the 'brain drain' from developing to developed countries. In a second phase, it was associated with the transfer of knowledge and technology from subsidiaries to multinationals. It is now time to extend the concept to two organizations or countries, related or unrelated by traditional technology transfer, where the transferor has essentially received the technology through the traditional mode of technology transfer. The objective of this paper is to study: 1) the present status of reverse technology transfer; 2) the factors that act as enablers of and barriers to reverse technology transfer; and 3) how a reverse technology transfer strategy can be integrated into a country's technology policy to give that country an economic boost. The research methodology used in this study combines a literature review, case studies, and key informant interviews. 
The literature review covers both published and unpublished sources. In the case studies, an attempt has been made to examine records of reverse technology transfer that have occurred in developing countries. For the key informant interviews, informal telephone discussions were carried out with key executives of organizations (industry, universities, and research institutions) actively engaged in technology transfer, both traditional and reverse. Reverse technology transfer is possible only by creating technological capabilities. The following four enablers, coupled with active and aggressive government action, can help build the technology base needed to reach the goal of reverse technology transfer: 1) moving from imitation to innovation; 2) reverse engineering; 3) a collaborative R&D approach; and 4) preventing reverse brain drain. The barriers that stand in the way are a mindset of over-dependence, over-subordination, and a parent-child (rather than adult) attitude. By exploiting these enablers and overcoming the barriers to reverse technology transfer, developing countries such as India and China can prove that going 'reverse' is the best way to move forward and to re-establish themselves as leaders of the future world.

Keywords: barriers of reverse technology transfer, enablers of reverse technology transfer, knowledge transfer, reverse technology transfer, technology transfer

Procedia PDF Downloads 399
270 The Flooding Management Strategy in Urban Areas: Reusing Public Facilities Land as Flood-Detention Space for Multi-Purpose

Authors: Hsiao-Ting Huang, Chang Hsueh-Sheng

Abstract:

Taiwan is an island country deeply affected by the monsoon. Under climate change, extreme rainstorms brought by typhoons have become more and more frequent since 2000. When an extreme rainstorm strikes, it causes serious flood damage in Taiwan, especially in urban areas, and the government treats flooding as an urgent issue. In the past, urban land-use planning did not take flood detention into consideration. With the development of cities, impermeable surfaces have increased, and most people now live in urban areas. Urban areas are therefore highly vulnerable, yet they cannot cope with the surface runoff and flooding. Building detention ponds in the conventional hydraulic-engineering way is, however, not feasible in urban areas: land expropriation makes detention-pond construction in the urban area prohibitively expensive, and the government cannot afford it. The flood management strategy in urban areas should therefore use an existing resource, public facilities land. Flood-detention performance can be achieved by providing public facilities land with a detention function. As multi-purpose public facilities land, it also demonstrates the integration of land-use planning and water-agency concerns. To this end, this research generalizes, through a literature review, the factors governing the multi-purpose use of public facilities land as flood-detention space. The factors fall into two categories: environmental factors and conditions of the public facilities. The environmental factors are the terrain elevation, the inundation potential, and the distance from the drainage system. The conditions of the public facilities comprise six factors, including area, building rate, and the maximum available ratio. Each factor is weighted according to its characteristics for the land-use suitability analysis. 
This research then selects combination rules through logical combination, after which the results are classified into three suitability levels. The three suitability levels are each input to a physiographic inundation model to simulate and evaluate their flood-detention performance. This study addresses an urgent urban issue and, through its systematic research process, establishes a model for the multi-purpose use of public facilities land as flood-detention space. The results indicate which combination of suitability levels is most efficacious. Moreover, the model not only takes the urban planner's perspective but also incorporates the water agency's point of view. These findings may serve as a basis for land-use indicators and as decision-making references for the government agencies concerned.
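The weighted suitability analysis described above can be sketched as a weighted overlay. The factor names below follow the abstract, but the weights, normalized site scores, and classification thresholds are illustrative assumptions, not the study's values:

```python
# Hypothetical weighted-overlay sketch of the land-use suitability analysis.
# Factor scores are assumed pre-normalized to [0, 1]; weights sum to 1.

def suitability_score(site, weights):
    """Weighted sum of normalized factor scores."""
    return sum(weights[f] * site[f] for f in weights)

weights = {
    "terrain_elevation": 0.20,     # environmental factors
    "inundation_potential": 0.25,
    "drainage_distance": 0.15,
    "area": 0.15,                  # public-facility conditions
    "building_rate": 0.15,
    "available_ratio": 0.10,
}

def classify(score):
    """Map a score to one of the three suitability levels."""
    if score >= 0.7:
        return "high"
    if score >= 0.4:
        return "medium"
    return "low"

site = {"terrain_elevation": 0.8, "inundation_potential": 0.9,
        "drainage_distance": 0.6, "area": 0.7,
        "building_rate": 0.5, "available_ratio": 0.4}
print(classify(suitability_score(site, weights)))   # "medium" for this site
```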

Keywords: flooding management strategy, land use suitability analysis, multi-use for public facilities land, physiographic inundation model

Procedia PDF Downloads 358
269 Investigation of Polypropylene Composite Films With Carbon Nanotubes and the Role of β Nucleating Agents for the Improvement of Their Water Vapor Permeability

Authors: Glykeria A. Visvini, George N. Mathioudakis, Amaia Soto Beobide, Aris E. Giannakas, George A. Voyiatzis

Abstract:

Polymeric nanocomposites have generated considerable interest in both academic research and industry because their properties can be tailored by adjusting the type and concentration of the nano-inclusions, resulting in complementary and adaptable characteristics. The exceptional and/or unique properties of the nanocomposites, including high mechanical strength and stiffness, ease of processing, and their lightweight nature, are attributed to the high surface area and the electrical and/or thermal conductivity of the nano-fillers, which make them appealing materials for a wide range of engineering applications. Polymeric "breathable" membranes enabling water vapor permeability (WVP) can be designed either by using micro/nano-fillers able to interrupt the continuity of the polymer phase, generating micro/nano-porous structures, and/or by creating micro/nano-pores in the composite material through uniaxial/biaxial stretching. Among the nanofillers, carbon nanotubes (CNTs) exhibit particularly high WVP, and for this reason they have already been proposed for gas-separation membranes. In a similar context, they could prove to be promising alternative or complementary filler nano-materials for the development of "breathable" products. Polypropylene (PP) is a commonly utilized thermoplastic polymer matrix in the development of composite films, owing to its easy processability and low price combined with its good chemical and physical properties. PP is known to present several crystalline phases (α, β, and γ), depending on the applied treatment process, which have a significant impact on its final properties, particularly in terms of WVP. Specifically, developing the β-phase in PP, in combination with stretching, is anticipated to modify the crystalline behavior and extend the microporosity of the polymer matrix, yielding enhanced WVP. 
The primary objective of this study is to develop breathable nano-carbon based (functionalized MWCNTs) PP composite membranes, potentially also avoiding the stretching process. This proposed alternative is expected to have a better performance/cost ratio over current stretched PP/CaCO3 composite benchmark membranes. The focus is to investigate the impact of both β-nucleator(s) and nano-carbon fillers on water vapor transmission rate properties of relevant PP nanocomposites.

Keywords: carbon nanotubes, nanocomposites, nucleating agents, polypropylene, water vapor permeability

Procedia PDF Downloads 74
268 Safety Tolerance Zone for Driver-Vehicle-Environment Interactions under Challenging Conditions

Authors: Matjaž Šraml, Marko Renčelj, Tomaž Tollazzi, Chiara Gruden

Abstract:

Road safety is a worldwide issue influenced by numerous heterogeneous factors. On one side, the driver's state, comprising distraction/inattention, fatigue, drowsiness, extreme emotions, and socio-cultural factors, highly affects road safety. On the other side, the vehicle's state plays an important role in mitigating (or not) the road risk. Finally, the road environment remains one of the main determinants of road safety, defining the complexity of the driving task. At the same time, thanks to technological development, a wealth of detailed data is readily available, creating opportunities to detect the driver's state, vehicle characteristics, and road conditions and, consequently, to design ad hoc interventions aimed at improving driver performance, increasing awareness, and mitigating road risks. This is the challenge faced by the i-DREAMS project. i-DREAMS, which stands for smart Driver and Road Environment Assessment and Monitoring System, is a 3-year project funded by the European Union's Horizon 2020 research and innovation programme. It aims to set up a platform to define, develop, test, and validate a 'Safety Tolerance Zone' that prevents drivers from getting too close to the boundaries of unsafe operation by mitigating risks in real time and after the trip. After the Safety Tolerance Zone concept was defined, developed, and realized in an advanced driver-assistance system (ADAS) platform, the system was first tested for 2 months in a driving-simulator environment in 5 different countries. Naturalistic driving studies then began for a 10-month period (comprising a 1-month pilot study, a 3-month baseline study, and 6 months of intervention implementation). The project team has now approved a common evaluation approach and is assessing the usage and outcomes of the i-DREAMS system, which is yielding positive insights. 
The i-DREAMS consortium consists of 13 partners, 7 engineering universities and research groups, 4 industry partners and 2 partners (European Transport Safety Council - ETSC - and POLIS cities and regions for transport innovation) closely linked to transport safety stakeholders, covering 8 different countries altogether.

Keywords: advanced driver assistant systems, driving simulator, safety tolerance zone, traffic safety

Procedia PDF Downloads 68
267 Frequency Interpretation of a Wave Function, and a Vertical Waveform Treated as a 'Quantum Leap'

Authors: Anthony Coogan

Abstract:

Born’s probability interpretation of wave functions would have led to nearly identical results had he chosen a frequency interpretation instead. Logically, Born may have assumed that only one electron was under consideration, making it nonsensical to propose a frequency wave. The author's suggestion is that the actual experimental results were not those of a single electron; rather, they were groups of reflected x-ray photons. The vertical waveform used by Schrödinger in his particle-in-a-box theory makes sense if it was intended to represent a quantum leap. The author extended the single vertical panel to form a bar chart: separate panels would represent different energy levels. The proposed bar chart would be populated by reflected photons. Expanding on these basic ideas: part of Schrödinger's particle-in-a-box theory may be valid despite negative criticism. The waveform used in the diagram is vertical, which may seem absurd because real waves decay at a measurable rate rather than instantaneously. However, there may be one notable exception. The Uncertainty Principle was supposedly derived from this theory; may a quantum leap not be represented as an instantaneous waveform? The great Schrödinger must have had some reason to suggest a vertical waveform if the prevailing belief was that such waveforms did not exist. Complex wave forms representing a particle are usually assumed to be continuous. The actual observations made were of x-ray photons, some of which had struck an electron, been reflected, and then moved toward a detector. From Born's perspective, doing similar work in the years in question, 1926-7, he would also have considered a single electron, leading him to choose a probability distribution. Probability distributions appear very similar to frequency distributions, but the former are considered to represent the likelihood of future events. 
Born’s interpretation of the results of quantum experiments led (or perhaps misled) many researchers into claiming that humans can influence events just by looking at them, e.g., collapsing complex wave functions by 'looking at the electron to see which slit it emerged from', while in reality light reflected from the electron moved in the observer's direction after the electron had moved away. Astronomers may say that they 'look out into the universe', but this is logic opposed to the views of Newton, Hooke, and many observers such as Romer, who held that light carries information from a source or reflector to an observer, rather than the reverse. Conclusion: given the controversial nature of these ideas, especially their implications for the nature of the complex numbers used in applications in science and engineering, some time may pass before any consensus is reached.

Keywords: complex wave functions not necessary, frequency distributions instead of wave functions, information carried by light, sketch graph of uncertainty principle

Procedia PDF Downloads 199
266 Hybrid Model: An Integration of Machine Learning with Traditional Scorecards

Authors: Golnush Masghati-Amoli, Paul Chin

Abstract:

In recent years, with the rapid increase in data availability and computing power, Machine Learning (ML) techniques have been called on in a range of industries for their strong predictive capability. However, the use of Machine Learning in commercial banking has been limited by a particular challenge: numerous regulations require lenders to be able to explain their analytic models, not only to regulators but often to consumers. In other words, although Machine Learning techniques enable better prediction with a higher level of accuracy, they are adopted less frequently in commercial banking than in other industries, especially for scoring purposes. This is because Machine Learning techniques are often considered a black box and fail to provide information on why a certain risk score is given to a customer. To bridge this gap between the explainability and the performance of Machine Learning techniques, a Hybrid Model has been developed at Dun and Bradstreet that blends Machine Learning algorithms with traditional approaches such as scorecards. The Hybrid Model maximizes the efficiency of traditional scorecards by merging their practical benefits, such as explainability and the ability to incorporate domain knowledge, with the deep insights of Machine Learning techniques, which can uncover patterns that scorecard approaches cannot. First, through the development of Machine Learning models, engineered features, latent variables, and feature interactions that demonstrate high information value in the prediction of customer risk are identified. Then, these features are employed to introduce the observed non-linear relationships between the explanatory and dependent variables into traditional scorecards. 
Moreover, instead of directly computing the Weight of Evidence (WoE) from good and bad data points, the Hybrid Model matches the score distribution generated by a Machine Learning algorithm, which yields an estimate of the WoE for each bin. This capability helps to build powerful scorecards from sparse cases, which cannot be achieved with traditional approaches. The proposed Hybrid Model is tested on different portfolios where a significant gap is observed between the performance of traditional scorecards and Machine Learning models. The results of the analysis show that the Hybrid Model can improve the performance of traditional scorecards by introducing the non-linear relationships between explanatory and target variables captured by Machine Learning models into traditional scorecards. It is also observed that in some scenarios the Hybrid Model can be almost as predictive as the Machine Learning techniques while remaining as transparent as traditional scorecards. It is therefore concluded that, with the Hybrid Model, Machine Learning algorithms can be used in the commercial banking industry without concern about the difficulty of explaining the models for regulatory purposes.
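For reference, the classic WoE computation that the Hybrid Model replaces can be sketched in a few lines. The portfolio counts below are invented toy numbers; the hybrid approach described in the abstract would instead estimate each bin's WoE from the ML model's score distribution:

```python
import math

# Classic Weight of Evidence for a binned variable:
#   WoE(bin) = ln( %goods in bin / %bads in bin )
def woe(goods_in_bin, bads_in_bin, total_goods, total_bads):
    """WoE for one bin; counts are assumed non-zero."""
    pct_good = goods_in_bin / total_goods
    pct_bad = bads_in_bin / total_bads
    return math.log(pct_good / pct_bad)

# Toy portfolio: three bins of one explanatory variable, (goods, bads).
bins = [(800, 20), (150, 50), (50, 130)]
tg = sum(g for g, _ in bins)   # 1000 goods
tb = sum(b for _, b in bins)   # 200 bads
for g, b in bins:
    print(round(woe(g, b, tg, tb), 3))   # 2.079, -0.511, -2.565
```

Positive WoE marks a bin dominated by goods, negative WoE a bin dominated by bads; a scorecard assigns points per bin in proportion to these values.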

Keywords: machine learning algorithms, scorecard, commercial banking, consumer risk, feature engineering

Procedia PDF Downloads 134
265 Study on Adding Story and Seismic Strengthening of Old Masonry Buildings

Authors: Youlu Huang, Huanjun Jiang

Abstract:

A large number of old masonry buildings built in the last century still remain in cities, raising problems of poor safety, obsolescence, and non-habitability. In recent years, many old buildings have been renovated through façade renovation, strengthening, and the addition of floors, but most projects address only a single problem; it is difficult to comprehensively solve both poor safety and the lack of building functions. Therefore, a comprehensive functional renovation program is put forward: a reinforced concrete frame story is added at the bottom by integrally lifting the building, and the building is then strengthened. Based on field measurements and the YJK calculation software, the seismic performance of an actual three-story masonry structure in Shanghai was assessed. The results show that the material strength of the masonry is low and that the bearing capacity of some masonry walls does not meet the code requirements. An elastoplastic time-history analysis of the structure was carried out using the SAP2000 software. The results show that under the rare earthquake of intensity 7, the structure reaches the 'serious damage' performance level. Based on the code requirements for the stiffness ratio of the bottom frame (the ratio of the lateral stiffness of the transition masonry story to that of the frame story), the bottom frame story was designed. The integral lifting process for the masonry building is introduced with reference to many engineering examples. Two strengthening methods for the bottom frame structure are proposed: a steel-reinforced mesh mortar surface layer (SRMM) and base isolators. Time-history analyses of the two strengthened structures, under the frequent earthquake, the fortification earthquake, and the rare earthquake, were conducted with the SAP2000 software. 
For the bottom frame structure, the results show that the seismic response of the masonry floors is significantly reduced by both methods compared to the original masonry structure. Previous earthquake disasters indicated that the bottom frame is vulnerable to serious damage under a strong earthquake. The analysis results show that, under the rare earthquake, the inter-story displacement angle of the bottom frame floor meets the 1/100 limit of the seismic code. The inter-story drift of the masonry floors of the base-isolated structure under the different earthquake levels is similar to that of the structure with SRMM, while the base-isolated scheme better protects the bottom frame. Both strengthening methods can significantly improve the seismic performance of the bottom frame structure.
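The drift criterion cited above is simple arithmetic: the inter-story displacement angle is the peak displacement divided by the story height, compared against the 1/100 code limit. A minimal check, with illustrative numbers that are not values from the study:

```python
# Inter-story drift check against a code limit (default 1/100).
def drift_ok(peak_displacement_mm, story_height_mm, limit=1.0 / 100.0):
    """True if displacement / height is within the drift-angle limit."""
    return peak_displacement_mm / story_height_mm <= limit

print(drift_ok(28.0, 3000.0))   # True: 28 mm over a 3.0 m story ~ 1/107
print(drift_ok(35.0, 3000.0))   # False: 35 mm ~ 1/86, exceeds 1/100
```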

Keywords: old buildings, adding story, seismic strengthening, seismic performance

Procedia PDF Downloads 123
264 The Influence of Absorptive Capacity on Process Innovation: An Exploratory Study in Seven Leading and Emerging Countries

Authors: Raphael M. Rettig, Tessa C. Flatten

Abstract:

This empirical study answers calls for research on Absorptive Capacity and process innovation. Owing to the fourth industrial revolution, manufacturing companies face the biggest disruption of their production processes since the rise of advanced manufacturing technologies in the last century. Process innovation will therefore become a critical task for many manufacturing firms around the world. The general ability of organizations to acquire, assimilate, transform, and exploit external knowledge, known as Absorptive Capacity, has been shown to positively influence product innovation and has already been conceptually associated with process innovation. The presented research provides empirical evidence for this influence. The findings are based on an empirical analysis of 732 companies from seven leading and emerging countries: Brazil, China, France, Germany, India, Japan, and the United States of America. The survey answers were collected in February and March 2018 from senior- and top-level management, with a focus on operations departments. The statistical analysis reveals the positive influence of Potential and Realized Absorptive Capacity on successful process innovation, taking the implementation of new digital manufacturing processes as an example. Potential Absorptive Capacity, covering the acquisition and assimilation capabilities of an organization, showed a significant positive influence (β = .304, p < .05) on digital manufacturing implementation success and therefore on process innovation. Realized Absorptive Capacity also proved to have a significant positive influence on process innovation (β = .461, p < .01). The presented study builds on prior conceptual work in the field of Absorptive Capacity and process innovation and contributes to ongoing research in two ways. 
First, the conceptually proposed influence of Absorptive Capacity on process innovation is backed by empirical evidence in a broad international context. Second, whereas prior empirical research on Absorptive Capacity, measured with a focus on new product development, was tailored to the research and development departments of organizations, the results of this study highlight the importance of Absorptive Capacity as a capability in the mechanical engineering and operations departments of organizations. The findings give managers an indication of the importance of implementing new innovative processes in their production systems and of fostering in employees the right mindset to identify new external knowledge. Through the ability to transform and exploit external knowledge, a firm's own production processes can be innovated successfully, with a positive influence on firm performance and the competitive position of the organization.
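The β values reported above are standardized regression coefficients. As an illustration of how such coefficients are obtained, the sketch below z-scores all variables and fits ordinary least squares; the data, random seed, and effect sizes are synthetic assumptions, not the survey data:

```python
import numpy as np

# Generate synthetic survey-like data with known effects, then recover
# standardized betas via OLS on z-scored variables.
rng = np.random.default_rng(0)
n = 732                                  # sample size from the abstract
potential_ac = rng.normal(size=n)        # Potential Absorptive Capacity
realized_ac = rng.normal(size=n)         # Realized Absorptive Capacity
success = (0.30 * potential_ac + 0.46 * realized_ac
           + rng.normal(scale=0.8, size=n))   # implementation success + noise

def standardize(x):
    return (x - x.mean()) / x.std()

X = np.column_stack([standardize(potential_ac), standardize(realized_ac)])
y = standardize(success)
betas, *_ = np.linalg.lstsq(X, y, rcond=None)
print(np.round(betas, 2))   # close to the effect sizes used to generate the data
```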

Keywords: absorptive capacity, digital manufacturing, dynamic capabilities, process innovation

Procedia PDF Downloads 144
263 Micromechanism of Ionization Effects on Metal/Gas Mixing Instability at Extreme Shock Compressing Conditions

Authors: Shenghong Huang, Weirong Wang, Xisheng Luo, Xinzhu Li, Xinwen Zhao

Abstract:

Understanding material mixing induced by the Richtmyer-Meshkov instability (RMI) at extreme shock-compression conditions (high-energy-density environments: P >> 100 GPa, T >> 10000 K) is of great significance in engineering and science, for example in inertial confinement fusion (ICF) and supersonic combustion. Turbulent mixing induced by RMI is a complex fluid-dynamics phenomenon, closely related to the hydrodynamic conditions, the thermodynamic states, and material physical properties such as compressibility, strength, surface tension, and viscosity, as well as the initial perturbation on the interface. For phenomena at ordinary thermodynamic conditions (low-energy-density environments), many investigations have been conducted and much progress has been reported, whereas for mixing at extreme thermodynamic conditions the evolution may be very different owing to ionization and large differences in material physical properties, and remains full of open scientific problems and academic interest. In this investigation, a first-principles-based molecular dynamics method is applied to study mixing at a metal lithium / hydrogen gas (Li-H2) interface in the micro/meso-scale regime at shock-compression loading speeds ranging from 3 km/s to 30 km/s. It is found that: 1) unlike the low-speed cases, in the high-speed shock-compression cases (>9 km/s) a strong acceleration of the metal/gas interface after strong shock compression is observed numerically, leading to a strong phase inversion and a spike growing at a relatively large linear rate; more specifically, the spike growth rate increases with the shock loading speed, in marked discrepancy with available empirical RMI models; 2) ionization occurs in the shock-front zone in the high-speed loading cases (>9 km/s). 
An additional local electric field, induced by the inhomogeneous diffusion of electrons and nuclei behind the shock front, is observed near the metal/gas interface, leading to a large acceleration of the nuclei in this zone; 3) in conclusion, the work done by this additional electric field constitutes a mechanism of RMI in the micro/meso-scale regime at extreme shock-compression conditions: a Rayleigh-Taylor instability (RTI) is induced by the additional electric field during the RMI mixing process, producing a larger linear growth rate of the interface spike.

Keywords: ionization, micro/meso scale, material mixing, shock

Procedia PDF Downloads 230
262 Application of Thermoplastic Microbioreactor to the Single Cell Study of Budding Yeast to Decipher the Effect of 5-Hydroxymethylfurfural on Growth

Authors: Elif Gencturk, Ekin Yurdakul, Ahmet Y. Celik, Senol Mutlu, Kutlu O. Ulgen

Abstract:

Yeast cells are generally used as a model system for eukaryotes because of their complex genetic structure, rapid growth under optimum conditions, easy replication, and well-defined genetics. Studies on yeast have thus increased our knowledge of the principal pathways in humans. During fermentation, carbohydrates (hexoses and pentoses) degrade into toxic by-products such as 5-hydroxymethylfurfural (5-HMF or HMF) and furfural. HMF influences the ethanol yield and productivity; it interferes with microbial growth and is considered a potent inhibitor of bioethanol production. In this study, the behaviour of single yeast cells under HMF exposure was monitored using a continuous-flow, single-phase microfluidic platform. The microfluidic device is fabricated by hot-embossing and thermo-compression techniques from cyclo-olefin polymer (COP). COP is a biocompatible, transparent, and rigid material, and its low auto-fluorescence makes it suitable for observing the fluorescence of cells. The response of the yeast cells was recorded through the Red Fluorescent Protein (RFP)-tagged Nop56 gene product, an essential, evolutionarily conserved nucleolar protein and a member of the box C/D snoRNP complexes. Under HMF, yeast cell proliferation continued, but HMF slowed the cell growth, and after continued HMF treatment proliferation stopped. On addition of fresh nutrient medium, the yeast cells recovered after 6 hours of HMF exposure. Thus, HMF suppresses the normal functioning of the cell cycle but does not kill the cells. Monitoring the Nop56 expression phases of individual cells sheds light on the protein and ribosome synthesis cycles and their link to growth. 
Further computational study revealed that the mechanisms underlying the inhibitory or inductive effects of HMF on growth are enriched in functional categories of protein degradation, protein processing, DNA repair and multidrug resistance. The present microfluidic device can successfully be used for studying the effects of inhibitory agents on growth by single cell tracking, thus capturing cell to cell variations. By metabolic engineering techniques, engineered strains can be developed, and the metabolic network of the microorganism can thus be manipulated such that chemical overproduction of target metabolite is achieved along with the maximum growth/biomass yield.  

Keywords: COP, HMF, ribosome biogenesis, thermoplastic microbioreactor, yeast

Procedia PDF Downloads 171
261 An Experimental Study on the Coupled Heat Source and Heat Sink Effects on Solid Rockets

Authors: Vinayak Malhotra, Samanyu Raina, Ajinkya Vajurkar

Abstract:

Enhancing rocket efficiency by controlling external factors in solid rocket motors has been an active area of research for most terrestrial and extra-terrestrial system operations. Appreciable work has been done, but heterogeneous heat and mass transfer makes the problem complex and has prevented thorough understanding. On record, severe failures have occurred, amounting to irreplaceable loss of human life, instruments, and facilities, with huge sums invested every year. The coupled effect of an external heat source and an external heat sink is an aspect of combustion yet to be articulated. Better understanding of this coupled phenomenon will support higher safety standards, more efficient missions, and reduced hazard risks through better design, validation, and testing. The experiment will help in understanding the coupled effect of an external heat sink and heat source on the burning process, contributing to better combustion and fire safety, both of which are essential for efficient and safer rocket flights and space missions. Safety is the most prevalent issue in rockets and, compounded by poor combustion efficiency, it drives research efforts to evolve superior rockets. The problem has real engineering, scientific, and practical significance for systems and applications. One potential application is solid rocket motors (SRMs). The study may help in: (i) understanding the effect on the efficiency of core engines when the primary boosters are considered as a source, (ii) choosing suitable heat-sink materials for space missions so as to vary the efficiency of the solid rocket depending on the mission, and (iii) indicating how preheating of a successive stage by the previous stage, acting as a source, may affect the mission. The present work measures the resultant temperature and thus the heat transfer, which is expected to be non-linear because of heterogeneous heat and mass transfer. The study will deepen the understanding of controlled inter-energy conversions and the coupled effect of external source(s) and sink(s) surrounding the burning fuel, eventually leading to better combustion and thus better propulsion. The work is motivated by the need for enhanced fire safety and better rocket efficiency. The specific objectives are to understand the coupled effect of an external heat source and sink on propellant burning and to investigate the role of key controlling parameters. Results so far indicate that a singularity exists in the coupled effect. The relative dominance of the external heat sink and heat source decides the relative rocket flight in solid rocket motors (SRMs).

Keywords: coupled effect, heat transfer, sink, solid rocket motors, source

Procedia PDF Downloads 223
260 R Statistical Software Applied in Reliability Analysis: Case Study of Diesel Generator Fans

Authors: Jelena Vucicevic

Abstract:

Reliability analysis is a very important task in many areas of work. In any industry it is crucial for maintenance, efficiency, safety, and monetary cost. There are established ways to calculate reliability, unreliability, failure density, and failure rate. This paper introduces another way of calculating reliability, using the R statistical software. R is a free software environment for statistical computing and graphics that compiles and runs on a wide variety of UNIX platforms, Windows, and macOS. The R programming environment is a widely used open-source system for statistical analysis and statistical programming, and it includes thousands of functions implementing both standard and new statistical methods. R does not limit the user to these built-in functions. The program has many benefits over similar programs: it is free and, as open source, constantly updated; it has a built-in help system; and the R language is easy to extend with user-written functions. The significance of this work is the calculation of time to failure, or reliability, in a new way, using statistics. Another advantage of this approach is that no technical details are needed: it can be applied to any part whose time to failure must be known in order to schedule appropriate maintenance while maximizing usage and minimizing cost. In this case, the calculations were made for diesel generator fans, but the same principle can be applied to any other part. The data for this paper came from a field engineering study of the time to failure of diesel generator fans. The ultimate goal was to decide whether or not to replace the working fans with a higher-quality fan to prevent future failures. Seventy generators were studied. For each one, the number of hours of running time from its first being put into service until fan failure, or until the end of the study (whichever came first), was recorded. The dataset consists of two variables: hours and status. Hours records the running time of each fan, and status records the event: 1 for failed, 0 for censored. Censored data represent cases that could no longer be tracked, so the fan might have either failed or survived afterwards. Obtaining the result with R was easy and quick, and the program takes the censored data into account in the results, which is not so easy in a hand calculation. For the purposes of the paper, the results from the R program were compared with hand calculations for two cases: censored data treated as failures, and censored data treated as successes. The three sets of results differ significantly. A user who adopts R for such calculations will obtain more precise results from censored data than from hand calculation.
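The standard way to account for right-censored observations when estimating reliability is the Kaplan-Meier product-limit estimator, which the abstract's hours/status structure maps onto directly. The sketch below implements it in pure Python on a small hypothetical dataset (the values are illustrative, not the study's 70-fan data; in R itself this would typically be done with the `survival` package).

```python
# Kaplan-Meier estimate of the survival (reliability) curve with
# right-censored data, mirroring the hours/status variables described
# in the abstract. Sample values are hypothetical, not the fan study's.

def kaplan_meier(hours, status):
    """Return [(failure time, survival probability)] pairs.
    status: 1 = failed at that time, 0 = censored (still running)."""
    data = sorted(zip(hours, status))
    n_at_risk = len(data)
    surv = 1.0
    curve = []
    i = 0
    while i < len(data):
        t = data[i][0]
        deaths = sum(1 for h, s in data if h == t and s == 1)
        removed = sum(1 for h, s in data if h == t)  # failed + censored at t
        if deaths > 0:
            surv *= 1.0 - deaths / n_at_risk
            curve.append((t, surv))
        n_at_risk -= removed
        i += removed
    return curve

hours  = [450, 1150, 1150, 1600, 2070, 2070, 3100, 4600, 6100, 8750]
status = [1,   0,    1,    0,    1,    1,    0,    1,    0,    0]
km = kaplan_meier(hours, status)
for t, s in km:
    print(f"t = {t:5d} h, S(t) = {s:.3f}")
```

Note how the censored observations reduce the number at risk without forcing a drop in the survival curve, which is exactly the bookkeeping that is awkward in a hand calculation.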

Keywords: censored data, R statistical software, reliability analysis, time to failure

Procedia PDF Downloads 401
259 Consumption and Diffusion Based Model of Tissue Organoid Development

Authors: Elena Petersen, Inna Kornienko, Svetlana Guryeva, Sergey Simakov

Abstract:

In vitro organoid cultivation requires the simultaneous provision of vascularization and nutrient perfusion of cells during organoid development; however, many aspects of this problem remain unsolved. The intergrowth of the vascular network is of limited use during the early stages of organoid development, since the network only becomes functional in the final stages of in vitro cultivation. A microchannel network should therefore be created in the hydrogel matrix during the early stages of cultivation, aimed at conducting and maintaining the minimally required level of nutrient perfusion for all cells in the expanding organoid. The network configuration should be designed so as to exclude hypoxic and necrotic zones in the expanding organoid at all stages of its cultivation. In vitro vascularization is currently the main issue in the field of tissue engineering. Because perfusion and oxygen transport directly affect cell viability and differentiation, researchers are currently limited to tissues a few millimeters in thickness. These limitations are imposed by mass transfer and are defined by the balance between the metabolic demand of the cellular components in the system and the size of the scaffold. Current approaches include growth-factor delivery, channeled scaffolds, perfusion bioreactors, microfluidics, cell co-cultures, cell functionalization, modular assembly, and in vivo systems. These approaches may improve cell viability or generate capillary-like structures within a tissue construct. Thus, there is a fundamental disconnect between defining the metabolic needs of tissue through quantitative measurements of oxygen and nutrient diffusion and the potential ease of integration into the host vasculature for future in vivo implantation. A model is proposed for prognosis of organoid growth under perfusion, based on joint simulation of general nutrient diffusion, nutrient diffusion into the hydrogel matrix through the contact surfaces and microchannel walls, and nutrient consumption by the cells of the expanding organoid, including biomatrix contraction during tissue development, which is associated with a changing consumption rate of the growing organoid's cells. The model allows an effective microchannel network design to be computed that provides the minimally required level of nutrient concentration in all parts of the growing organoid. It can be used for preliminary planning of microchannel network designs and for simulating nutrient supply rates at different stages of organoid development.
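The core balance the model simulates, diffusion of nutrient from channel walls against consumption by cells, can be illustrated with a minimal one-dimensional finite-difference sketch. All parameter values below are hypothetical placeholders, not the study's, and the geometry is reduced to a slab supplied from two boundary "microchannels".

```python
# Explicit finite-difference sketch of 1D nutrient diffusion with uniform
# cellular consumption: dc/dt = D * d2c/dx2 - q, with c clipped at zero.
# Parameter values are illustrative only, not taken from the study.

D, q = 1e-9, 5e-3        # diffusivity (m^2/s), consumption rate (units/m^3/s)
L, n = 1e-3, 51          # slab thickness (m), grid points
dx = L / (n - 1)
dt = 0.25 * dx * dx / D  # stable explicit step (must be <= dx^2 / (2 D))
c = [0.0] * n
c[0] = c[-1] = 1.0       # normalized supply at both boundaries (channels)

for _ in range(20000):   # run long enough to approach steady state
    new = c[:]
    for i in range(1, n - 1):
        diff = D * (c[i - 1] - 2 * c[i] + c[i + 1]) / (dx * dx)
        new[i] = max(0.0, c[i] + dt * (diff - q))  # no negative concentration
    c = new

print(f"minimum interior concentration: {min(c):.4f}")
```

If the printed minimum falls below a viability threshold, the channel spacing `L` is too large; this is the kind of screening the full 3D model performs for candidate network designs.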

Keywords: 3D model, consumption model, diffusion, spheroid, tissue organoid

Procedia PDF Downloads 308
258 Concrete Compressive Strengths of Major Existing Buildings in Kuwait

Authors: Zafer Sakka, Husain Al-Khaiat

Abstract:

Due to social and economic considerations, owners all over the world wish to keep using existing structures, including aging ones. However, these structures, especially valuable ones, require accurate condition assessment and proper safety evaluation. More than half of the budget spent on construction activities in developed countries is related to the repair and maintenance of such reinforced concrete (R/C) structures. Periodic evaluation and assessment of relatively old concrete structures is therefore vital. If the evaluation of the structural components of an aging R/C structure reveals that repairs are essential, those repairs should not be delayed: delay risks the loss of serviceability of the whole structure, or even total failure and collapse, and the cost of maintenance will also skyrocket. It follows that the assessment of existing structures needs to receive more consideration from structural engineering societies and professionals. Ten major existing structures in Kuwait City, constructed in the 1970s, were assessed for structural reliability and integrity. Numerous concrete cores were extracted from the structural systems of the investigated buildings. This paper presents the results of the compressive strength tests conducted on the extracted cores. The results are compared between the buildings' column and beam elements and against the design strengths, and the collected data were statistically analyzed. The average compressive strengths of the cores extracted from the ten buildings varied widely. The lowest average compressive strength for one of the buildings was 158 kg/cm²; this building was deemed unsafe and economically unfeasible to repair, and accordingly it was demolished. The other buildings had average compressive strengths in the range of 215-317 kg/cm². Poor construction practices were the main cause of these low strengths. Although most of the drawings and records for these buildings were lost during the invasion of Kuwait in 1990, the information gathered indicated that the design strengths of the beams and columns of most of these buildings were in the range of 280-400 kg/cm². Following the study, measures were taken to rehabilitate the buildings for safety. The mean compressive strength of all cores taken from the beams and columns of the ten buildings was 256.7 kg/cm², with values ranging from 139 to 394 kg/cm². For columns, the mean was 250.4 kg/cm² and values ranged from 137 to 394 kg/cm²; the mean for beams was higher, at 285.9 kg/cm², with a range of 181 to 383 kg/cm². In addition to the concrete cores extracted from the ten buildings, the 28-day compressive strengths of more than 24,660 concrete cubes were collected from a major ready-mixed concrete supplier in Kuwait. The data represented four different grades of ready-mix concrete (250, 300, 350, and 400 kg/cm²) manufactured between 2003 and 2018. The average compressive strengths for these grades were 318, 382, 453, and 504 kg/cm², respectively, and the coefficients of variation were 0.138, 0.140, 0.157, and 0.131, respectively.
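The summary statistics the abstract reports (mean, range, coefficient of variation) follow from a short computation; the sketch below shows the form of that computation on a small hypothetical sample of cube strengths, not the study's raw data. The 5% fractile line is a common convention for characteristic strength under a normal assumption, not something stated in the abstract.

```python
# Mean, sample standard deviation, and coefficient of variation for a
# batch of cube strengths in kg/cm^2. Sample values are illustrative only.
import statistics

strengths = [318, 295, 342, 301, 330, 276, 355, 322, 289, 347]

mean = statistics.mean(strengths)
stdev = statistics.stdev(strengths)      # sample (n-1) standard deviation
cov = stdev / mean                       # coefficient of variation
# Characteristic strength, conventionally the 5% fractile of a normal fit:
f_ck = mean - 1.645 * stdev

print(f"mean = {mean:.1f}, CoV = {cov:.3f}, characteristic = {f_ck:.1f}")
```

Applied grade by grade, the same three lines reproduce the kind of table the abstract summarizes (e.g. grade 250 concrete averaging 318 kg/cm² with CoV 0.138).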

Keywords: concrete compressive strength, concrete structures, existing building, statistical analysis

Procedia PDF Downloads 116
257 Developing Gifted Students’ STEM Career Interest

Authors: Wing Mui Winnie So, Tian Luo, Zeyu Han

Abstract:

To explore and develop the potential of gifted students systematically and strategically by providing them with education at appropriate levels, schools in Hong Kong are encouraged to adopt the "Three-Tier Implementation Model" to plan and implement school-based gifted education, with Level Three referring to learning opportunities for exceptionally gifted students in the form of specialist training outside the school setting, provided by post-secondary institutions, non-government organisations, professional bodies, and technology enterprises. Owing to growing concern worldwide about students' low interest in pursuing STEM (Science, Technology, Engineering, and Mathematics) careers, cultivating and boosting STEM career interest has become an emerging research focus. Although numerous studies have explored its critical contributors, little research has examined the effectiveness of comprehensive interventions such as "studying with a STEM professional". This study examines the effect on gifted students' career interest of participating in an off-school support programme designed and supervised by a team of STEM educators and STEM professionals from a university. Gifted students were given opportunities and tasks to experience STEM career topics not included in the school syllabus and to experience how to think and work like a STEM professional in their learning. Participants were 40 primary school students who joined the intervention programme outside the normal school setting. Research methods included a STEM career interest survey and drawing tasks, supplemented with writing, administered before and after the programme, as well as interviews before the end of the programme. The semi-structured interviews focused on students' views of STEM professionals; what it is like to learn with a STEM professional; what it is like to work and think like a STEM professional; and students' STEM identity and career interest. Changes in gifted students' STEM career interest and its well-recognised contributors, for example STEM stereotypes, self-efficacy for STEM activities, and STEM outcome expectations, were examined from the pre- and post-surveys using t-tests. Thematic analysis was conducted on the interview records to explore how the studying-with-a-STEM-professional intervention can help students understand STEM careers, build STEM identity, and learn to think and work like a STEM professional. The results indicated a significant difference in STEM career interest before and after the intervention, and the influencing mechanism was identified from the measurement of the related contributors and the analysis of drawings and interviews. It is argued that the potential of off-school support programmes supervised by STEM educators and professionals to develop gifted students' STEM career interest should be further explored in future research and practice.
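The pre/post comparison described above is typically run as a paired t-test on each participant's two scores. The sketch below computes the paired t statistic in pure Python; the scores are hypothetical, and the exact t-test variant used in the study is not specified in the abstract.

```python
# Paired t statistic for pre/post intervention survey scores, the kind of
# test used to compare STEM career interest before and after the programme.
# The scores below are hypothetical, not the study's data.
import math
import statistics

pre  = [3.1, 2.8, 3.5, 2.9, 3.2, 3.0, 2.7, 3.4, 3.1, 2.6]
post = [3.6, 3.0, 3.9, 3.4, 3.5, 3.6, 3.1, 3.8, 3.3, 3.2]

diffs = [b - a for a, b in zip(pre, post)]     # per-student change
n = len(diffs)
# t = mean difference / standard error of the differences
t = statistics.mean(diffs) / (statistics.stdev(diffs) / math.sqrt(n))

print(f"paired t = {t:.2f} with {n - 1} degrees of freedom")
```

The statistic is then compared against the t distribution with n - 1 degrees of freedom (via a table or a stats library) to obtain the p-value reported in studies like this one.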

Keywords: gifted students, STEM career, STEM education, STEM professionals

Procedia PDF Downloads 76
256 Biosensor for Determination of Immunoglobulin A, E, G and M

Authors: Umut Kokbas, Mustafa Nisari

Abstract:

Immunoglobulins, also known as antibodies, are glycoprotein molecules produced by plasma cells, which develop from activated B cells. Antibodies are critical molecules of the immune response, helping the immune system specifically recognize and destroy antigens such as bacteria, viruses, and toxins. Immunoglobulin classes differ in their biological properties, structures, targets, functions, and distributions; five major classes have been identified in mammals: IgA, IgD, IgE, IgG, and IgM. Evaluation of the immunoglobulin isotype can provide useful insight into the complex humoral immune response, and knowledge of immunoglobulin structure and classes is also important for the selection and preparation of antibodies for immunoassays and other detection applications. The immunoglobulin test measures the level of certain immunoglobulins in the blood. IgA, IgG, and IgM are usually measured together; in this way they can provide doctors with important information, especially regarding immune deficiency diseases. Hypogammaglobulinemia (HGG) is one of the main groups of primary immunodeficiency disorders. HGG is caused by various defects in B-cell lineage or function that result in low levels of immunoglobulins in the bloodstream. This impairs the body's immune response, causing a wide range of clinical features, from asymptomatic disease to severe and recurrent infections, chronic inflammation, and autoimmunity. Transient hypogammaglobulinemia of infancy (THGI), IgM deficiency (IgMD), Bruton agammaglobulinemia, and IgA deficiency (SIgAD) are a few examples of HGG. Most patients can continue their normal lives by taking prophylactic antibiotics, but patients with severe infections require intravenous immune serum globulin (IVIG) therapy. The IgE level may rise to fight off parasitic infections, and may also signal that the body is overreacting to allergens. Also, since the immune response can vary with different antigens, measuring specific antibody levels aids in the interpretation of the immune response after immunization or vaccination. Immune deficiencies usually present in childhood. In immunology and allergy clinics, a method that is fast and reliable, and that allows more convenient and uncomplicated sampling from children, would therefore be more useful than the classical methods for the diagnosis and follow-up of diseases, especially childhood hypogammaglobulinemia. In this work, the antibodies were attached to the electrode surface via a poly(hydroxyethyl methacrylamide)-cysteine nanopolymer, and the anodic peak responses obtained in the electrochemical study were evaluated. According to the data obtained, immunoglobulin determination can be performed with a biosensor. In further studies it will be useful to develop a medical diagnostic kit through biomedical engineering and to increase the sensor's sensitivity.

Keywords: biosensor, immunosensor, immunoglobulin, infection

Procedia PDF Downloads 109
255 Implicit U-Net Enhanced Fourier Neural Operator for Long-Term Dynamics Prediction in Turbulence

Authors: Zhijie Li, Wenhui Peng, Zelong Yuan, Jianchun Wang

Abstract:

Turbulence is a complex phenomenon that plays a crucial role in various fields, such as engineering, atmospheric science, and fluid dynamics. Predicting and understanding its behavior over long time scales remain challenging tasks. Traditional methods, such as large-eddy simulation (LES), have provided valuable insights but are computationally expensive. In the past few years, machine learning methods have developed rapidly, leading to significant improvements in computational speed; however, ensuring stable and accurate long-term predictions remains challenging for these methods. In this study, we introduce the implicit U-Net enhanced Fourier neural operator (IU-FNO) as a solution for stable and efficient long-term prediction of the nonlinear dynamics of three-dimensional (3D) turbulence. The IU-FNO model combines implicit recurrent Fourier layers to deepen the network and incorporates the U-Net architecture to accurately capture small-scale flow structures. We evaluate the performance of the IU-FNO model through extensive large-eddy simulations of three types of 3D turbulence: forced homogeneous isotropic turbulence (HIT), a temporally evolving turbulent mixing layer, and decaying homogeneous isotropic turbulence. The results demonstrate that the IU-FNO model outperforms other FNO-based models, including the vanilla FNO, implicit FNO (IFNO), and U-Net enhanced FNO (U-FNO), as well as the dynamic Smagorinsky model (DSM), in predicting various turbulence statistics. Specifically, the IU-FNO model exhibits improved accuracy in predicting the velocity spectrum, probability density functions (PDFs) of vorticity and velocity increments, and instantaneous spatial structures of the flow field. Furthermore, the IU-FNO model addresses the stability issues encountered in long-term predictions, which were limitations of previous FNO models.
In addition to its superior performance, the IU-FNO model offers faster computational speed compared to traditional large-eddy simulations using the DSM model. It also demonstrates generalization capabilities to higher Taylor-Reynolds numbers and unseen flow regimes, such as decaying turbulence. Overall, the IU-FNO model presents a promising approach for long-term dynamics prediction in 3D turbulence, providing improved accuracy, stability, and computational efficiency compared to existing methods.
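The defining operation of any Fourier layer, underlying all the FNO variants compared above, is a transform to Fourier space, a learned weighting of a truncated set of low-wavenumber modes, and a transform back. The 1D pure-Python sketch below illustrates only that core idea with a naive DFT and a trivial scalar weight; it is a conceptual toy, not the authors' IU-FNO implementation.

```python
# Core idea of a Fourier layer: transform to Fourier space, keep (and
# weight) only the lowest modes, transform back. Naive O(N^2) DFT in
# pure Python; a 1D conceptual sketch, not the IU-FNO itself.
import cmath
import math

def dft(x):
    n = len(x)
    return [sum(x[j] * cmath.exp(-2j * math.pi * k * j / n) for j in range(n))
            for k in range(n)]

def idft(X):
    n = len(X)
    return [sum(X[k] * cmath.exp(2j * math.pi * k * j / n) for k in range(n)) / n
            for j in range(n)]

def fourier_layer(x, k_max, weight=1.0):
    """Keep modes with |k| < k_max (scaled by `weight`), zero the rest.
    In a real FNO the scalar `weight` is a learned complex matrix per mode."""
    X = dft(x)
    n = len(X)
    kept = [X[k] * weight if (k < k_max or n - k < k_max) else 0.0
            for k in range(n)]
    return [v.real for v in idft(kept)]

n = 64
# Low-frequency signal plus a high-frequency component at wavenumber 20:
x = [math.sin(2 * math.pi * j / n) + 0.3 * math.sin(2 * math.pi * 20 * j / n)
     for j in range(n)]
y = fourier_layer(x, k_max=4)   # the k=20 component is filtered out
```

Stacking such layers with nonlinearities, recurrence (the "implicit" part), and a U-Net branch for the discarded small scales gives the family of architectures the abstract compares.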

Keywords: data-driven, Fourier neural operator, large eddy simulation, fluid dynamics

Procedia PDF Downloads 74
254 Numerical Simulation of Waves Interaction with a Free Floating Body by MPS Method

Authors: Guoyu Wang, Meilian Zhang, Chunhui LI, Bing Ren

Abstract:

In recent decades, a variety of floating structures, such as ships, offshore platforms, floating breakwaters, fish farms, and floating airports, have played a crucial role in ocean and marine engineering. Floating structures commonly experience wave loading, and the response of structures deployed in marine environments depends significantly on wave impacts. The interaction between surface waves and floating structures is therefore an important issue in ship and marine structure design for increasing performance and efficiency. With the progress of computational fluid dynamics, a number of time-domain numerical models based on the Navier-Stokes (NS) equations, such as finite difference and finite volume methods, have been developed to explore this problem. These traditional simulation techniques for moving bodies are grid-based and may encounter difficulties when treating large free-surface deformations and moving boundaries. In such models the moving structure, described in a Lagrangian formulation, must be appropriately represented on the grid, and special treatment of the moving boundary is inevitable; movement of the grid near the structure, or communication between the moving Lagrangian structure and the Eulerian mesh, increases the complexity of the algorithm. Fortunately, these challenges can be avoided by meshless particle methods. In the present study, a moving particle semi-implicit (MPS) model is explored for the numerical simulation of fluid-structure interaction in free-surface flows, especially the coupling of fluid with a moving rigid body. An equivalent momentum transfer method is proposed and derived for this coupling. The structure is discretized into a group of solid particles, which are treated as fluid particles and included in the solution of the NS equations together with the surrounding fluid particles. Momentum conservation is ensured by the transfer from these fluid particles to the corresponding solid particles; the positions of the solid particles are then updated so that the structure keeps its initial shape. Using the proposed method, the motion of a free-floating body in regular waves is studied numerically. The wave surface elevation and the dynamic response of the floating body are presented, and good agreement is found when the numerical results, such as the sway, heave, and roll of the floating body, are compared with experimental and other numerical data. It is demonstrated that the presented MPS model is effective for the numerical simulation of fluid-structure interaction.
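The momentum-transfer step described above can be sketched in a few lines: after the solid particles have been advanced as if they were fluid, the body's rigid velocity is recovered from their total momentum and the particles are moved together so the shape is preserved. The sketch below keeps only the translational part (a real MPS coupling also recovers angular momentum about the center of mass) and uses made-up values, so it is an illustration of the principle, not the paper's scheme.

```python
# Sketch of an equivalent-momentum-transfer step for a rigid body in a
# particle method: solid particles are first updated as fluid, then the
# body's translational velocity is recovered by momentum conservation
# and all particles are moved rigidly. Rotation is omitted for brevity;
# all values are illustrative.

def rigid_body_update(positions, velocities, masses, dt):
    """positions/velocities: lists of (x, y) per solid particle."""
    total_m = sum(masses)
    # Rigid translational velocity that conserves total linear momentum:
    vx = sum(m * v[0] for m, v in zip(masses, velocities)) / total_m
    vy = sum(m * v[1] for m, v in zip(masses, velocities)) / total_m
    # Advance every solid particle with the same velocity so the body
    # keeps its initial shape.
    new_pos = [(x + vx * dt, y + vy * dt) for x, y in positions]
    return new_pos, (vx, vy)

pos = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0), (1.0, 1.0)]       # square body
vel = [(0.9, 0.1), (1.1, -0.1), (1.0, 0.2), (1.0, -0.2)]     # post-fluid step
m = [1.0, 1.0, 1.0, 1.0]
new_pos, v_body = rigid_body_update(pos, vel, m, dt=0.1)
```

Because every particle receives the same velocity, inter-particle distances are unchanged, which is the shape-preservation property the abstract describes.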

Keywords: floating body, fluid structure interaction, MPS, particle method, waves

Procedia PDF Downloads 76
253 Mechanical and Material Characterization on the High Nitrogen Supersaturated Tool Steels for Die-Technology

Authors: Tatsuhiko Aizawa, Hiroshi Morita

Abstract:

Tool steels such as SKD11 and SKH51 are used as punch and die substrates for cold stamping, forging, and fine blanking processes. Heat-treated SKD11 punches with a hardness of 700 HV work well in the stamping of SPCC and other normal steel plates and of non-ferrous alloys such as brass sheet. However, they suffer severe damage in the fine blanking of holes smaller than 1.5 mm in diameter: under a high ratio of punch length to diameter, elastoplastic buckling of slender punches occurs on the production line, and the heat-treated punches risk chipping at their edges. To be free from such damage, a blanking punch must have sufficient rigidity and strength at the same time. In the present paper, a small-hole blanking punch with a dual-toughness structure is proposed as a solution to this production engineering issue. A low-temperature plasma nitriding process was used to form a thick nitrogen-supersaturated layer on the original SKD11 punch. Through plasma nitriding at 673 K for 14.4 ks, a nitrogen-supersaturated layer 50 μm thick and free of nitride precipitates was formed as a high-nitrogen steel (HNS) layer surrounding the original SKD11 punch. In this two-zone-structured SKD11 punch, the surface hardness increased from 700 HV, for the heat-treated SKD11, to 1400 HV. The outer high-nitrogen SKD11 (HN-SKD11) layer had a homogeneous nitrogen solute depth profile, with a nitrogen solute content plateau of 4 mass% extending to the border between the outer HN-SKD11 layer and the original SKD11 matrix. When stamping brass sheet 1 mm thick with this dually toughened SKD11 punch, the punch life was extended from 500 k shots to 10,000 k shots, yielding a much more stable production line for brass American snaps. Furthermore, with the aid of a masking technique, the 50 μm punch side-surface layer was modified by the high-nitrogen supersaturation process into a stripe structure in which un-nitrided SKD11 and HN-SKD11 layers alternate from the punch head to the punch bottom. This flexible structuring promoted the total rigidity and toughness required of a punch with an extremely small diameter.

Keywords: high nitrogen supersaturation, semi-dry cold stamping, solid solution hardening, tool steel dies, low temperature nitriding, dual toughness structure, extremely small diameter punch

Procedia PDF Downloads 88
252 A Cross-Sectional Study of Knowledge and Attitudes among College Students in a South Indian City about Intimate Partner Violence

Authors: Krithika Lakshmi Sathiya Moorthy

Abstract:

Introduction: Young people’s attitudes towards intimate partner violence (IPV) are likely to influence their future indulgence in, or abstinence from, IPV. We aimed to assess the knowledge and attitudes of college students in a south Indian city regarding IPV, its associated factors, and redressal mechanisms. Methods: A convenience sample of 247 students of medicine and engineering participated in this analytical cross-sectional study. They responded to a self-administered questionnaire developed and pretested for this study; the questionnaire comprises statements from a third person’s perspective and vignettes, to reduce social desirability bias. Clearance was obtained from the Institute Ethical Committee of Velammal Medical College Hospital and Research Institute, Madurai, India. Data were entered in EpiData Entry v3.1 (Odense, Denmark) and analysed using SPSS v20.0. Results: Among the 247 students, 116 (47%) were male and 59 (24.9%) hailed from rural areas. About 18% (43) of students believed that IPV was a problem only among females. Almost half of the students had witnessed IPV: at home between their parents (9.7%), between other family members (13.4%), in their neighbourhood (13%), or in public places (15%). Only 118 (47.8%) were aware that a law is in place in India to address IPV. The perceived risk factors for IPV were an alcoholic spouse (78.9%), low-income families (53.8%), personality traits (52.2%), and the dowry system (51%). A sizeable number of students (38.4%) believed that some amount of physical violence is allowable in a marital relationship, while 57.6% even considered IPV an expression of love. Males, compared with females, agreed more with negative gender stereotypes such as that a husband can ‘threaten his wife to ensure the welfare of the family’ (55% vs. 34%, p < 0.001), ‘spy on his wife to check fidelity’ (41% vs. 27%, p < 0.001), or ‘financially deprive a housewife to punish her’ (13% vs. 3.8%, p=0.001), and agreed that it is the ‘duty of the wife to comply with demands for sex from the husband’ (9.5% vs. 4.6%, p=0.3). About 32% of males and 25.6% of females foresaw themselves as perpetrators of IPV in the future. Conclusion: Knowledge about IPV and its associated risk factors in the study population was satisfactory. However, there was widespread acceptance of negative societal gender stereotypes, more so among males, and some degrees of IPV were considered acceptable between married couples. The study advocates the need to halt the propagation of negative gender stereotypes in impressionable young minds and to spread awareness that no degree of IPV is acceptable. This knowledge is also required to plan the content of, and choose the appropriate media for, communicating awareness about IPV to young people.

Keywords: attitude, India, intimate partner violence, knowledge, students

Procedia PDF Downloads 227
251 Learning Instructional Managements between the Problem-Based Learning and STEM Education Methods for Enhancing Students' Learning Achievements and Their Science Attitudes toward Physics at the 12th Grade Level

Authors: Achirawatt Tungsombatsanti, Toansakul Santiboon, Kamon Ponkham

Abstract:

The STEM education strategy aims to provide an interdisciplinary, applied approach to the instruction of science, technology, engineering, and mathematics, integrated so as to enhance students' engagement of their science skills; it was compared with the Problem-Based Learning (PBL) method in Borabu School, with a sample of 80 students in two classes at the 12th grade level, on their learning achievements on the electromagnetism topic. The research separated the sample into two instructional model groups: the 40-student experimental class was taught with the STEM instructional design of experimenting, preparation, and induction, while the 40-student control class was taught with PBL, in which students identify what they already know, what they need to know, and how and where to access the new information that may lead to the resolution of the problem. Learning-environment perceptions were obtained using the 35-item Physics Laboratory Environment Inventory (PLEI). Students' attitudes toward physics were assessed with the Test of Physics-Related Attitudes (TOPRA), an instrument that applies scaling to measure attitude objectively. Pretest and posttest comparisons were used to assess students' learning achievements under each instructional model separately. The findings revealed that the efficiencies of the PBL and STEM methods, based on the stated criteria, were higher than the standard 80/80 level. Statistically, students' learning achievements in the control and experimental physics class groups, under the PBL and STEM instructional designs respectively, differed significantly between groups at the .05 level. Comparisons of average mean scores showed that students' responses to the instructional activities of the STEM education method were higher than those for the PBL model. Concerning associations between students' perceptions of their physics classes and their attitudes toward physics, the predictive-efficiency R² values indicate that 77% and 83% of the variance in students' attitudes, for the PLEI and the TOPRA respectively, was attributable to their perceptions of the PBL and STEM instructional design classes. An important contribution of these findings is evidence on students' understanding of scientific concepts, attitudes, and skills, with STEM instruction eliciting higher responses than PBL teaching. Overall, students' learning achievements differed significantly between the pre- and post-assessments under both instructional models.

Keywords: learning instructional managements, problem-based learning, STEM education method, enhancement, students' learning achievements, science attitude, physics classes

Procedia PDF Downloads 228
250 Finite Element Analysis of the Drive Shaft and Jacking Frame Interaction in Micro-Tunneling Method: Case Study of Tehran Sewerage

Authors: B. Mohammadi, A. Riazati, P. Soltan Sanjari, S. Azimbeik

Abstract:

The ever-increasing growth of civic demands on the one hand, and urban constraints on newly established infrastructure on the other, force engineering committees to apply non-conflicting methods in order to optimize results. One such optimized procedure for establishing main sewerage networks is the pipe jacking and micro-tunneling method. The raw information and research are based on the slurry micro-tunneling project of the Tehran main sewerage network executed by the KAYSON company. The 4985-meter route of the project, located near Azadi Square and some of the most vital arteries of Tehran, has currently reached 45% physical progress. The boring machine is made by Herrenknecht, and the diameters of the concrete-polymer pipes used are 1600 and 1800 millimeters. Placing and excavating several shafts in the ground, and boring the tunnel directly between the axes of these shafts, is one of the requirements of micro-tunneling. Positioning the shafts must take into account the hydraulic circumstances, civic conditions, site geography, traffic constraints, etc. The profile length has to be converted into many short segment lines, so that the angles generated between segments are based at the manhole centers. Each segment line between two consecutive drive and receive shafts determines the jack location, driving angle, and path alignment; thus, the diversity of these angles causes a variety of jack positions within the shaft. The fixing conditions of the jacking frame and the direction of its associated dynamic load produce various patterns of stress and strain distribution, creating fatigue in the shaft wall and the soil surrounding the shaft. This diversity of patterns deforms the shaft wall, causes unbalanced subsidence, and alters the pipe jacking stress contour.
This research is based on the experiments of Tehran's west sewerage plan and on numerical analysis of the interaction between the soil around the shaft, the shaft walls, and the jacking frame direction; finally, suitable and unsuitable locations for the pipe jacking shaft are determined.

Keywords: underground structure, micro-tunneling, fatigue analysis, dynamic soil-structure interaction, underground water, finite element analysis

Procedia PDF Downloads 318
249 Nanofluidic Cell for Resolution Improvement of Liquid Transmission Electron Microscopy

Authors: Deybith Venegas-Rojas, Sercan Keskin, Svenja Riekeberg, Sana Azim, Stephanie Manz, R. J. Dwayne Miller, Hoc Khiem Trieu

Abstract:

Liquid Transmission Electron Microscopy (TEM) is a growing area with a broad range of applications from physics and chemistry to materials engineering and biology, in which it is possible to image previously unseen phenomena in situ. For this, a nanofluidic device is used to insert the nanoflow with the sample into the microscope, keeping the liquid encapsulated against the high vacuum. In recent years, Si3N4 windows have been widely used because of their mechanical stability and low imaging contrast. Nevertheless, the pressure difference between the fluid inside and the vacuum outside in the TEM causes the windows to bulge. This increases the imaged fluid volume, which decreases the signal-to-noise ratio (SNR) and limits the achievable spatial resolution. In the proposed device, the membrane is reinforced with a microstructure capable of withstanding higher pressure differences and almost completely eliminating the bulging. A theoretical study is presented, with Finite Element Method (FEM) simulations that provide a deep understanding of the mechanical conditions of the membrane and prove the effectiveness of this novel concept. Bulging and von Mises stress were studied for different membrane dimensions, geometries, materials, and thicknesses. The device was microfabricated from a thin wafer coated with thin layers of SiO2 and Si3N4. After the lithography process, these layers were etched (reactive ion etching and buffered oxide etch (BOE), respectively). After that, the microstructure was etched (deep reactive ion etching). Then the backside SiO2 was etched (BOE) and the array of free-standing micro-windows was obtained. Additionally, a Pyrex wafer was patterned with windows and inlets/outlets and bonded (anodic bonding) to the Si side to facilitate handling of the thin wafer. Finally, a thin spacer is sputtered and patterned with microchannels and trenches to guide the nanoflow with the samples.
This approach considerably reduces the common bulging problem of the window, improving the SNR, contrast, and spatial resolution, while substantially increasing the mechanical stability of the windows and allowing a larger viewing area. These developments lead to a wider range of applications of liquid TEM, expanding the spectrum of possible experiments in the field.
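The benefit of subdividing the window can be sketched with linear plate theory. The material constants and dimensions below are illustrative assumptions, and for membranes this thin the small-deflection formula grossly overstates the absolute deflection (the real response is stretching-limited); it is quoted only for its strong fourth-power scaling in window size, which shows why a support microstructure that splits one large window into small sub-windows suppresses bulging:

```python
def center_deflection(p, a, t, E, nu=0.27):
    """Small-deflection estimate for a clamped square plate (side a,
    thickness t) under uniform pressure p, from linear plate theory."""
    D = E * t**3 / (12.0 * (1.0 - nu**2))  # flexural rigidity
    return 0.00126 * p * a**4 / D          # clamped-square-plate coefficient

# Illustrative, assumed values: Si3N4 modulus, ~1 atm across the window.
E_SI3N4 = 250e9    # Pa
P_ATM = 101325.0   # Pa
T_MEM = 50e-9      # 50 nm membrane thickness

w_large = center_deflection(P_ATM, 500e-6, T_MEM, E_SI3N4)  # one 500 um window
w_small = center_deflection(P_ATM, 50e-6, T_MEM, E_SI3N4)   # 50 um sub-window
print(f"deflection ratio: {w_large / w_small:.0f}")  # (500/50)^4 = 10000
```

The a⁴ dependence means a tenfold smaller window bulges, in the bending limit, ten-thousand-fold less for the same pressure difference.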

Keywords: liquid cell, liquid transmission electron microscopy, nanofluidics, nanofluidic cell, thin films

Procedia PDF Downloads 255
248 Sweepline Algorithm for Voronoi Diagram of Polygonal Sites

Authors: Dmitry A. Koptelov, Leonid M. Mestetskiy

Abstract:

The Voronoi Diagram (VD) of a finite set of disjoint simple polygons, called sites, is a partition of the plane into loci (one region per site) consisting of the points that are closer to a given site than to all others. A set of polygons is a universal model for many applications in engineering, geoinformatics, design, computer vision, and graphics. VD construction for polygons is usually done by reduction to the task of constructing the VD of segments, for which there are effective O(n log n) algorithms for n segments. The reduction also includes preprocessing (constructing segments from the polygons' sides) and postprocessing (constructing each polygon's locus by merging the loci of its sides). This approach does not take into account two specific properties of the resulting segment sites. First, all these segments are connected in pairs at the vertices of the polygons. Second, on one side of each segment lies the interior of the polygon; the polygon is obviously included in its own locus. Using these properties in the VD construction algorithm is a way to reduce computation. This article proposes an algorithm for the direct construction of the VD of polygonal sites. The algorithm is based on the sweepline paradigm, which makes it possible to exploit these properties effectively. The solution is still performed by reduction: preprocessing constructs a set of sites from the vertices and edges of the polygons, each site oriented so that the interior of the polygon lies to its left. The proposed algorithm constructs the VD for the set of oriented sites with the sweepline paradigm. Postprocessing selects the edges of this VD formed by the centers of empty circles touching different polygons. The improved efficiency of the proposed sweepline algorithm over the general Fortune algorithm is achieved through the following fundamental solutions: 1. The algorithm constructs only those VD edges that lie outside the polygons.
The concept of oriented sites makes it possible to avoid constructing VD edges located inside the polygons. 2. The event list in the sweepline algorithm has a special property: the majority of events are connected with "medium" polygon vertices, where one incident polygon side lies behind the sweepline and the other in front of it. The proposed algorithm processes such events in constant time rather than logarithmic time, as in the general Fortune algorithm. The proposed algorithm is fully implemented and tested on a large number of examples. Its high reliability and efficiency are also confirmed by computational experiments with complex sets of several thousand polygons. It should be noted that, despite the considerable time that has passed since the publication of Fortune's algorithm in 1986, no full-scale implementation of that algorithm for an arbitrary set of segment sites has been made. The proposed algorithm fills this gap for an important special case: a set of sites formed by polygons.
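The locus definition above can be illustrated with a brute-force sketch. This is not the sweepline algorithm itself, only a direct application of the definition (assign each query point to the polygonal site whose boundary is nearest), which is useful as a reference against which a VD implementation can be validated; the two square sites are hypothetical:

```python
import math

def dist_point_segment(p, a, b):
    """Euclidean distance from point p to the segment ab."""
    (px, py), (ax, ay), (bx, by) = p, a, b
    dx, dy = bx - ax, by - ay
    if dx == 0 and dy == 0:
        return math.hypot(px - ax, py - ay)
    # Clamp the projection parameter to [0, 1] to stay on the segment.
    t = max(0.0, min(1.0, ((px - ax) * dx + (py - ay) * dy) / (dx * dx + dy * dy)))
    return math.hypot(px - (ax + t * dx), py - (ay + t * dy))

def dist_to_polygon(p, poly):
    """Distance from p to the boundary of the polygon (vertex list)."""
    return min(dist_point_segment(p, poly[i], poly[(i + 1) % len(poly)])
               for i in range(len(poly)))

# Two hypothetical unit-square sites; their bisector is the line x = 2.
site1 = [(0, 0), (1, 0), (1, 1), (0, 1)]
site2 = [(3, 0), (4, 0), (4, 1), (3, 1)]

def nearest_site(p):
    """Which site's locus contains p, by the closer-boundary definition."""
    return 1 if dist_to_polygon(p, site1) <= dist_to_polygon(p, site2) else 2

print(nearest_site((1.9, 0.5)), nearest_site((2.1, 0.5)))  # -> 1 2
```

Points just left of x = 2 fall in the first square's locus and points just right of it in the second's, tracing the VD edge between the two sites; a sweepline implementation computes these edges directly instead of sampling.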

Keywords: Voronoi diagram, sweepline, polygon sites, Fortune's algorithm, segment sites

Procedia PDF Downloads 177
247 Integrated Management System Applied in Dismantling and Waste Management of the Primary Cooling System from the VVR-S Nuclear Reactor Magurele, Bucharest

Authors: Radu Deju, Carmen Mustata

Abstract:

The VVR-S nuclear research reactor owned by the Horia Hulubei National Institute of Physics and Nuclear Engineering (IFIN-HH) was designed for research and radioisotope production and was permanently shut down in 2002, after 40 years of operation. All of the spent nuclear fuel of S-36 and EK-10 type was returned to the Russian Federation (the first shipment in 2009 and the last in 2012), and the radioactive waste resulting from its reprocessing will remain permanently in the Russian Federation. The decommissioning strategy chosen is immediate dismantling. At this moment, radionuclides with half-lives shorter than one year make only a minor contribution to the contamination of materials and equipment used in the reactor department. The decommissioning of the reactor started in 2010 and is planned to be finalized in 2020; it is the first nuclear research reactor in South-East Europe to have started a decommissioning project. The management system applied in the decommissioning of the VVR-S research reactor integrates all common elements of management: nuclear safety, occupational health and safety, environment, quality (compliance with the requirements for decommissioning activities), physical protection, and economic elements. This paper presents the application of the integrated management system to the decommissioning of the systems, structures, equipment and components (SSEC) of the pumps room, including the management of the resulting radioactive waste. The primary cooling system of this type of reactor includes circulation pumps, heat exchangers, a degasser, ion-exchange filters, piping connections, a drainage system and radioactive leak collection. All the decommissioning activities of the primary circuit were performed in stage 2 (2014), and they were developed and recorded according to the applicable documents, within the requirements of the Regulatory Body licenses.
The presentation emphasizes how the provisions of the integrated management system are applied in the dismantling of the primary cooling system, covering the elaboration, approval and application of the necessary documentation and record keeping before, during and after the dismantling activities. Radiation protection and economics are the key factors in the selection of the proper technology; dedicated and advanced technologies were chosen to perform specific tasks, and safety aspects were taken into consideration. Resource constraints were also an important issue in defining the decommissioning strategy. Important aspects such as radiological monitoring of personnel and areas, decontamination, waste management and final characterization of the released site are demonstrated and documented.

Keywords: decommissioning, integrated management system, nuclear reactor, waste management

Procedia PDF Downloads 289
246 Towards Accurate Velocity Profile Models in Turbulent Open-Channel Flows: Improved Eddy Viscosity Formulation

Authors: W. Meron Mebrahtu, R. Absi

Abstract:

Velocity distribution in turbulent open-channel flows is organized in a complex manner, due to the large spatial and temporal variability of fluid motion resulting from the free-surface turbulent flow condition. The phenomenon is further complicated by the complex geometry of channels and the presence of transported solids. Several efforts have therefore been made to understand the phenomenon and obtain accurate mathematical models suitable for engineering applications. However, predictions remain inaccurate because oversimplified assumptions are involved in modeling this complex phenomenon. The aim of this work is therefore to study velocity distribution profiles and obtain simple, more accurate, and predictive mathematical models. Particular focus is placed on acceptable simplifications of the general transport equations and on an accurate representation of the eddy viscosity. A wide rectangular open channel seems suitable for beginning the study; the other assumptions are a smooth wall and sediment-free flow under steady, uniform conditions. These assumptions allow examining the effects of the bottom wall and the free surface only, a necessary step before dealing with more complex flow scenarios. For this flow condition, two ordinary differential equations for the velocity profile are obtained: one from the Reynolds-averaged Navier-Stokes (RANS) equation and one from the equilibrium between turbulent kinetic energy (TKE) production and dissipation. Different analytic models for the eddy viscosity, TKE, and mixing length were then assessed. Computed velocity profiles were compared to experimental data for different flow conditions and to the well-known linear, log, and log-wake laws. Results show that the model based on the RANS equation provides more accurate velocity profiles. In the viscous sublayer and buffer layer, the method based on Prandtl's eddy viscosity model and the Van Driest mixing length gives a more precise result.
For the log layer and outer region, a mixing-length equation derived from von Karman's similarity hypothesis provides the best agreement with measured data, except near the free surface, where an additional correction based on a damping function for the eddy viscosity is used. This method yields more accurate velocity profiles with the same value of the damping coefficient across different flow conditions. This work continues with the investigation of narrow channels, complex geometries, and the effect of solids transported in sewers.
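As a sketch of the inner-layer closure mentioned above (not the authors' exact formulation), the velocity profile implied by a Van Driest damped mixing length can be integrated numerically. Assuming constant total shear stress in wall units, (1 + l⁺² du⁺/dy⁺) du⁺/dy⁺ = 1, solving the quadratic gives du⁺/dy⁺ = 2 / (1 + √(1 + 4 l⁺²)); the constants κ = 0.41 and A⁺ = 26 are standard assumed values:

```python
import math

KAPPA, A_PLUS = 0.41, 26.0  # von Karman constant, Van Driest damping constant

def mixing_length_plus(y_plus):
    """Van Driest damped mixing length in wall units."""
    return KAPPA * y_plus * (1.0 - math.exp(-y_plus / A_PLUS))

def u_plus_at(y_plus_max, dy=0.01):
    """Integrate du+/dy+ = 2 / (1 + sqrt(1 + 4 l+^2)) from the wall,
    the solution of the constant-total-shear-stress balance."""
    u_plus, y_plus = 0.0, 0.0
    while y_plus < y_plus_max:
        l = mixing_length_plus(y_plus + 0.5 * dy)  # midpoint rule
        u_plus += dy * 2.0 / (1.0 + math.sqrt(1.0 + 4.0 * l * l))
        y_plus += dy
    return u_plus

print(round(u_plus_at(5.0), 2))    # viscous sublayer: u+ close to y+
print(round(u_plus_at(100.0), 2))  # log layer: close to (1/0.41) ln(100) + B
```

The integrated profile recovers u⁺ ≈ y⁺ near the wall and the logarithmic law further out, which is the behavior the sublayer/buffer-layer treatment described above must reproduce.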

Keywords: accuracy, eddy viscosity, sewers, velocity profile

Procedia PDF Downloads 112
245 The 2017 Summer Campaign for Night Sky Brightness Measurements on the Tuscan Coast

Authors: Andrea Giacomelli, Luciano Massetti, Elena Maggi, Antonio Raschi

Abstract:

The presentation reports the activities managed during the summer of 2017 by a team composed of staff from a university department, a National Research Council institute, and an outreach NGO, collecting measurements of night sky brightness and other information on artificial lighting in order to characterize light pollution issues on portions of the Tuscan coast, in Central Italy. These activities combine measurements collected by the principal scientists, citizen science observations led by students, and outreach events targeting a broad audience. The campaign aggregates the efforts of three actors: the BuioMetria Partecipativa project, which started collecting light pollution data on a national scale in 2008 with a core team in environmental engineering and free/open-source GIS; the Institute of Biometeorology of the National Research Council, with ongoing studies on light and urban vegetation and a consolidated track record in environmental education and citizen science; and the Department of Biology of the University of Pisa, which started experiments to assess the impact of light pollution on coastal environments in 2015. While the core of the activities concerns in situ data, the campaign will also account for remote sensing data, thus considering heterogeneous data sources. The aim of the campaign is twofold: (1) to test actions of citizen and student engagement in monitoring sky brightness; (2) to collect night sky brightness data and test a protocol for application to studies on the ecological impact of light pollution, with a special focus on marine coastal ecosystems.
The collaboration of an interdisciplinary team in the study of artificial lighting issues is not common in Italy, and undertaking the campaign in Tuscany has the added value of operating in one of the territories where it is possible to observe both sites with extremely high lighting levels and areas with extremely low light pollution, especially in the southern part of the region. By combining environmental monitoring and communication actions, the campaign will contribute to promoting good-quality night skies as an important asset for the sustainability of coastal ecosystems, as well as to increasing citizen awareness through stargazing, night photography and active participation in field measurements.

Keywords: citizen science, light pollution, marine coastal biodiversity, environmental education

Procedia PDF Downloads 173
244 Enhancing Large Language Models' Data Analysis Capability with Planning-and-Execution and Code Generation Agents: A Use Case for Southeast Asia Real Estate Market Analytics

Authors: Kien Vu, Jien Min Soh, Mohamed Jahangir Abubacker, Piyawut Pattamanon, Soojin Lee, Suvro Banerjee

Abstract:

Recent advances in Generative Artificial Intelligence (GenAI), in particular Large Language Models (LLMs), have shown promise to disrupt multiple industries at scale. However, LLMs also present unique challenges, notably so-called "hallucinations": the generation of outputs that are not grounded in the input data, which hinders adoption in production. A common practice to mitigate the hallucination problem is to use a Retrieval-Augmented Generation (RAG) system to ground LLMs' responses in ground truth. RAG converts the grounding documents into embeddings, retrieves the relevant parts using vector similarity between the user's query and the documents, and then generates a response based not only on the model's pre-trained knowledge but also on the specific information from the retrieved documents. However, the RAG approach is not suitable for tabular data and subsequent data analysis tasks, for reasons including information loss, data format, and the retrieval mechanism. In this study, we explore a novel methodology that combines planning-and-execution and code generation agents to enhance LLMs' data analysis capabilities. The approach enables LLMs to autonomously dissect a complex analytical task into simpler sub-tasks and requirements, convert them into executable segments of code, and, in the final step, generate the complete response from the output of the executed code. When a beta version was deployed on DataSense, the property insight tool of PropertyGuru, the approach yielded promising results: it was able to serve market insight and data visualization needs with high accuracy and extensive coverage, abstracting the complexities away from real estate agents and developers with non-programming backgrounds. In essence, the methodology not only refines the analytical process but also serves as a strategic tool for real estate professionals, aiding market understanding without the need for programming skills.
The implications extend beyond immediate analytics, paving the way for a new era in the real estate industry characterized by efficiency and advanced data utilization.
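The planning-and-execution pipeline described above can be sketched as follows. The `plan` and `generate_code` functions are stubs standing in for LLM calls, and the toy listings table is invented for illustration; only the control flow (decompose the task, generate a code segment per sub-task, execute the segments in a shared namespace, return the final result) mirrors the described architecture:

```python
LISTINGS = [  # toy tabular data standing in for a real estate dataset
    {"district": "D1", "price": 500_000},
    {"district": "D1", "price": 700_000},
    {"district": "D2", "price": 300_000},
]

def plan(task):
    """Planner agent: dissect the task into simpler sub-tasks (stubbed)."""
    return ["filter rows for district D1", "compute the average price"]

def generate_code(subtask):
    """Code-generation agent: emit an executable segment (stubbed)."""
    if subtask.startswith("filter"):
        return "rows = [r for r in LISTINGS if r['district'] == 'D1']"
    return "answer = sum(r['price'] for r in rows) / len(rows)"

def run(task):
    """Executor: run each generated segment in one shared namespace."""
    namespace = {"LISTINGS": LISTINGS}
    for subtask in plan(task):
        exec(generate_code(subtask), namespace)
    return namespace["answer"]

print(run("average asking price in district D1"))  # 600000.0
```

In the real system each stub would be an LLM prompt, and the executed outputs would be summarized into the final natural-language response; executing generated code against the actual table is what lets the system avoid the information loss a RAG pipeline incurs on tabular data.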

Keywords: large language model, reasoning, planning and execution, code generation, natural language processing, prompt engineering, data analysis, real estate, DataSense, PropertyGuru

Procedia PDF Downloads 88
243 Self-Sensing Concrete Nanocomposites for Smart Structures

Authors: A. D'Alessandro, F. Ubertini, A. L. Materazzi

Abstract:

In the field of civil engineering, Structural Health Monitoring is a topic of growing interest. Effective monitoring instruments permit control of the working conditions of structures and infrastructure through the identification of behavioral anomalies due to incipient damage, especially in areas of high environmental hazard such as earthquake zones. While traditional sensors can be applied only at a limited number of points, providing partial information for a structural diagnosis, novel transducers may allow diffuse sensing. Thanks to the new tools and materials provided by nanotechnology, new types of multifunctional sensors are emerging in the scientific panorama. In particular, cement-matrix composite materials capable of diagnosing their own state of strain and stress can be obtained by the addition of specific conductive nanofillers. Because of the nature of the material they are made of, these new cementitious nano-modified transducers can be inserted within concrete elements, transforming the structures themselves into sets of widespread sensors. This paper presents the results of research on a new self-sensing nanocomposite and on the implementation of smart sensors for Structural Health Monitoring. The nanocomposite was obtained by inserting multi-walled carbon nanotubes within a cementitious matrix. The insertion of such conductive carbon nanofillers provides the base material with piezoresistive characteristics and a peculiar sensitivity to mechanical modifications. The self-sensing ability is achieved by correlating the variation of the external stress or strain with the variation of some electrical property, such as the electrical resistance or conductivity. Through the measurement of such electrical characteristics, the performance and working conditions of an element or a structure can be monitored.
Among conductive carbon nanofillers, carbon nanotubes seem particularly promising for the realization of self-sensing cement-matrix materials. Some issues, related to the dispersion of the nanofillers and to the influence of the amount of nano-inclusions in the cement matrix, need to be carefully investigated, since the strain sensitivity of the resulting sensors depends on such factors. This work analyzes the dispersion of the carbon nanofillers, the physical properties of the fresh mixture, the electrical properties of the hardened composites, and the sensing properties of the realized sensors. The experimental campaign focuses specifically on their dynamic characterization and their applicability to the monitoring of full-scale elements. The results of the electromechanical tests with both slowly varying and dynamic loads show that the developed nanocomposite sensors can be effectively used for the health monitoring of structures.
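The self-sensing principle described above, correlating strain with the fractional change in electrical resistance, can be sketched as follows. The gauge factor used is an assumed illustrative value, not a measured property of the composite in the abstract:

```python
# Assumed illustrative sensitivity of a CNT-cement composite; the real
# value must be calibrated experimentally for each mix.
GAUGE_FACTOR = 150.0

def strain_from_resistance(r0_ohm, r_ohm, gauge_factor=GAUGE_FACTOR):
    """Infer axial strain from baseline and loaded resistance readings,
    using the piezoresistive relation delta_R / R0 = GF * strain."""
    delta_r_over_r0 = (r_ohm - r0_ohm) / r0_ohm
    return delta_r_over_r0 / gauge_factor

# A 1% resistance rise with GF = 150 corresponds to ~67 microstrain.
eps = strain_from_resistance(1000.0, 1010.0)
print(f"{eps * 1e6:.1f} microstrain")
```

Monitoring then reduces to logging resistance over time and converting each reading through the calibrated gauge factor, which is what allows an embedded sensor (or the structural element itself) to report its own state of strain.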

Keywords: carbon nanotubes, self-sensing nanocomposites, smart cement-matrix sensors, structural health monitoring

Procedia PDF Downloads 227