Search results for: soft tissue engineering applications

204 Social Media Data Analysis for Personality Modelling and Learning Styles Prediction Using Educational Data Mining

Authors: Srushti Patil, Preethi Baligar, Gopalkrishna Joshi, Gururaj N. Bhadri

Abstract:

In designing learning environments, instructional strategies can be tailored to suit the learning style of an individual to ensure effective learning. In this study, information shared on social media such as Facebook is used to predict the learning style of a learner. Previous research studies have shown that Facebook data can be used to predict user personality. Users with a particular personality exhibit an inherent pattern in their digital footprint on Facebook. The proposed work aims to correlate users' personality, predicted from Facebook data, with their learning styles, predicted through questionnaires. For Millennial learners, Facebook has become a primary means of information sharing and interaction with peers. Thus, it can serve as a rich bed for research and direct the design of learning environments. The authors conducted this study in an undergraduate freshman engineering course. Data from 320 freshman Facebook users were collected. The same users also participated in the learning style and personality prediction survey. Kolb's learning style questionnaire and the Big Five personality inventory were adopted for the survey. The users agreed to participate in this research and signed individual consent forms. A specific page was created on Facebook to collect user data such as personal details, status updates, comments, demographic characteristics, and egocentric network parameters. These data were captured by an application written in Python. The data captured from Facebook were subjected to text analysis using the Linguistic Inquiry and Word Count (LIWC) dictionary. Analysis of the data collected from the questionnaires reveals each student's personality and learning style. The results obtained from the analysis of the Facebook, learning style, and personality data were then fed into an automatic classifier trained using data mining techniques such as rule-based classifiers and decision trees. This helps to predict user personality and learning styles by analysing the common patterns. Rule-based classifiers applied to the text analysis help categorize Facebook data as positive, negative, or neutral. Two models were trained in total: one to predict personality from Facebook data and another to predict learning styles from the predicted personalities. The results show that the classifier models have high accuracy, which makes the proposed method a reliable one for predicting user personality and learning styles.
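
A minimal sketch of the two-stage pipeline described above is given below, assuming a scikit-learn style workflow; the CSV file, column names, and tree depths are hypothetical placeholders, not the authors' actual features or settings.

```python
# Hypothetical sketch of the two-stage pipeline: stage 1 predicts a Big Five
# personality label from LIWC text features, stage 2 predicts a Kolb learning
# style from the predicted personality. File and column names are placeholders.
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

df = pd.read_csv("facebook_liwc_features.csv")          # placeholder dataset
liwc_cols = [c for c in df.columns if c.startswith("liwc_")]

X_train, X_test, y_train, y_test = train_test_split(
    df[liwc_cols], df["personality_label"], test_size=0.3, random_state=0)

# Stage 1: personality from LIWC features
stage1 = DecisionTreeClassifier(max_depth=5, random_state=0).fit(X_train, y_train)
print("stage-1 accuracy:", stage1.score(X_test, y_test))

# Stage 2: learning style from the (one-hot encoded) predicted personality
personality = pd.get_dummies(pd.Series(stage1.predict(df[liwc_cols])))
stage2 = DecisionTreeClassifier(max_depth=3, random_state=0).fit(personality, df["learning_style"])
```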

Keywords: educational data mining, Facebook, learning styles, personality traits

Procedia PDF Downloads 201
203 Different Data-Driven Bivariate Statistical Approaches to Landslide Susceptibility Mapping (Uzundere, Erzurum, Turkey)

Authors: Azimollah Aleshzadeh, Enver Vural Yavuz

Abstract:

The main goal of this study is to produce landslide susceptibility maps using different data-driven bivariate statistical approaches, namely the entropy weight method (EWM), evidence belief function (EBF), and information content model (ICM), in Uzundere county, Erzurum province, in the north-eastern part of Turkey. Past landslide occurrences were identified and mapped from interpretation of high-resolution satellite images and earlier reports, as well as through field surveys. In total, 42 landslide incidence polygons were mapped using ArcGIS 10.4.1 software and randomly split into a construction dataset of 70% (30 landslide incidences) for building the EWM, EBF, and ICM models, while the remaining 30% (12 landslide incidences) were used for verification purposes. Twelve layers of landslide-predisposing parameters were prepared, including total surface radiation, maximum relief, soil groups, standard curvature, distance to stream/river sites, distance to the road network, surface roughness, land use pattern, engineering geological rock group, topographical elevation, orientation of slope, and terrain slope gradient. The relationships between the landslide-predisposing parameters and the landslide inventory map were determined using the different statistical models (EWM, EBF, and ICM). The model results were validated with landslide incidences that were not used during model construction. In addition, receiver operating characteristic curves were applied, and the area under the curve (AUC) was determined for the different susceptibility maps using the success (construction data) and prediction (verification data) rate curves. The results revealed that the AUCs for the success rates are 0.7055, 0.7221, and 0.7368, while those for the prediction rates are 0.6811, 0.6997, and 0.7105 for the EWM, EBF, and ICM models, respectively. Consequently, the landslide susceptibility maps were classified into five susceptibility classes: very low, low, moderate, high, and very high. Additionally, the portion of construction and verification landslide incidences falling in the high and very high landslide susceptibility classes of each map was determined. The results showed that the EWM, EBF, and ICM models produced satisfactory accuracy. The obtained landslide susceptibility maps may be useful for future natural hazard mitigation studies and planning purposes for environmental protection.
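
The entropy weight method step can be illustrated with a minimal sketch; the frequency-ratio table below is purely illustrative and not the study's data.

```python
# Illustrative entropy weight method (EWM): each parameter layer is weighted by
# how unevenly landslides are distributed across its classes. Values are placeholders.
import numpy as np

# rows: classes of a parameter layer, columns: parameter layers;
# each entry is a frequency ratio (landslide density in class / overall density)
fr = np.array([[1.2, 0.4, 0.9],
               [0.8, 1.6, 1.1],
               [1.5, 0.7, 0.6]])

P = fr / fr.sum(axis=0)                                   # normalize each layer to probabilities
H = -(P * np.log(P)).sum(axis=0) / np.log(fr.shape[0])    # entropy per layer, in [0, 1]
w = (1.0 - H) / (1.0 - H).sum()                           # entropy weights per layer
susceptibility = fr @ w                                   # weighted susceptibility index per class
print("weights:", w, "susceptibility:", susceptibility)
```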

Keywords: entropy weight method, evidence belief function, information content model, landslide susceptibility mapping

Procedia PDF Downloads 111
202 Personality Characteristics, Managerial Skills, and Career Preference

Authors: Dinesh Kumar Srivastava

Abstract:

After liberalization of the economy, technical education has seen rapid growth in India. A large number of institutions offer various engineering and management programmes. Every year, many students complete B.Tech/M.Tech and MBA programmes at different institutes and universities in India and search for jobs in industry. A large number of companies visit educational institutes for campus placements. These companies are interested in hiring competent managers. Most students show a preference for jobs from reputed companies and jobs offering high compensation. In this context, this study was conducted to understand the career preferences of postgraduate students and junior executives. Personality characteristics influence work life as well as personal life. In the last two decades, the five-factor model of personality has been found to be a valid predictor of job performance and job satisfaction. This approach has received support from studies conducted in different countries. It includes neuroticism, extraversion, openness to experience, agreeableness, and conscientiousness. Similarly, three social needs, namely achievement, affiliation, and power, influence motivation and performance in certain job functions. Both approaches have been considered in the study. The objective of the study was, first, to analyse the relationship between personality characteristics and the career preferences of students and executives. Secondly, the study analysed the relationship between personality characteristics and the skills of students. Three managerial skills, namely conceptual, human, and technical, have been considered in the study. The sample size of the study was 266, including postgraduate students and junior executives. Respondents had completed a BE/B.Tech/MBA programme. Three dimensions of career preference, namely identity, variety, and security, and the three managerial skills were considered as dependent variables. The results indicated that neuroticism was not related to any dimension of career preference. Extraversion was not related to identity, variety, or security, but it was positively related to the three skills. Openness to experience was positively related to the skills. Conscientiousness was positively related to variety, and it was positively related to the three skills. Similarly, the relationship between social needs and career preference was examined using correlation. The results indicated that need for achievement was positively related to variety, identity, and security, and positively related to managerial skills. Need for affiliation was positively related to the three dimensions of career preference as well as to managerial skills. Need for power was positively related to the three dimensions of career preference and to managerial skills. Social needs appear to be a stronger predictor of career preference and managerial skills than the Big Five traits. The findings have implications for the selection process in industry.
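
The correlation analysis described above can be sketched as follows; the file name and variable names are hypothetical placeholders for the survey scores.

```python
# Illustrative sketch of the trait-versus-outcome correlation analysis;
# the CSV file and column names are placeholders, not the study's variables.
import pandas as pd
from scipy.stats import pearsonr

df = pd.read_csv("survey_scores.csv")   # placeholder: one row per respondent
traits = ["neuroticism", "extraversion", "openness", "agreeableness",
          "conscientiousness", "n_achievement", "n_affiliation", "n_power"]
outcomes = ["identity", "variety", "security",
            "conceptual_skill", "human_skill", "technical_skill"]

for t in traits:
    for o in outcomes:
        r, p = pearsonr(df[t], df[o])
        print(f"{t:>18} vs {o:<16} r={r:+.2f} p={p:.3f}")
```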

Keywords: big five traits, career preference, personality, social needs

Procedia PDF Downloads 254
201 Increment of Panel Flutter Margin Using Adaptive Stiffeners

Authors: S. Raja, K. M. Parammasivam, V. Aghilesh

Abstract:

Fluid-structure interaction is a crucial consideration in the design of many engineering systems such as flight vehicles and bridges. Aircraft lifting surfaces and turbine blades can fail due to oscillations caused by fluid-structure interaction. Hence, the present research focuses on fluid-structure interaction. First, the free vibration behaviour of the panel is studied. It is well known that the deformation of a panel and the flow-induced forces affect one another. The selected panel has a span of 300 mm, a chord of 300 mm, and a thickness of 2 mm. The effects of stiffener cross-sectional area and stiffener location are then studied for the same panel. The stiffener spacing is varied along both the chordwise and spanwise directions. For the resulting optimal location, the ideal stiffener length is identified. The effect of stiffener cross-section shape (T, I, hat, Z) on flutter velocity has also been investigated. The flutter velocities of the selected panel with two rectangular stiffeners in a cantilever configuration are estimated using the MSC NASTRAN software package. As the flow passes over the panel, deformation takes place, which further changes the flow structure over it. With increasing velocity, the deformation keeps increasing, but the stiffness of the system tries to damp the excitation and maintain equilibrium. Beyond a critical velocity, however, the system damping suddenly becomes ineffective and the panel loses its equilibrium. This critical velocity is estimated in NASTRAN using the PK method. The first 10 modal frequencies of a simple panel and a stiffened panel are estimated numerically and validated against the open literature. A grid independence study is also carried out, and the modal frequency values remain the same for element lengths of less than 20 mm. The current investigation concludes that spanwise stiffener placement is more effective than chordwise placement. The maximum flutter velocity achieved for chordwise placement is 204 m/s, while for a spanwise arrangement it is augmented to 963 m/s for stiffeners located at ¼ and ¾ of the chord from the panel edge (50% of the chord on either side of the mid-chord line). The flutter velocity is directly proportional to the stiffener cross-sectional area. A significant increment in flutter velocity from 218 m/s to 1024 m/s is observed for stiffener lengths varying from 50% to 60% of the span. A maximum flutter velocity above Mach 3 is achieved. It is also observed that for a stiffened panel, the full effect of the stiffener can be achieved only when the stiffener end is clamped. Stiffeners with a Z cross-section increased the flutter velocity from 142 m/s (panel with no stiffener) to 328 m/s, which is 2.3 times that of the simple panel.
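
A hedged post-processing sketch of how a flutter point can be located from PK-method output (modal damping versus velocity) is shown below; the velocity-damping pairs are illustrative, not the NASTRAN results of this study.

```python
# Locating the flutter onset as the velocity at which modal damping crosses zero;
# the sample arrays are illustrative placeholders, not the study's PK-method output.
import numpy as np

velocity = np.array([100., 150., 200., 250., 300.])      # m/s
damping  = np.array([-0.08, -0.05, -0.02, 0.01, 0.05])   # modal damping g

i = np.argmax(damping > 0.0)                              # first point with positive damping
v_flutter = np.interp(0.0, damping[i-1:i+1], velocity[i-1:i+1])
print(f"estimated flutter velocity: {v_flutter:.1f} m/s")
```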

Keywords: stiffener placement, stiffener cross-sectional area, stiffener length, stiffener cross-section shape

Procedia PDF Downloads 270
200 An Investigation into Enablers and Barriers of Reverse Technology Transfer

Authors: Nirmal Kundu, Chandan Bhar, Visveswaran Pandurangan

Abstract:

Technology is the most valued possession of a country or an organization. Economic development depends not on the stock of technology but on the capability to exploit it. Technology transfer is the main way for developing countries to gain access to state-of-the-art technology. Traditional technology transfer is a unidirectional phenomenon in which technology is transferred from developed to developing countries. But the wind is now changing direction. There is general agreement that a global shift of economic power is under way from west to east. As China and India make the transition from users to producers, and from producers to innovators, this has increasingly important implications for the economy, technology, and policy of global trade. As a result, reverse technology transfer has become a phenomenon and a field of study in technology management. The term "Reverse Technology Transfer" is not well defined. Initially, the concept of reverse technology transfer was associated with the phenomenon of "brain drain" from developing to developed countries. In a second phase, reverse technology transfer was associated with the transfer of knowledge and technology from subsidiaries to multinationals. The time has now come to extend the concept of reverse technology transfer to two different organizations or countries, related or unrelated by traditional technology transfer, where the transferor has essentially received the technology through the traditional mode of technology transfer. The objective of this paper is to study: 1) the present status of reverse technology transfer, 2) the factors that act as enablers and barriers of reverse technology transfer, and 3) how a reverse technology transfer strategy can be integrated into the technology policy of a country to give the country an economic boost. The research methodology used in this study is a combination of literature review, case studies, and key informant interviews. The literature review includes both published and unpublished sources. In the case studies, an attempt has been made to examine records of reverse technology transfer that have occurred in developing countries. For the key informant interviews, informal telephonic discussions were carried out with key executives of organizations (industry, universities, and research institutions) who are actively engaged in the process of technology transfer, traditional as well as reverse. Reverse technology transfer is possible only by creating technological capabilities. The following four enablers, coupled with active and aggressive government action, can help build the technology base needed to reach the goal of reverse technology transfer: 1) imitation to innovation, 2) reverse engineering, 3) a collaborative R&D approach, and 4) preventing reverse brain drain. The barriers that come in the way are a mindset of overdependence, over-subordination, and a parent-child (rather than adult) attitude. By exploiting these enablers and overcoming these barriers, developing countries like India and China can prove that going "reverse" is the best way to move forward and to re-establish themselves as leaders of the future world.

Keywords: barriers of reverse technology transfer, enablers of reverse technology transfer, knowledge transfer, reverse technology transfer, technology transfer

Procedia PDF Downloads 376
199 Experimental Investigation on the Effect of Prestress on the Dynamic Mechanical Properties of Conglomerate Based on 3D-SHPB System

Authors: Wei Jun, Liao Hualin, Wang Huajian, Chen Jingkai, Liang Hongjun, Liu Chuanfu

Abstract:

The Kuqa Piedmont is rich in oil and gas resources and has great development potential in the Tarim Basin, China. However, a huge, thick gravel layer has developed there, with high gravel content, wide distribution, and large variation in gravel size, leading to strong heterogeneity. As a result, the drill string is in a state of severe vibration and the drill bit wears seriously while drilling, which greatly reduces the rock-breaking efficiency, and the rock at the bottom hole is subjected to a complex load state combining impact and three-dimensional in-situ stress. The dynamic mechanical properties of conglomerate, the main component of the gravel layer, and their influencing factors are the basis for engineering design, efficient rock-breaking methods, and theoretical research. Limited by previous experimental techniques, few works on conglomerate have been published, and works under dynamic load are especially rare. On this basis, a 3D SHPB system, in which a three-dimensional prestress can be applied to simulate the in-situ stress characteristics, is adopted for dynamic testing of the conglomerate. The results show that the dynamic strength is obviously higher than the static strength: when the three-dimensional prestress is 0 and the loading strain rate is 81.25~228.42 s-1, the true triaxial equivalent strength is 167.17~199.87 MPa, and the dynamic-to-static strength growth factor is 1.61~1.92. The higher the impact velocity, the greater the loading strain rate, the higher the dynamic strength, and the greater the failure strain, all of which increase linearly. There is a critical prestress in the impact direction and in the direction perpendicular to it. In the impact direction, when the prestress is less than the critical value, the dynamic strength and the loading strain rate increase linearly; beyond it, the strength decreases slightly and the strain rate decreases rapidly. In the direction perpendicular to the impact load, the strength increases and the strain rate decreases linearly before the critical prestress, and the opposite occurs after it. The dynamic strength of the conglomerate can be properly reduced by reducing the amplitude of the impact load, so that the service life of rock-breaking tools can be prolonged while drilling in strata rich in gravel. The research has important reference significance for speed-increasing drilling technology and for theoretical research on drilling in gravel layers.
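
For reference, the standard SHPB (split Hopkinson pressure bar) three-wave data reduction, which converts the measured incident, reflected, and transmitted strain pulses into specimen stress and strain rate, can be sketched as follows; the bar properties and strain values are placeholders, not the test data.

```python
# Classic three-wave SHPB reduction; all numerical values are illustrative placeholders.
E_bar, A_bar, c_bar = 210e9, 7.85e-3, 5100.0   # bar modulus (Pa), area (m^2), wave speed (m/s)
A_spec, L_spec = 5.0e-3, 0.05                  # specimen cross-section (m^2) and length (m)

eps_i, eps_r, eps_t = 1.2e-3, -0.4e-3, 0.7e-3  # incident, reflected, transmitted strain pulses

stress = E_bar * A_bar / (2 * A_spec) * (eps_i + eps_r + eps_t)   # specimen stress, Pa
strain_rate = c_bar / L_spec * (eps_i - eps_r - eps_t)            # specimen strain rate, 1/s
print(f"stress = {stress/1e6:.1f} MPa, strain rate = {strain_rate:.1f} s^-1")
```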

Keywords: huge thick gravel layer, conglomerate, 3D SHPB, dynamic strength, deformation characteristics, prestress

Procedia PDF Downloads 165
198 The Flooding Management Strategy in Urban Areas: Reusing Public Facilities Land as Flood-Detention Space for Multi-Purpose

Authors: Hsiao-Ting Huang, Chang Hsueh-Sheng

Abstract:

Taiwan is an island country that is deeply affected by the monsoon. Under climate change, the frequency of extreme rainstorms brought by typhoons has increased since 2000. When an extreme rainstorm comes, it causes serious damage in Taiwan, especially in urban areas, which suffer from flooding; the government treats this as an urgent issue. In the past, urban land-use planning did not take flood detention into consideration. With the development of cities, impermeable surfaces have increased, and most people now live in urban areas. This means urban areas are highly vulnerable yet cannot cope with the surface runoff and the flooding. However, building detention ponds in the conventional hydraulic engineering way is not feasible in urban areas: land expropriation is the most expensive part of constructing a detention pond in the city, and the government cannot afford it. Therefore, the flooding management strategy in urban areas should use an existing resource, public facilities land. Flood-detention performance can be achieved by providing public facilities land with a detention function. As multi-use public facilities land, it also demonstrates the combination of land use and water agency concerns. To this end, this research generalizes the factors for multi-use of public facilities land as flood-detention space through a literature review. The factors can be divided into two categories: environmental factors and conditions of the public facilities. There are three environmental factors: terrain elevation, inundation potential, and distance from the drainage system. On the other hand, there are six factors for the conditions of public facilities, including area, building rate, the maximum available ratio, etc. Each factor is given a weight according to its characteristics for the land-use suitability analysis. This research selects the combination rules from logical combinations. After this process, the land can be classified into three suitability levels. The three suitability levels are then input into a physiographic inundation model to simulate and evaluate flood detention, respectively. This study tries to respond to an urgent issue in urban areas and establishes a model of multi-use of public facilities land as flood detention through its systematic research process. The results of this study indicate which combination of suitability levels is more efficacious. Besides, the model not only stands on the side of urban planners but also incorporates the point of view of the water agency. These findings may serve as a basis for land-use indicators and as decision-making references for the concerned government agencies.
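
A minimal sketch of the weighted land-use suitability scoring described above is given below; the factor scores, weights, and level thresholds are illustrative assumptions, not the values derived in the study.

```python
# Weighted overlay sketch: each candidate parcel gets a suitability index from
# normalized factor scores and weights, then is binned into three levels.
# All numbers are placeholders.
import numpy as np

# rows: candidate public-facility parcels; columns: normalized factor scores in [0, 1]
# (elevation, inundation potential, distance to drainage, area, building rate, available ratio)
scores = np.array([[0.8, 0.6, 0.9, 0.7, 0.3, 0.5],
                   [0.4, 0.9, 0.5, 0.6, 0.8, 0.7]])
weights = np.array([0.25, 0.20, 0.15, 0.15, 0.15, 0.10])   # must sum to 1

suitability = scores @ weights
levels = np.digitize(suitability, bins=[0.45, 0.65]) + 1   # 1 = low, 2 = moderate, 3 = high
print(suitability, levels)
```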

Keywords: flooding management strategy, land use suitability analysis, multi-use for public facilities land, physiographic inundation model

Procedia PDF Downloads 330
197 Safety Tolerance Zone for Driver-Vehicle-Environment Interactions under Challenging Conditions

Authors: Matjaž Šraml, Marko Renčelj, Tomaž Tollazzi, Chiara Gruden

Abstract:

Road safety is a worldwide issue with numerous and heterogeneous factors influencing it. On one side, the driver state, comprising distraction/inattention, fatigue, drowsiness, extreme emotions, and socio-cultural factors, highly affects road safety. On the other side, the vehicle state has an important role in mitigating (or not) the road risk. Finally, the road environment is still one of the main determinants of road safety, defining driving task complexity. At the same time, thanks to technological development, a lot of detailed data is easily available, creating opportunities for the detection of driver state, vehicle characteristics, and road conditions and, consequently, for the design of ad hoc interventions aimed at improving driver performance, increasing awareness, and mitigating road risks. This is the challenge faced by the i-DREAMS project. i-DREAMS, which stands for a smart Driver and Road Environment Assessment and Monitoring System, is a 3-year project funded by the European Union's Horizon 2020 research and innovation programme. It aims to set up a platform to define, develop, test, and validate a 'Safety Tolerance Zone' to prevent drivers from getting too close to the boundaries of unsafe operation by mitigating risks in real time and after the trip. After the definition and development of the Safety Tolerance Zone concept and its concretization in an advanced driver-assistance system (ADAS) platform, the system was first tested for 2 months in a driving simulator environment in 5 different countries. After that, naturalistic driving studies started for a 10-month period (comprising a 1-month pilot study, a 3-month baseline study, and 6 months of study implementing interventions). Currently, the project team has approved a common evaluation approach and is developing the assessment of the usage and outcomes of the i-DREAMS system, which is yielding positive insights. The i-DREAMS consortium consists of 13 partners: 7 engineering universities and research groups, 4 industry partners, and 2 partners closely linked to transport safety stakeholders (the European Transport Safety Council, ETSC, and POLIS cities and regions for transport innovation), covering 8 different countries altogether.

Keywords: advanced driver assistant systems, driving simulator, safety tolerance zone, traffic safety

Procedia PDF Downloads 43
196 Hybrid Model: An Integration of Machine Learning with Traditional Scorecards

Authors: Golnush Masghati-Amoli, Paul Chin

Abstract:

Over recent years, with the rapid increases in data availability and computing power, Machine Learning (ML) techniques have been called on in a range of different industries for their strong predictive capability. However, the use of Machine Learning in commercial banking has been limited due to a special challenge imposed by numerous regulations that require lenders to be able to explain their analytic models, not only to regulators but often to consumers. In other words, although Machine Learning techniques enable better prediction with a higher level of accuracy, they are, in comparison with other industries, adopted less frequently in commercial banking, especially for scoring purposes. This is due to the fact that Machine Learning techniques are often considered a black box and fail to provide information on why a certain risk score is given to a customer. In order to bridge this gap between the explainability and the performance of Machine Learning techniques, a Hybrid Model is developed at Dun and Bradstreet that focuses on blending Machine Learning algorithms with traditional approaches such as scorecards. The Hybrid Model maximizes the efficiency of traditional scorecards by merging their practical benefits, such as explainability and the ability to input domain knowledge, with the deep insights of Machine Learning techniques, which can uncover patterns that scorecard approaches cannot. First, through the development of Machine Learning models, engineered features, latent variables, and feature interactions that demonstrate high information value in the prediction of customer risk are identified. Then, these features are employed to introduce the observed non-linear relationships between the explanatory and dependent variables into traditional scorecards. Moreover, instead of directly computing the Weight of Evidence (WoE) from good and bad data points, the Hybrid Model tries to match the score distribution generated by a Machine Learning algorithm, which ends up providing an estimate of the WoE for each bin. This capability helps to build powerful scorecards with sparse cases, which cannot be achieved with traditional approaches. The proposed Hybrid Model is tested on different portfolios where a significant gap is observed between the performance of traditional scorecards and Machine Learning models. The results of the analysis show that the Hybrid Model can improve the performance of traditional scorecards by introducing non-linear relationships between explanatory and target variables from Machine Learning models into traditional scorecards. It is also observed that in some scenarios the Hybrid Model can be almost as predictive as the Machine Learning techniques while being as transparent as traditional scorecards. Therefore, it is concluded that, with the use of the Hybrid Model, Machine Learning algorithms can be used in the commercial banking industry without concern about the difficulties of explaining the models for regulatory purposes.
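
For orientation, a minimal sketch of the classic per-bin WoE calculation, the quantity that the Hybrid Model instead estimates by matching an ML score distribution, is shown below; the synthetic data frame and bin count are placeholders.

```python
# Classic per-bin Weight of Evidence (WoE) and Information Value (IV);
# the synthetic data below is a placeholder, not a real portfolio.
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
df = pd.DataFrame({"feature": rng.random(1000),
                   "bad": rng.binomial(1, 0.1, 1000)})    # 1 = default

df["bin"] = pd.qcut(df["feature"], q=5)
grp = df.groupby("bin", observed=True)["bad"].agg(bads="sum", total="count")
grp["goods"] = grp["total"] - grp["bads"]

dist_bad = grp["bads"] / grp["bads"].sum()
dist_good = grp["goods"] / grp["goods"].sum()
grp["woe"] = np.log(dist_good / dist_bad)                 # WoE per bin
grp["iv_contrib"] = (dist_good - dist_bad) * grp["woe"]
print(grp[["woe", "iv_contrib"]], "\nIV =", grp["iv_contrib"].sum())
```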

Keywords: machine learning algorithms, scorecard, commercial banking, consumer risk, feature engineering

Procedia PDF Downloads 111
195 Study on Adding Story and Seismic Strengthening of Old Masonry Buildings

Authors: Youlu Huang, Huanjun Jiang

Abstract:

A large number of old masonry buildings built in the last century still remain in cities. They present problems of poor safety, obsolescence, and non-habitability. In recent years, many old buildings have been reconstructed by renovating façades, strengthening, and adding floors. However, most projects only provide a solution to a single problem. It is difficult to comprehensively solve the problems of poor safety and lack of building functions. Therefore, a comprehensive functional renovation program was put forward: adding a reinforced concrete frame story at the bottom by integrally lifting the building and then strengthening the building. Based on field measurements and the YJK calculation software, the seismic performance of an actual three-story masonry structure in Shanghai was identified. The results show that the material strength of the masonry is low, and the bearing capacity of some masonry walls does not meet the code requirements. An elastoplastic time history analysis of the structure was carried out using SAP2000 software. The results show that under the 7-degree rare earthquake, the structure reaches the 'serious damage' performance level. Based on the code requirements for the stiffness ratio of the bottom frame (the lateral stiffness ratio of the transition masonry story to the frame story), the bottom frame story was designed. The integral lifting process of the masonry building is introduced based on many engineering examples. Two strengthening methods for the bottom frame structure, a steel-reinforced mesh mortar surface layer (SRMM) and base isolators, were proposed. Time history analyses of the two kinds of structures under the frequent earthquake, the fortification earthquake, and the rare earthquake were conducted with SAP2000 software. For the bottom frame structure, the results show that the seismic response of the masonry floors is significantly reduced after reinforcement by the two methods compared to the original masonry structure. Previous earthquake disasters indicated that the bottom frame is vulnerable to serious damage under a strong earthquake. The analysis results showed that under the rare earthquake, the inter-story displacement angle of the bottom frame story meets the 1/100 limit value of the seismic code. The inter-story drift of the masonry floors of the base-isolated structure under different levels of earthquakes is similar to that of the structure with SRMM, while the base-isolation scheme better protects the bottom frame. Both strengthening methods can significantly improve the seismic performance of the bottom frame structure.
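
The inter-story drift check mentioned above can be sketched in a few lines; the story height and displacement values are placeholders, not the SAP2000 results of the study.

```python
# Inter-story drift (displacement angle) check against the 1/100 code limit;
# all values are illustrative placeholders.
story_height_mm = 3600.0
disp_top_mm, disp_bottom_mm = 28.0, 4.0        # lateral displacements under the rare earthquake

drift_angle = (disp_top_mm - disp_bottom_mm) / story_height_mm
limit = 1.0 / 100.0
print(f"drift angle = 1/{1/drift_angle:.0f}, "
      f"{'OK' if drift_angle <= limit else 'EXCEEDS'} (limit 1/100)")
```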

Keywords: old buildings, adding story, seismic strengthening, seismic performance

Procedia PDF Downloads 105
194 The Influence of Absorptive Capacity on Process Innovation: An Exploratory Study in Seven Leading and Emerging Countries

Authors: Raphael M. Rettig, Tessa C. Flatten

Abstract:

This empirical study answers calls for research on Absorptive Capacity and process innovation. Due to the fourth industrial revolution, manufacturing companies face the biggest disruption of their production processes since the rise of advanced manufacturing technologies in the last century. Therefore, process innovation will become a critical task to master in the future for many manufacturing firms around the world. The general ability of organizations to acquire, assimilate, transform, and exploit external knowledge, known as Absorptive Capacity, has been proven to positively influence product innovation and is already conceptually associated with process innovation. The presented research provides empirical evidence for this influence. The findings are based on an empirical analysis of 732 companies from seven leading and emerging countries: Brazil, China, France, Germany, India, Japan, and the United States of America. The answers to the survey were collected in February and March 2018 and addressed senior- and top-level management with a focus on operations departments. The statistical analysis reveals the positive influence of Potential and Realized Absorptive Capacity on successful process innovation, taking the implementation of new digital manufacturing processes as an example. Potential Absorptive Capacity, covering the acquisition and assimilation capabilities of an organization, showed a significant positive influence (β = .304, p < .05) on digital manufacturing implementation success and therefore on process innovation. Realized Absorptive Capacity also proved to have a significant positive influence on process innovation (β = .461, p < .01). The presented study builds on prior conceptual work in the field of Absorptive Capacity and process innovation and contributes theoretically to ongoing research in two dimensions. First, the conceptually posited influence of Absorptive Capacity on process innovation is backed by empirical evidence in a broad international context. Second, since Absorptive Capacity has typically been measured with a focus on new product development, prior empirical research on Absorptive Capacity was tailored to the research and development departments of organizations; the results of this study highlight the importance of Absorptive Capacity as a capability in the mechanical engineering and operations departments of organizations. The findings give managers an indication of the importance of implementing new innovative processes into their production systems and of fostering the right mindset in employees to identify new external knowledge. Through the ability to transform and exploit external knowledge, firms can successfully innovate their own production processes, with a positive influence on firm performance and the competitive position of their organizations.

Keywords: absorptive capacity, digital manufacturing, dynamic capabilities, process innovation

Procedia PDF Downloads 124
193 Micromechanism of Ionization Effects on Metal/Gas Mixing Instability at Extreme Shock Compressing Conditions

Authors: Shenghong Huang, Weirong Wang, Xisheng Luo, Xinzhu Li, Xinwen Zhao

Abstract:

Understanding of material mixing induced by the Richtmyer-Meshkov instability (RMI) at extreme shock compressing conditions (high energy density environment: P >> 100 GPa, T >> 10000 K) is of great significance in engineering and science, such as inertial confinement fusion (ICF), supersonic combustion, etc. Turbulent mixing induced by RMI is a kind of complex fluid dynamics, which is closely related to hydrodynamic conditions, thermodynamic states, material physical properties such as compressibility, strength, surface tension, and viscosity, as well as the initial perturbation of the interface. For phenomena in ordinary thermodynamic conditions (low energy density environment), many investigations have been conducted and much progress has been reported, while for mixing in extreme thermodynamic conditions, the evolution may be very different due to ionization as well as large differences in material physical properties, which is full of scientific problems and academic interest. In this investigation, a first-principles-based molecular dynamics method is applied to study metal lithium and gaseous hydrogen (Li-H2) interface mixing in the micro/meso scale regime at different shock compressing loading speeds ranging from 3 km/s to 30 km/s. It is found that: 1) Different from low-speed shock compressing cases, in high-speed shock compressing (>9 km/s) cases, a strong acceleration of the metal/gas interface after strong shock compression is observed numerically, leading to a strong phase inversion and spike growth at a relatively larger linear rate. More specifically, the spike growth rate is observed to increase with shock loading speed, presenting a large discrepancy with available empirical RMI models; 2) Ionization occurs in the shock front zone in high-speed loading cases (>9 km/s). An additional local electric field, induced by the inhomogeneous diffusion of electrons and nuclei behind the shock front, is observed near the metal/gas interface, leading to a large acceleration of nuclei in this zone; 3) In conclusion, the work done by the additional electric field contributes a mechanism of RMI in the micro/meso scale regime at extreme shock compressing conditions, i.e., a Rayleigh-Taylor instability (RTI) is induced by the additional electric field during the RMI mixing process, and thus a larger linear growth rate of the interface spike results.

Keywords: ionization, micro/meso scale, material mixing, shock

Procedia PDF Downloads 203
192 Application of Thermoplastic Microbioreactor to the Single Cell Study of Budding Yeast to Decipher the Effect of 5-Hydroxymethylfurfural on Growth

Authors: Elif Gencturk, Ekin Yurdakul, Ahmet Y. Celik, Senol Mutlu, Kutlu O. Ulgen

Abstract:

Yeast cells are generally used as a model system for eukaryotes due to their complex genetic structure, rapid growth in optimum conditions, easy replication, and well-defined genetic system properties. Thus, yeast cells have increased the knowledge of principal pathways in humans. During fermentation, carbohydrates (hexoses and pentoses) degrade into some toxic by-products such as 5-hydroxymethylfurfural (5-HMF or HMF) and furfural. HMF influences the ethanol yield and ethanol productivity; it interferes with microbial growth and is considered a potent inhibitor of bioethanol production. In this study, single-cell yeast behavior under HMF application was monitored using a continuous-flow, single-phase microfluidic platform. The microfluidic device in operation is fabricated by hot embossing and thermo-compression techniques from cyclo-olefin polymer (COP). COP is a biocompatible, transparent, and rigid material, and it is suitable for observing the fluorescence of cells considering its low auto-fluorescence characteristic. The response of yeast cells was recorded through the Red Fluorescent Protein (RFP)-tagged Nop56 gene product, which is an essential, evolutionarily conserved nucleolar protein and a member of the box C/D snoRNP complexes. With the application of HMF, yeast cell proliferation continued, but HMF slowed down cell growth, and after HMF treatment the cell proliferation stopped. With the addition of fresh nutrient medium, the yeast cells recovered after 6 hours of HMF exposure. Thus, HMF application suppresses the normal functioning of the cell cycle but does not cause cells to die. The monitoring of the Nop56 expression phases of individual cells sheds light on the protein and ribosome synthesis cycles along with their link to growth. A further computational study revealed that the mechanisms underlying the inhibitory or inductive effects of HMF on growth are enriched in the functional categories of protein degradation, protein processing, DNA repair, and multidrug resistance. The present microfluidic device can successfully be used for studying the effects of inhibitory agents on growth by single-cell tracking, thus capturing cell-to-cell variations. By metabolic engineering techniques, engineered strains can be developed, and the metabolic network of the microorganism can thus be manipulated such that chemical overproduction of the target metabolite is achieved along with the maximum growth/biomass yield.

Keywords: COP, HMF, ribosome biogenesis, thermoplastic microbioreactor, yeast

Procedia PDF Downloads 139
191 Study of Interplanetary Transfer Trajectories via Vicinity of Libration Points

Authors: Zhe Xu, Jian Li, Lvping Li, Zezheng Dong

Abstract:

This work studies an optimized transfer strategy connecting Earth and Mars via the vicinity of libration points, which have been playing an increasingly important role in trajectory design for deep space missions and can be used as an effective alternative solution for an Earth-Mars direct transfer mission in some unusual cases. The vicinities of the libration points of sun-planet systems are becoming potential gateways for future interplanetary transfer missions. By adding fuel to cargo spaceships located at such spaceports, the interplanetary round-trip exploration shuttle mission of such a system facility can also become a reusable transportation system. In addition, in some cases, when the spacecraft (S/C) cruises along invariant manifolds, it can also save a large amount of fuel. Therefore, it is necessary to look for efficient transfer strategies using invariant manifolds about the libration points. It was found that Earth L1/L2 Halo/Lyapunov orbits and Mars L2/L1 Halo/Lyapunov orbits can be connected with reasonable fuel consumption and flight duration with an appropriate design. In the paper, the halo hopping method and the coplanar circular method are briefly introduced. The former uses differential corrections to systematically generate low-ΔV transfer trajectories between interplanetary manifolds, while the latter considers escape and capture trajectories to and from Halo orbits using impulsive maneuvers at the periapsis of the manifolds about the libration points. Designs of the transfer strategies for the two methods are then presented, and a comparative performance analysis of the interplanetary transfer strategies of the two methods is carried out. The comparison of strategies is based on two main criteria: the total fuel consumption required to perform the transfer and the time of flight, as mentioned above. The numerical results show that the coplanar circular method has certain advantages in cost or duration. Finally, an optimized transfer strategy with engineering constraints is found and examined as an effective alternative solution for a given direct transfer mission. This paper investigated the main methods and presented an optimized solution for interplanetary transfer via the vicinity of libration points. Although most Earth-Mars mission planners prefer to build a direct transfer strategy due to its advantage of a relatively short time of flight, the strategies given in the paper can still be regarded as effective alternative solutions, given the advantages mentioned above and a longer departure window than the direct transfer.
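
A minimal sketch of the circular restricted three-body problem equations of motion in the rotating frame, the dynamical model behind the Halo/Lyapunov orbits and invariant manifolds discussed above, is given below; the mass parameter and initial state are illustrative placeholders.

```python
# CR3BP equations of motion in the rotating, nondimensional frame.
# mu and the initial state are placeholders, not the paper's design values.
import numpy as np
from scipy.integrate import solve_ivp

mu = 3.003e-6   # approximate Sun-Earth mass parameter

def cr3bp(t, s):
    x, y, z, vx, vy, vz = s
    r1 = np.sqrt((x + mu)**2 + y**2 + z**2)        # distance to the larger primary
    r2 = np.sqrt((x - 1 + mu)**2 + y**2 + z**2)    # distance to the smaller primary
    ax = 2*vy + x - (1 - mu)*(x + mu)/r1**3 - mu*(x - 1 + mu)/r2**3
    ay = -2*vx + y - (1 - mu)*y/r1**3 - mu*y/r2**3
    az = -(1 - mu)*z/r1**3 - mu*z/r2**3
    return [vx, vy, vz, ax, ay, az]

s0 = [0.99, 0.0, 0.002, 0.0, 0.01, 0.0]            # nondimensional initial state (placeholder)
sol = solve_ivp(cr3bp, (0.0, 3.0), s0, rtol=1e-10, atol=1e-12)
print(sol.y[:3, -1])
```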

Keywords: circular restricted three-body problem, halo/Lyapunov orbit, invariant manifolds, libration points

Procedia PDF Downloads 219
190 R Statistical Software Applied in Reliability Analysis: Case Study of Diesel Generator Fans

Authors: Jelena Vucicevic

Abstract:

Reliability analysis represents a very important task in different areas of work. In any industry, it is crucial for maintenance, efficiency, safety, and monetary costs. There are ways to calculate reliability, unreliability, failure density, and failure rate. This paper introduces another way of calculating reliability, by using the R statistical software. R is a free software environment for statistical computing and graphics. It compiles and runs on a wide variety of UNIX platforms, Windows, and MacOS. The R programming environment is a widely used open-source system for statistical analysis and statistical programming. It includes thousands of functions for the implementation of both standard and new statistical methods. R does not limit the user only to operations related to these functions. The program has many benefits over other similar programs: it is free and, as open source, constantly updated; it has a built-in help system; and the R language is easy to extend with user-written functions. The significance of the work is the calculation of time to failure, or reliability, in a new way, using statistics. Another advantage of this calculation is that there is no need for technical details, and it can be applied to any part for which the time to failure must be known in order to carry out appropriate maintenance, but also to maximize usage and minimize costs. In this case, calculations have been made on diesel generator fans, but the same principle can be applied to any other part. The data for this paper came from a field engineering study of the time to failure of diesel generator fans. The ultimate goal was to decide whether or not to replace the working fans with a higher-quality fan to prevent future failures. Seventy generators were studied. For each one, the number of hours of running time from its first being put into service until fan failure or until the end of the study (whichever came first) was recorded. The dataset consists of two variables: hours and status. Hours records the running time of each fan, and status records the event: 1 for failed, 0 for censored. Censored data represent cases that could not be tracked to their end, so the fan might still fail or survive. Obtaining the result using R was easy and quick. The program takes censored data into consideration and includes them in the results, which is not so easy in a hand calculation. For the purpose of the paper, results from the R program have been compared to hand calculations in two different cases: censored data taken as failures and censored data taken as successes. In all three cases, the results are significantly different. If the user decides to use R for further calculations, it will give more precise results when working with censored data than the hand calculation.
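
Although the study itself uses R, the way censored observations enter a reliability estimate can be illustrated with a short Kaplan-Meier product-limit sketch in Python; the hours and status values below are placeholders, not the fan data.

```python
# Kaplan-Meier product-limit reliability estimate with censored observations;
# the data arrays are placeholders (1 = failed, 0 = censored).
import numpy as np

hours  = np.array([450., 460., 1150., 1150., 1560., 1600., 1660., 2100.])
status = np.array([1,    0,    1,     0,     0,     1,     0,     1])

order = np.lexsort((-status, hours))     # sort by time; failures rank before censorings at ties
hours, status = hours[order], status[order]

n_at_risk = len(hours)
reliability = 1.0
for t, failed in zip(hours, status):
    if failed:
        reliability *= (1.0 - 1.0 / n_at_risk)   # estimate steps down only at failures
        print(f"R({t:6.0f} h) = {reliability:.3f}")
    n_at_risk -= 1                               # censored units also leave the risk set
```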

Keywords: censored data, R statistical software, reliability analysis, time to failure

Procedia PDF Downloads 379
189 Concrete Compressive Strengths of Major Existing Buildings in Kuwait

Authors: Zafer Sakka, Husain Al-Khaiat

Abstract:

Due to social and economic considerations, owners all over the world desire to keep and use existing structures, including aging ones. However, these structures, especially valuable ones, need accurate condition assessment and proper safety evaluation. More than half of the budget spent on construction activities in developed countries is related to the repair and maintenance of such reinforced concrete (R/C) structures. Also, periodic evaluation and assessment of relatively old concrete structures are vital and imperative. If the evaluation and assessment of the structural components of a particular aging R/C structure reveal that repairs are essential for these components, these repairs should not be delayed. Delaying the repairs carries the risk of losing the serviceability of the whole structure and/or causing total failure and collapse of the structure. In addition, if repairs are delayed, the cost of maintenance will skyrocket as well. It can also be concluded from the above that the assessment of existing structures needs to receive more consideration and thought from structural engineering societies and professionals. Ten major existing structures in Kuwait City that were constructed in the 1970s were assessed for structural reliability and integrity. Numerous concrete samples were extracted from the structural systems of the investigated buildings. This paper presents the results of the compressive strength tests that were conducted on the extracted cores. The results are compared between the buildings' column and beam elements and against the design strengths. The collected data were statistically analyzed. The average compressive strengths of the concrete cores extracted from the ten buildings showed a large variation. The lowest average compressive strength for one of the buildings was 158 kg/cm². This building was deemed unsafe and economically unfeasible to repair; accordingly, it was demolished. The other buildings had average compressive strengths in the range of 215-317 kg/cm². Poor construction practices were the main cause of the low strengths. Although most of the drawings and information for these buildings were lost during the invasion of Kuwait in 1990, the information gathered indicated that the design strengths of the beams and columns for most of these buildings were in the range of 280-400 kg/cm². Following the study, measures were taken to rehabilitate the buildings for safety. The mean compressive strength of all cores taken from the beams and columns of the ten buildings was 256.7 kg/cm², with values ranging from 139 to 394 kg/cm². For columns, the mean was 250.4 kg/cm², and the values ranged from 137 to 394 kg/cm². The mean compressive strength of the beams was higher than that of the columns: 285.9 kg/cm², with a range of 181 to 383 kg/cm². In addition to the concrete cores that were extracted from the ten buildings, the 28-day compressive strengths of more than 24,660 concrete cubes were collected from a major ready-mixed concrete supplier in Kuwait. The data represented four different grades of ready-mix concrete (250, 300, 350, and 400 kg/cm²) manufactured between 2003 and 2018. The average concrete compressive strengths for the different concrete grades (250, 300, 350, and 400 kg/cm²) were found to be 318, 382, 453, and 504 kg/cm², respectively, and the coefficients of variation were found to be 0.138, 0.140, 0.157, and 0.131, respectively.
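
The descriptive statistics reported above (mean strength and coefficient of variation) follow directly from the test data; a small sketch with placeholder strength values is shown below.

```python
# Mean compressive strength and coefficient of variation from a set of test results;
# the strength values are placeholders, not the measured core or cube data.
import numpy as np

strengths = np.array([250., 310., 280., 195., 340., 265., 220., 300.])  # kg/cm^2

mean = strengths.mean()
cov = strengths.std(ddof=1) / mean      # coefficient of variation (sample std / mean)
print(f"mean = {mean:.1f} kg/cm^2, CoV = {cov:.3f}")
```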

Keywords: concrete compressive strength, concrete structures, existing building, statistical analysis

Procedia PDF Downloads 99
188 Developing Gifted Students’ STEM Career Interest

Authors: Wing Mui Winnie So, Tian Luo, Zeyu Han

Abstract:

To fully explore and develop the potential of gifted students systematically and strategically by providing them with opportunities to receive education at appropriate levels, schools in Hong Kong are encouraged to adopt the "Three-Tier Implementation Model" to plan and implement school-based gifted education, with Level Three referring to the provision of learning opportunities for exceptionally gifted students in the form of specialist training outside the school setting by post-secondary institutions, non-government organisations, professional bodies, and technology enterprises. Due to the growing concern worldwide about low interest among students in pursuing STEM (Science, Technology, Engineering, and Mathematics) careers, cultivating and boosting STEM career interest has been an emerging research focus. Although numerous studies have explored its critical contributors, little research has examined the effectiveness of comprehensive interventions such as "studying with a STEM professional". This study aims to examine the effect on gifted students' career interest of their participation in an off-school support programme designed and supervised by a team of STEM educators and STEM professionals from a university. Gifted students were provided with opportunities and tasks to experience STEM career topics that are not included in the school syllabus and to experience how to think and work like a STEM professional in their learning. Participants were 40 primary school students joining the intervention programme outside the normal school setting. Research methods included the STEM career interest survey and drawing tasks supplemented with writing before and after the programme, as well as interviews before the end of the programme. The semi-structured interviews focused on students' views regarding STEM professionals; what it is like to learn with a STEM professional; what it is like to work and think like a STEM professional; and students' STEM identity and career interest. The changes in gifted students' STEM career interest and its well-recognised significant contributors, for example, STEM stereotypes, self-efficacy for STEM activities, and STEM outcome expectation, were collectively examined from the pre- and post-surveys using a t-test. Thematic analysis was conducted on the interview records to explore how the studying-with-a-STEM-professional intervention can help students understand STEM careers, build a STEM identity, and learn how to think and work like a STEM professional. The results indicated a significant difference in STEM career interest before and after the intervention. The influencing mechanism was also identified from the measurement of the related contributors and the analysis of drawings and interviews. The potential of off-school support programmes supervised by STEM educators and professionals to develop gifted students' STEM career interest should be further unleashed in future research and practice.
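
The pre/post comparison with a t-test can be sketched as follows; the paired score arrays are placeholders, not the survey data.

```python
# Paired (pre/post) t-test on STEM career interest scores; values are placeholders.
import numpy as np
from scipy.stats import ttest_rel

pre  = np.array([3.1, 2.8, 3.5, 2.9, 3.2, 3.0, 2.7, 3.4])  # interest before the programme
post = np.array([3.6, 3.1, 3.9, 3.3, 3.5, 3.4, 3.0, 3.8])  # same students, after the programme

t, p = ttest_rel(post, pre)
print(f"t = {t:.2f}, p = {p:.4f}")
```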

Keywords: gifted students, STEM career, STEM education, STEM professionals

Procedia PDF Downloads 51
187 Implicit U-Net Enhanced Fourier Neural Operator for Long-Term Dynamics Prediction in Turbulence

Authors: Zhijie Li, Wenhui Peng, Zelong Yuan, Jianchun Wang

Abstract:

Turbulence is a complex phenomenon that plays a crucial role in various fields, such as engineering, atmospheric science, and fluid dynamics. Predicting and understanding its behavior over long time scales have been challenging tasks. Traditional methods, such as large-eddy simulation (LES), have provided valuable insights but are computationally expensive. In the past few years, machine learning methods have experienced rapid development, leading to significant improvements in computational speed. However, ensuring stable and accurate long-term predictions remains a challenging task for these methods. In this study, we introduce the implicit U-Net enhanced Fourier neural operator (IU-FNO) as a solution for stable and efficient long-term predictions of the nonlinear dynamics in three-dimensional (3D) turbulence. The IU-FNO model combines implicit recurrent Fourier layers to deepen the network and incorporates the U-Net architecture to accurately capture small-scale flow structures. We evaluate the performance of the IU-FNO model through extensive large-eddy simulations of three types of 3D turbulence: forced homogeneous isotropic turbulence (HIT), a temporally evolving turbulent mixing layer, and decaying homogeneous isotropic turbulence. The results demonstrate that the IU-FNO model outperforms other FNO-based models, including the vanilla FNO, the implicit FNO (IFNO), and the U-Net enhanced FNO (U-FNO), as well as the dynamic Smagorinsky model (DSM), in predicting various turbulence statistics. Specifically, the IU-FNO model exhibits improved accuracy in predicting the velocity spectrum, the probability density functions (PDFs) of vorticity and velocity increments, and the instantaneous spatial structures of the flow field. Furthermore, the IU-FNO model addresses the stability issues encountered in long-term predictions, which were limitations of previous FNO models. In addition to its superior performance, the IU-FNO model offers faster computational speed compared to traditional large-eddy simulations using the DSM model. It also demonstrates generalization capabilities to higher Taylor-Reynolds numbers and unseen flow regimes, such as decaying turbulence. Overall, the IU-FNO model presents a promising approach for long-term dynamics prediction in 3D turbulence, providing improved accuracy, stability, and computational efficiency compared to existing methods.
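
A minimal sketch of a single Fourier layer (spectral convolution plus a pointwise path), the building block that the IU-FNO applies recurrently, is shown below; this 1D toy layer only illustrates the idea and is not the authors' 3D implementation.

```python
# Toy 1D Fourier layer: FFT, learned weights on the lowest modes, inverse FFT,
# plus a pointwise linear path. A simplified illustration, not the IU-FNO itself.
import torch
import torch.nn as nn

class SpectralConv1d(nn.Module):
    def __init__(self, channels, modes):
        super().__init__()
        self.modes = modes
        scale = 1.0 / channels
        self.weight = nn.Parameter(
            scale * torch.randn(channels, channels, modes, dtype=torch.cfloat))

    def forward(self, x):                       # x: (batch, channels, grid)
        x_ft = torch.fft.rfft(x)                # transform to Fourier space
        out_ft = torch.zeros_like(x_ft)
        out_ft[:, :, :self.modes] = torch.einsum(
            "bix,iox->box", x_ft[:, :, :self.modes], self.weight)  # keep low modes only
        return torch.fft.irfft(out_ft, n=x.size(-1))

class FourierLayer(nn.Module):
    def __init__(self, channels=32, modes=12):
        super().__init__()
        self.spectral = SpectralConv1d(channels, modes)
        self.pointwise = nn.Conv1d(channels, channels, kernel_size=1)

    def forward(self, x):
        return torch.relu(self.spectral(x) + self.pointwise(x))

layer = FourierLayer()
u = torch.randn(4, 32, 64)                      # (batch, channels, grid points)
print(layer(u).shape)
```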

Keywords: data-driven, Fourier neural operator, large eddy simulation, fluid dynamics

Procedia PDF Downloads 49
186 Numerical Simulation of Waves Interaction with a Free Floating Body by MPS Method

Authors: Guoyu Wang, Meilian Zhang, Chunhui LI, Bing Ren

Abstract:

In recent decades, a variety of floating structures have played a crucial role in ocean and marine engineering, such as ships, offshore platforms, floating breakwaters, fish farms, floating airports, etc. It is common for floating structures to suffer loadings under waves, and the responses of structures mounted in marine environments are closely related to the wave impacts. The interaction between surface waves and floating structures is one of the important issues in ship or marine structure design for increasing performance and efficiency. With the progress of computational fluid dynamics, a number of numerical models based on the Navier-Stokes (NS) equations in the time domain have been developed to explore the above problem, such as the finite difference method or the finite volume method. Those traditional numerical simulation techniques for moving bodies are grid-based, and they may encounter difficulties when treating large free-surface deformations and moving boundaries. In these models, the moving structures in a Lagrangian formulation need to be appropriately described on grids, and special treatment of the moving boundary is inevitable. Moreover, in mesh-based models, the movement of the grid near the structure or the communication between the moving Lagrangian structure and the Eulerian meshes increases the algorithmic complexity. Fortunately, these challenges can be avoided by meshless particle methods. In the present study, a moving particle semi-implicit (MPS) model is explored for the numerical simulation of fluid-structure interaction with surface flows, especially for the coupling of fluid and a moving rigid body. An equivalent momentum transfer method is proposed and derived for the coupling of the fluid and the rigid moving body. The structure is discretized into a group of solid particles, which are treated as fluid particles involved in solving the NS equations together with the surrounding fluid particles. Momentum conservation is ensured by the transfer from those fluid particles to the corresponding solid particles. Then, the positions of the solid particles are updated to keep the initial shape of the structure. Using the proposed method, the motions of a free-floating body in regular waves are numerically studied. The wave surface elevation and the dynamic response of the floating body are presented. Good agreement is found when the numerical results, such as the sway, heave, and roll of the floating body, are compared with experimental and other numerical data. It is demonstrated that the presented MPS model is effective for the numerical simulation of fluid-structure interaction.
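
A hedged 2D sketch of the rigid-body reduction behind the momentum transfer described above is given below: solid particles advanced as fluid are reduced to one translation and one rotation that conserve momentum, from which consistent rigid velocities are reconstructed. The arrays are placeholders and this is not the authors' implementation.

```python
# 2D rigid-body reduction of solid-particle velocities after the fluid step;
# placeholder data, illustrating the momentum-conserving idea only.
import numpy as np

def rigid_body_velocities(pos, vel, mass):
    """Reduce per-particle states to a rigid translation v_cm and rotation omega,
    then return the consistent rigid velocities v_cm + omega x r."""
    m = mass.sum()
    x_cm = (mass[:, None] * pos).sum(axis=0) / m
    v_cm = (mass[:, None] * vel).sum(axis=0) / m             # conserves linear momentum
    r = pos - x_cm
    dv = vel - v_cm
    inertia = (mass * (r**2).sum(axis=1)).sum()
    omega = (mass * (r[:, 0] * dv[:, 1] - r[:, 1] * dv[:, 0])).sum() / inertia  # angular momentum
    v_rigid = v_cm + omega * np.stack([-r[:, 1], r[:, 0]], axis=1)
    return v_cm, omega, v_rigid

pos = np.array([[0.0, 0.0], [1.0, 0.0], [1.0, 1.0], [0.0, 1.0]])   # solid particles (placeholder)
vel = np.array([[0.1, 0.0], [0.1, 0.2], [0.0, 0.2], [0.0, 0.0]])   # velocities after fluid step
mass = np.ones(4)

v_cm, omega, v_rigid = rigid_body_velocities(pos, vel, mass)
print(v_cm, omega, v_rigid)
```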

Keywords: floating body, fluid structure interaction, MPS, particle method, waves

Procedia PDF Downloads 49
185 Mechanical and Material Characterization on the High Nitrogen Supersaturated Tool Steels for Die-Technology

Authors: Tatsuhiko Aizawa, Hiroshi Morita

Abstract:

Tool steels such as SKD11 and SKH51 have been utilized as punch and die substrates for cold stamping, forging, and fine blanking processes. Heat-treated SKD11 punches with a hardness of 700 HV worked well in the stamping of SPCC, normal steel plates, and non-ferrous alloys such as brass sheet. However, they suffered severe damage in the fine blanking of holes smaller than 1.5 mm in diameter. Under the high aspect ratio of punch length to diameter, elastoplastic buckling of slender punches occurred on the production line, and the heat-treated punches were at risk of chipping at their edges. To be free from such damage, the blanking punch must have sufficient rigidity and strength at the same time. In the present paper, a small-hole blanking punch with a dual-toughness structure is proposed as a solution to this engineering issue in production. A low-temperature plasma nitriding process was utilized to form a thick nitrogen-supersaturated layer in the original SKD11 punch. Through plasma nitriding at 673 K for 14.4 ks, a nitrogen-supersaturated layer with a thickness of 50 μm and without nitride precipitates was formed as a high nitrogen steel (HNS) layer surrounding the original SKD11 punch. In this two-zone structured SKD11 punch, the surface hardness increased from 700 HV for the heat-treated SKD11 to 1400 HV. This outer high nitrogen SKD11 (HN-SKD11) layer had a homogeneous nitrogen solute depth profile, with a nitrogen solute content plateau of 4 mass% down to the border between the outer HN-SKD11 layer and the original SKD11 matrix. When stamping 1 mm thick brass sheet with this dually toughened SKD11 punch, the punch life was extended from 500 K shots to 10,000 K shots, providing a much more stable production line for brass American snaps. Furthermore, with the aid of a masking technique, the 50 μm thick side-surface layer of the punch was modified by this high nitrogen supersaturation process into a stripe structure in which un-nitrided SKD11 and HN-SKD11 layers were alternately aligned from the punch head to the punch bottom. This flexible structuring promoted the overall rigidity and toughness of a punch with an extremely small diameter.

Keywords: high nitrogen supersaturation, semi-dry cold stamping, solid solution hardening, tool steel dies, low temperature nitriding, dual toughness structure, extremely small diameter punch

Procedia PDF Downloads 68
184 A Cross-Sectional Study of Knowledge and Attitudes among College Students in a South Indian City about Intimate Partner Violence

Authors: Krithika Lakshmi Sathiya Moorthy

Abstract:

Introduction: Young people's attitudes towards intimate partner violence (IPV) are likely to influence whether they perpetrate or abstain from IPV in the future. We aimed to assess the knowledge and attitudes of college students in a south Indian city regarding IPV, its associated factors, and redressal mechanisms. Methods: A convenience sample of 247 students, pursuing medicine and engineering, participated in this analytical cross-sectional study. They responded to a self-administered questionnaire developed and pretested for this study. The questionnaire comprises statements from a third person's perspective and vignettes to reduce social desirability bias. Clearance was obtained from the Institute Ethical Committee of Velammal Medical College Hospital and Research Institute, Madurai, India. Data were entered in EpiData Entry v3.1, Odense, Denmark, and analysed using SPSS v20.0. Results: Among 247 students, 116 (47%) were males and 59 (24.9%) hailed from rural areas. About 18% (43) of students believed that IPV was a problem only among females. Almost half of the students had witnessed IPV: at home between their parents (9.7%), among other family members (13.4%), in their neighbourhood (13%), or in public places (15%). Only 118 (47.8%) were aware that a law is in place in India to address IPV. The perceived risk factors for IPV were an alcoholic spouse (78.9%), low-income families (53.8%), personality traits (52.2%), and the dowry system (51%). A sizeable proportion of students (38.4%) believed that some amount of physical violence was allowable in a marital relationship, while 57.6% even considered IPV an expression of love. Males, as compared to females, were more in agreement with negative gender stereotypes such as that a husband can 'threaten his wife to ensure the welfare of the family' (55% vs. 34%, p < 0.001), 'spy on his wife to check fidelity' (41% vs. 27%, p < 0.001), or 'financially deprive a housewife to punish her' (13% vs. 3.8%, p = 0.001), and more often agreed with the statement that it is the 'duty of the wife to comply with demands for sex from the husband' (9.5% vs. 4.6%, p = 0.3). About 32% of males and 25.6% of females foresaw themselves as perpetrators of IPV in the future. Conclusion: Knowledge about IPV and its associated risk factors among the study population was satisfactory. However, there was widespread acceptance of negative societal gender stereotypes, more so among males, and some degree of IPV was considered acceptable between married couples. The study advocates the need to halt the propagation of negative gender stereotypes in impressionable young minds and the necessity of spreading awareness that no degree of IPV is acceptable. This knowledge is also required to plan the content and choose the appropriate media to effectively communicate awareness about IPV among young persons.
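
For illustration only, the male-versus-female comparisons of proportions reported above (e.g., 55% vs. 34% agreement) can be checked with a chi-square test; the sketch below uses SciPy and counts reconstructed approximately from the reported percentages, and is not the authors' SPSS workflow.

```python
# Illustrative chi-square test comparing the share of males vs. females agreeing with
# one stereotype statement.  Counts are rounded from the reported 55% of 116 males and
# 34% of 131 females; this reconstruction is an assumption for demonstration only.
from scipy.stats import chi2_contingency

males_agree, males_total = 64, 116        # ~55%
females_agree, females_total = 45, 131    # ~34%
table = [[males_agree, males_total - males_agree],
         [females_agree, females_total - females_agree]]
chi2, p, dof, _ = chi2_contingency(table)
print(f"chi2 = {chi2:.2f}, p = {p:.4f}")  # a small p supports a significant sex difference
```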

Keywords: attitude, India, intimate partner violence, knowledge, students

Procedia PDF Downloads 199
183 Learning Instructional Managements between the Problem-Based Learning and STEM Education Methods for Enhancing Students' Learning Achievements and Their Science Attitudes toward Physics at the 12th Grade Level

Authors: Achirawatt Tungsombatsanti, Toansakul Santiboon, Kamon Ponkham

Abstract:

The STEM education strategy aims to prepare an interdisciplinary and applied approach to the instruction of science, technology, engineering, and mathematics in an integrated way, and here it is compared with the Problem-Based Learning (PBL) method for enhancing students' science skills at Borabu School, with a sample of 80 students in two classes at the 12th grade level, assessed on their learning achievements on the electromagnetism topic. The study used two instructional model groups: a 40-student experimental class prepared and inducted with the STEM instructional design, and a 40-student control class taught with PBL, in which students identify what they already know, what they need to know, and how and where to access new information that may lead to the resolution of the problem. Perceptions of the learning environment were obtained using the 35-item Physics Laboratory Environment Inventory (PLEI). Students' attitudes toward physics were assessed with the Test of Physics-Related Attitudes (TOPRA), which applies scaling to measure attitudes objectively. Pretest and posttest comparisons were used to assess students' learning achievements under each instructional model separately. The findings revealed that the efficiencies of the PBL and STEM designs, based on the set criteria, are higher than the 80/80 standard level. Students' learning achievements in the control (PBL) and experimental (STEM) physics class groups differed significantly between groups at the .05 level. Comparisons of the average mean scores of students' responses to their instructional activities show higher scores for the STEM education method than for the PBL model. Regarding associations between students' perceptions of their physics classes and their attitudes toward physics, the predictive efficiency R² values indicate that 77% and 83% of the variance in students' attitudes for the PLEI and the TOPRA were attributable to their perceptions of the PBL and STEM instructional design classes, respectively. An important implication of these findings is that the STEM instructional design contributed more to students' understanding of scientific concepts, attitudes, and skills than PBL teaching. Overall, statistically significant differences in students' learning achievements were found between the pre- and post-assessments for both instructional models.
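
As a minimal illustration of the reported predictive efficiency, the sketch below computes an R² value from a simple regression of attitude scores on perception scores using synthetic data; it is an assumed reconstruction of the idea, not the authors' analysis.

```python
# Minimal sketch (assumed, not the authors' analysis): the predictive efficiency R² is
# the share of variance in attitude scores (TOPRA) explained by a regression on
# classroom perception scores (PLEI); the data here are synthetic.
import numpy as np

rng = np.random.default_rng(0)
plei = rng.normal(3.5, 0.5, 40)                      # perception scores for one class
topra = 0.9 * plei + rng.normal(0.0, 0.2, 40)        # attitude scores
slope, intercept = np.polyfit(plei, topra, 1)
pred = slope * plei + intercept
r2 = 1 - np.sum((topra - pred) ** 2) / np.sum((topra - np.mean(topra)) ** 2)
print(f"R^2 = {r2:.2f} -> {100 * r2:.0f}% of attitude variance explained by perceptions")
```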

Keywords: learning instructional managements, problem-based learning, STEM education, method, enhancement, students learning achievements, science attitude, physics classes

Procedia PDF Downloads 208
182 Finite Element Analysis of the Drive Shaft and Jacking Frame Interaction in Micro-Tunneling Method: Case Study of Tehran Sewerage

Authors: B. Mohammadi, A. Riazati, P. Soltan Sanjari, S. Azimbeik

Abstract:

The ever-increasing civic demands on the one hand, and the urban constraints on establishing new infrastructure on the other, force engineering committees to apply non-conflicting methods in order to optimize the results. One of these optimized procedures for establishing main sewerage networks is the pipe jacking and micro-tunneling method. The raw data and research are based on the slurry micro-tunneling project of the Tehran main sewerage network executed by the KAYSON Co. The 4985-meter route of this project, located near Azadi Square and some of the most vital arteries of Tehran, has currently reached 45% physical progress. The boring machine is made by Herrenknecht, and the diameters of the concrete-polymer pipes used are 1600 and 1800 millimeters. Placing and excavating several shafts in the ground and boring the tunnel directly between the axes of those shafts is one of the requirements of micro-tunneling. The layout of the shafts must take into account the hydraulic circumstances, civic conditions, site geography, traffic constraints, etc. The profile length has to be converted into many short segment lines, so the angles generated between the segments are located at the manhole (shaft) centers. Each segment line between two consecutive drive and reception shafts defines the jack location, driving angle, and path alignment; thus, the variety of angles leads to a variety of jack positions in the shaft. The jacking frame fixing conditions and the associated dynamic load direction produce various patterns of stress and strain distribution and create fatigue in the shaft wall and the soil surrounding the shaft. This pattern diversification deforms the shaft wall and causes unbalanced subsidence and alteration of the pipe-jacking stress contours. This research is based on experiments from Tehran's west sewerage plan and on numerical analysis of the interaction between the soil around the shaft, the shaft walls, and the jacking frame direction; finally, the suitable or unsuitable locations for the pipe-jacking shafts are determined.
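
The geometric step of converting the profile into segment lines and reading off the change of driving angle at each shaft centre can be sketched as below; the shaft coordinates are hypothetical and the sketch only illustrates the idea, it is not part of the project's analysis.

```python
# Illustrative sketch (assumed coordinates, not from the project files): compute the
# driving azimuth of each drive between consecutive shaft centres and the deflection
# angle at each intermediate shaft, which governs how the jacking frame is positioned.
import math

shafts = [(0.0, 0.0), (250.0, 40.0), (480.0, 30.0), (700.0, 120.0)]  # hypothetical x, y (m)

def azimuth(p, q):
    return math.degrees(math.atan2(q[1] - p[1], q[0] - p[0]))

drives = list(zip(shafts[:-1], shafts[1:]))
for i, ((p, q), (q2, r)) in enumerate(zip(drives[:-1], drives[1:]), start=1):
    a_in, a_out = azimuth(p, q), azimuth(q2, r)
    deflection = (a_out - a_in + 180.0) % 360.0 - 180.0   # signed change of driving angle
    print(f"shaft {i}: incoming azimuth {a_in:.1f} deg, outgoing {a_out:.1f} deg, "
          f"deflection {deflection:+.1f} deg")
```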

Keywords: underground structure, micro-tunneling, fatigue analysis, dynamic soil-structure interaction, underground water, finite element analysis

Procedia PDF Downloads 297
181 Integrated Management System Applied in Dismantling and Waste Management of the Primary Cooling System from the VVR-S Nuclear Reactor Magurele, Bucharest

Authors: Radu Deju, Carmen Mustata

Abstract:

The VVR-S nuclear research reactor owned by the Horia Hulubei National Institute of Physics and Nuclear Engineering (IFIN-HH) was designed for research and radioisotope production and was permanently shut down in 2002, after 40 years of operation. All of the spent nuclear fuel of S-36 and EK-10 type was returned to the Russian Federation (the first shipment in 2009 and the last in 2012), and the radioactive waste resulting from its reprocessing will remain permanently in the Russian Federation. The decommissioning strategy chosen is immediate dismantling. At this moment, radionuclides with half-lives shorter than 1 year make only a minor contribution to the contamination of the materials and equipment used in the reactor department. The decommissioning of the reactor started in 2010 and is planned to be finalized in 2020; it is the first nuclear research reactor in South-East Europe for which a decommissioning project has been started. The management system applied in the decommissioning of the VVR-S research reactor integrates all common elements of management: nuclear safety, occupational health and safety, environment, quality (compliance with the requirements for decommissioning activities), physical protection, and economic elements. This paper presents the application of the integrated management system to the decommissioning of systems, structures, equipment and components (SSEC) from the pump room, including the management of the resulting radioactive waste. The primary cooling system of this type of reactor includes circulation pumps, heat exchangers, a degasser, filter ion exchangers, piping connections, a drainage system and radioactive leaks. All decommissioning activities of the primary circuit were performed in stage 2 (year 2014) and were developed and recorded according to the applicable documents, within the requirements of the Regulatory Body licenses. The presentation emphasizes how the integrated management system provisions are applied in the dismantling of the primary cooling system, covering the elaboration, approval, and application of the necessary documentation and record keeping before, during, and after the dismantling activities. Radiation protection and economics are the key factors in the selection of the proper technology, and dedicated, advanced technologies were chosen to perform specific tasks. Safety aspects have been taken into consideration, and resource constraints have also been an important issue in defining the decommissioning strategy. Important aspects such as radiological monitoring of personnel and areas, decontamination, waste management, and final characterization of the released site are demonstrated and documented.

Keywords: decommissioning, integrated management system, nuclear reactor, waste management

Procedia PDF Downloads 274
180 Enhancing Large Language Models' Data Analysis Capability with Planning-and-Execution and Code Generation Agents: A Use Case for Southeast Asia Real Estate Market Analytics

Authors: Kien Vu, Jien Min Soh, Mohamed Jahangir Abubacker, Piyawut Pattamanon, Soojin Lee, Suvro Banerjee

Abstract:

Recent advances in Generative Artificial Intelligence (GenAI), in particular Large Language Models (LLMs), have shown promise to disrupt multiple industries at scale. However, LLMs also present unique challenges, notably so-called 'hallucinations', the generation of outputs that are not grounded in the input data, which hinders their adoption into production. A common practice to mitigate the hallucination problem is to use a Retrieval Augmented Generation (RAG) system to ground LLMs' responses in ground truth. RAG converts the grounding documents into embeddings, retrieves the relevant parts via vector similarity between the user's query and the documents, and then generates a response that is based not only on the model's pre-trained knowledge but also on the specific information from the retrieved documents. However, the RAG system is not suitable for tabular data and subsequent data analysis tasks for multiple reasons, such as information loss, data format, and the retrieval mechanism. In this study, we explore a novel methodology that combines planning-and-execution and code generation agents to enhance LLMs' data analysis capabilities. The approach enables LLMs to autonomously dissect a complex analytical task into simpler sub-tasks and requirements and then convert them into executable segments of code. In the final step, it generates the complete response from the output of the executed code. When deployed as a beta version on DataSense, the property insight tool of PropertyGuru, the approach yielded promising results: it was able to serve market insight and data visualization needs with high accuracy and extensive coverage, abstracting the complexities away from real-estate agents and developers from non-programming backgrounds. In essence, the methodology not only refines the analytical process but also serves as a strategic tool for real estate professionals, aiding in market understanding and enhancement without the need for programming skills. The implications extend beyond immediate analytics, paving the way for a new era in the real estate industry characterized by efficiency and advanced data utilization.
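
A high-level sketch of the planning-and-execution plus code-generation flow described above is shown below. The prompts, helper names, and sandbox are illustrative assumptions; llm() stands in for any chat-completion call and none of this is PropertyGuru's implementation.

```python
# High-level sketch of the agentic flow: plan -> generate code per sub-task -> execute
# -> synthesize a grounded answer.  All names and prompts are illustrative assumptions.
import json

def llm(prompt: str) -> str:
    raise NotImplementedError("plug in your LLM client here")

def run_in_sandbox(code: str, tables: dict) -> str:
    # Execute generated code against the tabular data (e.g., pandas DataFrames) in an
    # isolated environment and return its printed output.  Never exec untrusted code
    # outside a sandbox.
    raise NotImplementedError

def answer_analytical_question(question: str, tables: dict) -> str:
    # 1) Planning: decompose the complex analytical task into simpler sub-tasks.
    plan = json.loads(llm(
        f"Break the question into an ordered JSON list of analysis sub-tasks.\n"
        f"Available tables: {list(tables)}\nQuestion: {question}"))

    # 2) Execution: turn each sub-task into executable code and run it on the data.
    observations = []
    for step in plan:
        code = llm(f"Write Python (pandas) code for this sub-task: {step}\n"
                   f"Tables available as DataFrames: {list(tables)}")
        observations.append({"step": step, "output": run_in_sandbox(code, tables)})

    # 3) Synthesis: compose the final response from the executed outputs only.
    return llm(f"Question: {question}\nExecuted results: {json.dumps(observations)}\n"
               f"Write the final answer using only these results.")
```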

Keywords: large language model, reasoning, planning and execution, code generation, natural language processing, prompt engineering, data analysis, real estate, data sense, PropertyGuru

Procedia PDF Downloads 48
179 Self-Sensing Concrete Nanocomposites for Smart Structures

Authors: A. D'Alessandro, F. Ubertini, A. L. Materazzi

Abstract:

In the field of civil engineering, structural health monitoring is a topic of growing interest. Effective monitoring instruments permit the control of the working conditions of structures and infrastructures through the identification of behavioral anomalies due to incipient damage, especially in areas of high environmental hazard such as earthquake-prone regions. While traditional sensors can be applied only at a limited number of points, providing partial information for a structural diagnosis, novel transducers may allow diffuse sensing. Thanks to the new tools and materials provided by nanotechnology, new types of multifunctional sensors are emerging in the scientific panorama. In particular, cement-matrix composite materials capable of diagnosing their own state of strain and stress can be obtained by the addition of specific conductive nanofillers. Because of the nature of the material they are made of, these new cementitious nano-modified transducers can be embedded within concrete elements, transforming the structures themselves into sets of distributed sensors. This paper presents the results of research on a new self-sensing nanocomposite and on the implementation of smart sensors for structural health monitoring. The developed nanocomposite was obtained by inserting multi-walled carbon nanotubes within a cementitious matrix. The insertion of such conductive carbon nanofillers provides the base material with piezoresistive characteristics and a particular sensitivity to mechanical changes. The self-sensing ability is achieved by correlating the variation of the external stress or strain with the variation of some electrical properties, such as the electrical resistance or conductivity. Through the measurement of such electrical characteristics, the performance and working conditions of an element or a structure can be monitored. Among conductive carbon nanofillers, carbon nanotubes seem particularly promising for the realization of self-sensing cement-matrix materials. Some issues related to nanofiller dispersion and to the influence of the amount of nano-inclusions in the cement matrix need to be carefully investigated, because the strain sensitivity of the resulting sensors is influenced by such factors. This work analyzes the dispersion of the carbon nanofillers, the physical properties of the fresh mixture, the electrical properties of the hardened composites, and the sensing properties of the realized sensors. The experimental campaign focuses specifically on their dynamic characterization and their applicability to the monitoring of full-scale elements. The results of the electromechanical tests with both slowly varying and dynamic loads show that the developed nanocomposite sensors can be effectively used for the health monitoring of structures.
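
As a minimal illustration of the self-sensing principle, strain can be estimated from the fractional resistance change through a first-order piezoresistive model, ΔR/R₀ = λ·ε, where λ is a gauge factor obtained by calibration; the sketch below uses an assumed gauge factor purely for illustration.

```python
# Minimal sketch of the sensing principle: a first-order piezoresistive model
# dR/R0 = GF * strain.  The gauge factor value is illustrative only; in practice it is
# obtained by calibrating each nanocomposite sensor against a reference gauge.
def strain_from_resistance(r_measured_ohm, r_unstrained_ohm, gauge_factor=150.0):
    """Return the axial strain estimated from the fractional resistance change."""
    delta_r = r_measured_ohm - r_unstrained_ohm
    return (delta_r / r_unstrained_ohm) / gauge_factor

# Example: a 1.2% resistance increase with an assumed gauge factor of 150
eps = strain_from_resistance(1012.0, 1000.0)   # -> 8.0e-05 (80 microstrain)
print(f"estimated strain: {eps:.1e}")
```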

Keywords: carbon nanotubes, self-sensing nanocomposites, smart cement-matrix sensors, structural health monitoring

Procedia PDF Downloads 208
178 The Development of Traffic Devices Using Natural Rubber in Thailand

Authors: Weeradej Cheewapattananuwong, Keeree Srivichian, Godchamon Somchai, Wasin Phusanong, Nontawat Yoddamnern

Abstract:

Natural rubber used for traffic devices in Thailand has been developed and researched for several years. Compared with Dry Rubber Content (DRC) material, the quality of Ribbed Smoked Sheet (RSS) is better; however, the cost of admixtures, especially CaCO₃ and sulphur, is higher than the cost of the RSS itself. In this research, flexible guideposts and rubber fender barriers (RFB) are considered. For the flexible guideposts, both RSS and DRC60% are used, but for RFB, only RSS is used due to the controlled performance tests. The objective of the flexible guideposts and RFB is to decrease the number of accidents, fatality rates, and serious injuries. The function of both devices is to protect road users and vehicles by absorbing impact forces from vehicles so as to reduce serious road accidents; this leads to mitigation measures that reduce motorists' injuries from severe to moderate. The approach is to find the best practice for traffic devices using natural rubber under sound engineering concepts. In addition, material performance, such as tensile strength and durability, is evaluated, and the modulus of elasticity and related properties are calculated. In the laboratory, crash simulation, finite element analysis of the materials, LRFD, and concrete technology methods are taken into account. After calculation, trial compositions of the materials are mixed and tested in the laboratory. The tensile, compressive, and weathering (durability) tests follow ASTM standards. Furthermore, the cycle-repetition test of the flexible guideposts is taken into consideration. The final step is to fabricate all materials and build a real test section in the field. For the RFB, there will be 13 crash tests: 7 pickup truck tests and 6 motorcycle tests. Such vehicular crash testing is happening for the first time in Thailand, applying trial-and-error methods; for example, the road crash test standard was changed from NCHRP TL-3 (100 kph) to MASH 2016. This is because MASH 2016 is better than NCHRP in terms of the speed, types, and weight of vehicles and the crash angle. In the MASH procedure, Test Level 6 (TL-6), which comprises a 2,270 kg pickup truck, 100 kph, and a 25-degree crash angle, is selected. The final real crash test will be performed, and the whole system will be evaluated again in Korea. The researchers hope that the number of road accidents will decrease and that Thailand will no longer be among the top ten countries in the world for road accidents.
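
For context, the severity of the stated crash-test condition can be characterized by the vehicle's kinetic energy and its lateral component at the crash angle; the short worked calculation below uses the quoted 2,270 kg, 100 kph, and 25-degree parameters and is illustrative only, not part of the test programme.

```python
# Worked illustration (not from the paper): kinetic energy and lateral impact severity
# for the stated crash-test parameters, using IS = 0.5 * m * (v * sin(theta))**2.
import math

mass_kg, speed_kph, angle_deg = 2270.0, 100.0, 25.0
v = speed_kph / 3.6                                            # ~27.8 m/s
total_ke_kj = 0.5 * mass_kg * v**2 / 1000.0                    # ~876 kJ
impact_severity_kj = 0.5 * mass_kg * (v * math.sin(math.radians(angle_deg)))**2 / 1000.0  # ~156 kJ
print(f"total kinetic energy ~ {total_ke_kj:.0f} kJ, lateral impact severity ~ {impact_severity_kj:.0f} kJ")
```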

Keywords: LRFD, load and resistance factor design, ASTM, American Society for Testing and Materials, NCHRP, National Cooperative Highway Research Program, MASH, Manual for Assessing Safety Hardware

Procedia PDF Downloads 100
177 The Advancement of Smart Cushion Product and System Design Enhancing Public Health and Well-Being at Workplace

Authors: Dosun Shin, Assegid Kidane, Pavan Turaga

Abstract:

According to the National Institutes of Health, living a sedentary lifestyle leads to a number of health issues, including increased risk of cardiovascular disease, type 2 diabetes, obesity, and certain types of cancer. This project brings together experts in multiple disciplines, combining product design, sensor design, algorithms, and health intervention studies to develop a product and system that help reduce the amount of time spent sitting at the workplace. This paper illustrates ongoing improvements to the prototypes the research team developed in its initial research, including working prototypes with a software application that were developed and demonstrated for users. Additional modifications were made to improve functionality, aesthetics, and ease of use, which will be discussed in this paper. Extending the foundations created in the initial phase, our approach sought to further improve the product by conducting additional human factors research, studying deficiencies in competitive products, testing various materials and forms, developing working prototypes, and obtaining feedback from additional potential users. The solution consists of an aesthetically pleasing seat cover cushion that easily attaches to common office chairs found in most workplaces, ensuring that a wide variety of people can use the product. The product discreetly contains sensors that track when the user sits on their chair, sending information to a phone app that triggers reminders for users to stand up and move around after sitting for a set amount of time. This paper also presents the analysis of typical office aesthetics and the selected materials, colors, and forms that complement the working environment. Comfort and ease of use remained high priorities as the design team sought to provide a product and system that integrate into the workplace. As the research team continues to test, improve, and implement this solution for the sedentary workplace, it seeks to create a viable product that acts as an impetus for a more active workday and lifestyle, further decreasing the proliferation of chronic disease and health issues for sedentary working people. This paper illustrates in detail the processes of engineering, product design, methodology, and testing results.
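
The reminder logic described above, detecting continuous seat occupancy and prompting the user after a set sitting time, can be sketched as below; the class, threshold, and notification hook are illustrative assumptions rather than the product's actual firmware or app code.

```python
# Illustrative sketch (not the product's code) of the reminder logic: track continuous
# seat occupancy reported by the cushion sensor and trigger a stand-up prompt once
# sitting exceeds a configurable threshold.
class SittingReminder:
    def __init__(self, threshold_s=30 * 60, notify=print):
        self.threshold_s = threshold_s      # e.g. remind after 30 minutes of sitting
        self.notify = notify                # hook for the phone app's notification
        self.sitting_since = None
        self.reminded = False

    def on_sensor_sample(self, occupied: bool, timestamp_s: float):
        if not occupied:                    # user stood up: reset the timer
            self.sitting_since, self.reminded = None, False
            return
        if self.sitting_since is None:      # sitting just started
            self.sitting_since = timestamp_s
        elif not self.reminded and timestamp_s - self.sitting_since >= self.threshold_s:
            self.notify("Time to stand up and move around!")
            self.reminded = True

# Example usage with simulated sensor samples (timestamps in seconds)
reminder = SittingReminder(threshold_s=1800)
for t in range(0, 2000, 100):
    reminder.on_sensor_sample(occupied=True, timestamp_s=float(t))
```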

Keywords: anti-sedentary work behavior, new product development, sensor design, health intervention studies

Procedia PDF Downloads 130
176 Ethicality of Algorithmic Pricing and Consumers’ Resistance

Authors: Zainab Atia, Hongwei He, Panagiotis Sarantopoulos

Abstract:

Over the past few years, firms have witnessed a massive increase in the deployment of sophisticated algorithms, which have become quite pervasive in today's society. With the wide availability of data for retailers, the ability to track consumers using algorithmic pricing has become an integral option on online platforms. As more companies transform their businesses and rely on massive technological advancement, algorithmic pricing systems have attracted attention and seen wide adoption, with many accompanying benefits and challenges found in their usage. With the overall aim of increasing organizational profits, algorithmic pricing is becoming a sound option by enabling suppliers to cut costs, allowing better services, improving efficiency and product availability, and enhancing overall consumer experiences. The adoption of algorithms in retail has been pioneered and widely studied in the literature across varied fields, including marketing, computer science, engineering, economics, and public policy. However, what is more pressing today is a comprehensive understanding of this technology and its associated ethical influence on consumers' perceptions and behaviours. Indeed, due to algorithmic ethical concerns, consumers are found to be reluctant in some instances to share their personal data with retailers, which reduces retention and leads to negative consumer outcomes. This, in turn, raises the question of whether firms can still secure consumer acceptance of such technologies while minimizing the ethical transgressions that accompany their deployment. As recent research in marketing and consumer behaviour on this topic remains modest, the current research advances the literature on algorithmic pricing, pricing ethics, consumers' perceptions, and price fairness. With its empirical focus, this paper aims to contribute to the literature by applying the distinction between the two common types of algorithmic pricing, dynamic and personalized, while measuring their relative effects on consumers' behavioural outcomes. From a managerial perspective, this research offers significant implications for providing a better human-machine interactive environment (whether online or offline) to improve both businesses' overall performance and consumers' wellbeing. By allowing more transparent pricing systems, businesses can harness ethical strategies that foster consumers' loyalty and extend positive post-purchase behaviour. Thus, by defining the correct balance of pricing and the right measures, whether using dynamic or personalized pricing (or both), managers can approach consumers more ethically while giving their expectations and responses critical consideration.
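
A toy sketch of the distinction drawn above between the two algorithmic pricing types is given below; the coefficients and inputs are invented for illustration and do not represent any firm's actual pricing algorithm.

```python
# Toy sketch contrasting the two pricing types studied above.  All coefficients and
# inputs are illustrative assumptions only.
def dynamic_price(base_price, demand_index, inventory_ratio):
    """Same price for every buyer at a given moment, shifted by market conditions."""
    return base_price * (1 + 0.3 * (demand_index - 1) - 0.2 * (inventory_ratio - 1))

def personalized_price(base_price, willingness_to_pay_score, loyalty_discount=0.05):
    """Price tailored to an individual consumer's profile (the ethically sensitive case)."""
    return base_price * (1 + 0.25 * willingness_to_pay_score) * (1 - loyalty_discount)

print(dynamic_price(100.0, demand_index=1.4, inventory_ratio=0.8))      # 116.0
print(personalized_price(100.0, willingness_to_pay_score=0.6))          # 109.25
```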

Keywords: algorithmic pricing, dynamic pricing, personalized pricing, price ethicality

Procedia PDF Downloads 66
175 Five Years Analysis and Mitigation Plans on Adjustment Orders Impacts on Projects in Kuwait's Oil and Gas Sector

Authors: Rawan K. Al-Duaij, Salem A. Al-Salem

Abstract:

Projects, the unique and temporary endeavours of achieving a set of requirements, have always been challenging; planning the schedule and budget and managing the resources and risks are mostly driven by similar past experience or the technical consultations of experts in the matter. Given that complexity of projects in scope, time, and execution environment, Adjustment Orders are tools to reflect changes to the original project parameters after contract signature. Adjustment Orders are the official and legal amendments to the terms and conditions of a live contract. Reasons for issuing Adjustment Orders arise from changes in contract scope, technical requirements, and specifications, resulting in scope addition, deletion, or alteration; they can also involve a combination of these parameters, resulting in an increase or decrease in time and/or cost. Most business leaders (handling projects in the interest of the owner) refrain from using Adjustment Orders, considering their main objectives of staying within budget and on schedule. Success in managing the changes results in uninterrupted execution and agreed project costs and schedule; nevertheless, this is not always practically achievable. In this paper, a detailed study utilizing Industrial Engineering and Systems Management tools such as Six Sigma, data analysis, and quality control was conducted on the organization's five-year records of issued Adjustment Orders in order to investigate their prevalence and their time and cost impacts. The analysis helped identify and categorize the predominant causes with the highest impacts, which weighed most heavily in recommending the corrective measures needed to minimize Adjustment Order impacts. The data analysis demonstrated no specific trend in Adjustment Order frequency over the past five years; however, the time impact is greater than the cost impact. Although Adjustment Orders may never be entirely avoidable, this analysis offers some insight into the procedural gaps and where they most heavily impact the organization. Possible solutions are proposed, such as improving the project-handling team's coordination and communication, utilizing a blanket service contract, and modifying the project gate system procedures to minimize the possibility of similar struggles in the future. Projects in the Oil and Gas sector are always evolving and demand a certain amount of flexibility to sustain the goals of the field. As will be demonstrated, uncertainty in project parameters, inadequate project definition, operational constraints, and stringent procedures are the main factors resulting in the need for Adjustment Orders; accordingly, the recommendations address that challenge.
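
The Pareto-style categorization of Adjustment Order causes and impacts described above can be sketched with pandas as below; the records, column names, and categories are hypothetical and are not the organization's data.

```python
# Sketch of the Pareto-style categorization described above, on hypothetical Adjustment
# Order records; column names, categories and figures are invented for illustration.
import pandas as pd

records = pd.DataFrame({
    "cause": ["scope addition", "spec change", "operational constraint", "scope deletion",
              "spec change", "scope addition", "inadequate definition", "scope addition"],
    "schedule_impact_days": [45, 20, 60, 10, 15, 30, 90, 25],
    "cost_impact_kusd": [120, 40, 15, -30, 35, 80, 10, 60],
})

summary = (records.groupby("cause")
           .agg(count=("cause", "size"),
                total_days=("schedule_impact_days", "sum"),
                total_kusd=("cost_impact_kusd", "sum"))
           .sort_values("total_days", ascending=False))
summary["cum_share_days"] = summary["total_days"].cumsum() / summary["total_days"].sum()
print(summary)   # the top rows are the predominant causes to target with corrective measures
```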

Keywords: adjustment orders, data analysis, oil and gas sector, systems management

Procedia PDF Downloads 136