Search results for: real gas model
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 20328

18288 An Ontology Model for Systems Engineering Derived from ISO/IEC/IEEE 15288: 2015: Systems and Software Engineering - System Life Cycle Processes

Authors: Lan Yang, Kathryn Cormican, Ming Yu

Abstract:

ISO/IEC/IEEE 15288: 2015, Systems and Software Engineering - System Life Cycle Processes is an international standard that provides generic top-level process descriptions to support systems engineering (SE). However, the processes defined in the standard need improvement in integrity and consistency. The goal of this research is to address this by building an ontology model of the SE standard to manage SE knowledge. The ontology model gives a whole picture of the SE knowledge domain by building connections between SE concepts. Moreover, it creates a hierarchical classification of the concepts to fulfil different requirements for displaying and analysing SE knowledge.
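
As a minimal sketch of what such a model can look like in practice, the fragment below encodes a small concept hierarchy and one inter-concept connection with Python's rdflib; the class and property names are illustrative assumptions, not the authors' actual ontology.

```python
# Minimal sketch of an SE ontology fragment with rdflib; class and property
# names are illustrative assumptions, not the authors' actual model.
from rdflib import Graph, Literal, Namespace, RDF, RDFS

SE = Namespace("http://example.org/se-ontology#")
g = Graph()
g.bind("se", SE)

# Hierarchical classification of SE concepts
g.add((SE.Process, RDF.type, RDFS.Class))
g.add((SE.TechnicalProcess, RDFS.subClassOf, SE.Process))
g.add((SE.StakeholderNeedsDefinition, RDFS.subClassOf, SE.TechnicalProcess))

# Connections between concepts
g.add((SE.StakeholderNeedsDefinition, SE.producesOutput, SE.StakeholderRequirement))
g.add((SE.StakeholderRequirement, RDFS.label, Literal("Stakeholder requirement")))

print(g.serialize(format="turtle"))
```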

Keywords: knowledge management, model-based systems engineering, ontology modelling, systems engineering ontology

Procedia PDF Downloads 423
18287 Estimation of Consolidating Settlement Based on a Time-Dependent Skin Friction Model Considering Column Surface Roughness

Authors: Jiang Zhenbo, Ishikura Ryohei, Yasufuku Noriyuki

Abstract:

Improvement of soft clay deposits by the combination of surface stabilization and floating-type cement-treated columns is one of the most popular techniques worldwide. On the basis of a one-dimensional consolidation model, a time-dependent skin friction model for the column-soil interaction is proposed. The nonlinear relationship between column shaft shear stresses and the effective vertical pressure of the surrounding soil can be described by this model. The influence of column-soil surface roughness can be represented using a roughness coefficient R, which plays an important role in the design of column length. Based on the homogenization method, part of the floating-type improved ground is treated as an unimproved portion, whose length αH1 is defined as a time-dependent equivalent skin friction length. The compression settlement of this unimproved portion can be predicted using the soft clay parameters alone. Apart from calculating the settlement of this composite ground, the load transfer mechanism is discussed using model tests. The proposed model is validated by comparison with calculations and laboratory results of model and ring shear tests, which indicate the suitability and accuracy of the solutions in this paper.

Keywords: floating type improved foundation, time-dependent skin friction, roughness, consolidation

Procedia PDF Downloads 463
18286 Economic Loss due to Ganoderma Disease in Oil Palm

Authors: K. Assis, K. P. Chong, A. S. Idris, C. M. Ho

Abstract:

Oil palm, or Elaeis guineensis, is considered the golden crop in Malaysia. However, the oil palm industry in the country is now facing its most devastating disease, Ganoderma basal stem rot. The objective of this paper is to analyze the economic loss due to this disease. Three commercial oil palm sites were selected for collecting the data required for the economic analysis. The yield parameter used to measure the loss was the total weight of fresh fruit bunches over six months. The predictors include disease severity, change in disease severity, number of infected neighbouring palms, age of palm, planting generation, topography, and first-order interaction variables. The yield loss estimation model was identified using a backward-elimination-based regression method. Diagnostic checking was conducted on the residuals of the best yield loss model. The mean absolute percentage error (MAPE) was used to measure the forecast performance of the model. The best yield loss model was then used to estimate the economic loss, using the current monthly price of fresh fruit bunches at the mill gate.
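
The sketch below illustrates that estimation workflow (backward elimination over an ordinary-least-squares model, followed by a MAPE check); the significance threshold and helper names are assumptions, not details from the paper.

```python
# Sketch of backward-elimination regression plus a MAPE forecast check.
# The 0.05 threshold and the function names are illustrative assumptions.
import numpy as np
import pandas as pd
import statsmodels.api as sm

def backward_eliminate(X: pd.DataFrame, y: pd.Series, alpha: float = 0.05):
    """Drop the least significant predictor until all p-values < alpha."""
    cols = list(X.columns)
    while cols:
        model = sm.OLS(y, sm.add_constant(X[cols])).fit()
        pvals = model.pvalues.drop("const")
        worst = pvals.idxmax()
        if pvals[worst] < alpha:          # every remaining predictor significant
            return model, cols
        cols.remove(worst)                # eliminate the weakest predictor
    raise ValueError("no significant predictors remain")

def mape(actual, predicted):
    """Mean absolute percentage error of the fitted yield-loss model."""
    actual, predicted = np.asarray(actual), np.asarray(predicted)
    return np.mean(np.abs((actual - predicted) / actual)) * 100
```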

Keywords: ganoderma, oil palm, regression model, yield loss, economic loss

Procedia PDF Downloads 381
18285 Enhancements to the Coupled Hydro-Mechanical Hypoplastic Model for Unsaturated Soils

Authors: Shanujah Mathuranayagam, William Fuentes, Samanthika Liyanapathirana

Abstract:

This paper introduces an enhanced version of the coupled hydro-mechanical hypoplastic model. The model is able to simulate volumetric collapse upon wetting and incorporates suction effects on stiffness and strength. Its mechanical constitutive equation links Bishop's effective stress with strain and suction, featuring a normal consolidation line (NCL) whose compression index (λ) depends non-linearly on the degree of saturation. The bulk modulus has been modified to ensure that, under rapid volumetric collapse, the stress state remains on the NCL. The coupled model comprises eighteen parameters: nine for the hydraulic component and nine for the mechanical component. The hydraulic parameters are calibrated using water retention curves (IWRC) across varied soil densities, while the mechanical parameters are calibrated using isotropic and triaxial tests on both unsaturated and saturated samples. The model's performance is analyzed through the back-calculation of two experimental studies: (i) wetting under different vertical stresses for Lower Cromer Till and (ii) isotropic and triaxial loading for undisturbed loess. The results confirm that the proposed model is able to predict the hydro-mechanical behavior of unsaturated soils.

Keywords: hypoplastic model, volumetric collapse, normal consolidation line, compression index (λ), degree of saturation, soil suction

Procedia PDF Downloads 57
18284 Restricted Boltzmann Machines and Deep Belief Nets for Market Basket Analysis: Statistical Performance and Managerial Implications

Authors: H. Hruschka

Abstract:

This paper presents the first comparison of the performance of the restricted Boltzmann machine and the deep belief net on binary market basket data relative to binary factor analysis and the two best-known topic models, namely latent Dirichlet allocation and the correlated topic model. This comparison shows that the restricted Boltzmann machine and the deep belief net are superior to both binary factor analysis and the topic models. Managerial implications that differ between the investigated models are treated as well. The restricted Boltzmann machine is defined as a joint Boltzmann distribution of hidden variables and observed variables (purchases). It comprises one layer of observed variables and one layer of hidden variables; variables of the same layer are not connected. The comparison also includes deep belief nets with three layers. The first layer is a restricted Boltzmann machine based on category purchases. Hidden variables of the first layer are used as input variables by the second-layer restricted Boltzmann machine, which then generates second-layer hidden variables. Finally, in the third layer, hidden variables are related to purchases. A public data set is analyzed which contains one month of real-world point-of-sale transactions in a typical local grocery outlet. It consists of 9,835 market baskets referring to 169 product categories. This data set is randomly split into two halves: one half is used for estimation, the other serves as holdout data. Each model is evaluated by the log likelihood for the holdout data. Performance of the topic models is disappointing, as the holdout log likelihood of the correlated topic model (which is better than latent Dirichlet allocation) is lower by more than 25,000 compared to the best binary factor analysis model. On the other hand, binary factor analysis on its own is clearly surpassed by both the restricted Boltzmann machine and the deep belief net, whose holdout log likelihoods are higher by more than 23,000. Overall, the deep belief net performs best. We also interpret the hidden variables discovered by binary factor analysis, the restricted Boltzmann machine, and the deep belief net. Hidden variables, characterized by the product categories to which they are related, differ strongly between these three models. To derive managerial implications, we assess the effect of promoting each category on total basket size, i.e., the number of purchased product categories, due to each category's interdependence with all the other categories. The investigated models lead to very different implications, as they disagree about which categories are associated with higher basket-size increases due to a promotion. Of course, recommendations based on better-performing models should be preferred. The impressive performance advantages of the restricted Boltzmann machine and the deep belief net suggest continuing research with appropriate extensions. Including predictors, especially marketing variables such as price, seems an obvious next step. It might also be feasible to take a more detailed perspective by considering purchases of brands instead of product categories.
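
For a concrete starting point, the sketch below fits a single RBM layer on binary basket data with scikit-learn's BernoulliRBM; the data here are random stand-ins, and scikit-learn's score_samples yields a pseudo-likelihood proxy rather than the exact holdout log likelihood used in the paper.

```python
# Sketch: RBM on binary market baskets (rows = baskets, cols = 169 categories).
# The data below are random stand-ins; score_samples() reports a
# pseudo-likelihood proxy, not the exact holdout log likelihood of the paper.
import numpy as np
from sklearn.neural_network import BernoulliRBM

rng = np.random.default_rng(0)
baskets = (rng.random((9835, 169)) < 0.05).astype(np.float64)
train, holdout = baskets[: len(baskets) // 2], baskets[len(baskets) // 2:]

rbm = BernoulliRBM(n_components=20, learning_rate=0.05, n_iter=30, random_state=0)
rbm.fit(train)
print("mean pseudo-log-likelihood (holdout):", rbm.score_samples(holdout).mean())

# First-layer hidden activations, usable as input to a second-layer RBM
# when stacking layers into a deep belief net:
hidden = rbm.transform(holdout)
```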

Keywords: binary factor analysis, deep belief net, market basket analysis, restricted Boltzmann machine, topic models

Procedia PDF Downloads 195
18283 Cognitive Footprints: Analytical and Predictive Paradigm for Digital Learning

Authors: Marina Vicario, Amadeo Argüelles, Pilar Gómez, Carlos Hernández

Abstract:

In this paper, the Computer Research Network of the National Polytechnic Institute of Mexico proposes a paradigmatic model for the inference of cognitive patterns in digital learning systems. This model leads to a metadata architecture useful for analysis and prediction in online learning systems, especially MOOC architectures. The model is in the design phase and is expected to be tested through an institutional MOOC course project currently under development.

Keywords: cognitive footprints, learning analytics, predictive learning, digital learning, educational computing, educational informatics

Procedia PDF Downloads 475
18282 Creeping Control Strategy for Direct Shift Gearbox Based on the Investigation of Temperature Variation of the Wet Clutch

Authors: Biao Ma, Jikai Liu, Man Chen, Jianpeng Wu, Liyong Wang, Changsong Zheng

Abstract:

Proposing an appropriate control strategy is an effective and practical way to address the overheating of the wet multi-plate clutch in a Direct Shift Gearbox under long-time creeping conditions. To do so, the temperature variation of the wet multi-plate clutch is first investigated by establishing a thermal resistance model for the gearbox cooling system. To calculate the generated heat flux and predict the clutch temperature precisely, the friction torque model is optimized by introducing an improved friction coefficient, which is related to the pressure, the relative speed, and the temperature. After that, the heat transfer model and the refined friction torque model are employed by the vehicle powertrain model to construct a comprehensive co-simulation model for the Direct Shift Gearbox (DSG) vehicle. A creeping control strategy is then proposed and, to evaluate the vehicle performance, the safety temperature (250 ℃) is adopted as an important metric. During the creeping process, the temperature of both clutches always stays below the safety value (250 ℃), which demonstrates the effectiveness of the proposed control strategy in avoiding thermal failure of the clutches.
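
The thermal-resistance idea can be illustrated with a lumped-capacitance sketch of a single clutch; the friction-coefficient form and every constant below are invented placeholders, not the paper's calibrated model.

```python
# Hedged illustration of a lumped thermal-resistance clutch model; the
# friction-coefficient form and all constants are placeholders, not the
# paper's calibrated values.
import numpy as np

def mu(p, v, T):
    """Illustrative friction coefficient: rises with pressure, falls with
    relative sliding speed and temperature."""
    return 0.12 * (1 + 0.02 * p) / (1 + 0.05 * v) * (1 - 2e-4 * (T - 20.0))

def clutch_temperature(t_end=120.0, dt=0.01, p=10.0, v=3.0,
                       m_c=2.0, c=460.0, R_th=0.08, T_oil=90.0, F_n=3000.0):
    """Integrate dT/dt = (Q_friction - (T - T_oil) / R_th) / (m * c)."""
    T = T_oil
    for _ in np.arange(0.0, t_end, dt):
        q_gen = mu(p, v, T) * F_n * v              # friction heat input, W
        T += dt * (q_gen - (T - T_oil) / R_th) / (m_c * c)
    return T

print(f"clutch temperature after creeping: {clutch_temperature():.1f} degC (limit 250 degC)")
```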

Keywords: creeping control strategy, direct shift gearbox, temperature variation, wet clutch

Procedia PDF Downloads 131
18281 Reliability Evaluation of a Payment Model in Mobile E-Commerce Using Colored Petri Net

Authors: Abdolghader Pourali, Mohammad V. Malakooti, Muhammad Hussein Yektaie

Abstract:

A mobile payment system in mobile e-commerce must generally offer high security so that users can trust it for business deals, sales, financial transactions, and so on. However, an architecture or payment model in e-commerce only shows the way of interaction and collaboration among users and merchants; it does not present to stakeholders any evaluation of the effectiveness of, and confidence in, financial transactions. In this paper, we present a detailed assessment of the reliability of a mobile payment model in mobile e-commerce using formal models and colored Petri nets. Finally, we demonstrate that the reliability of this system has a high value (case study: a secure payment model in mobile commerce).

Keywords: reliability, colored Petri net, assessment, payment models, m-commerce

Procedia PDF Downloads 535
18280 A New Categorization of Image Quality Metrics Based on a Model of Human Quality Perception

Authors: Maria Grazia Albanesi, Riccardo Amadeo

Abstract:

This study presents a new model of the human image quality assessment process. The aim is to highlight the foundations of the image quality metrics proposed in the literature by identifying the cognitive/physiological or mathematical principles of their development and their relation to the actual human quality assessment process. The model allows the creation of a novel categorization of objective and subjective image quality metrics. Our work includes an overview of the most used or effective objective metrics in the literature, and, for each of them, we underline its main characteristics with reference to the rationale of the proposed model and categorization. From the results of this operation, we underline a problem that affects all the presented metrics: many aspects of human biases are not taken into account at all. We then propose a possible methodology to address this issue.

Keywords: eye-tracking, image quality assessment metric, MOS, quality of user experience, visual perception

Procedia PDF Downloads 406
18279 Automated Tracking and Statistics of Vehicles at the Signalized Intersection

Authors: Qiang Zhang, Xiaojian Hu

Abstract:

Intersections are places where vehicles and pedestrians must pass through, turn, and disperse. Obtaining the motion data of vehicles near an intersection is of great significance for transportation research. Because there are usually many targets, with frequent conflicts between them, it is difficult to obtain vehicle motion parameters from traffic videos of intersections. According to the characteristics of traffic videos, this paper applies video technology to the automated tracking, counting, and trajectory extraction of vehicles, collecting traffic data from roadside surveillance cameras installed near intersections. Based on the video recognition method, the vehicles in each lane near the intersection are tracked, their trajectories extracted, and counts taken under various degrees of occlusion and visibility. The performance is compared with currently recognized CPU-based algorithms for real-time tracking-by-detection. The presented system is faster than the others and has better real-time performance. The accuracy of direction has reached about 94.99% on average, and the accuracy of classification and statistics has reached about 75.12% on average.
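
The paper's recognition pipeline is not specified in enough detail to reproduce, so the sketch below uses generic OpenCV background subtraction and a naive counting line purely as a stand-in; the video file name and all thresholds are assumptions.

```python
# Generic stand-in for roadside vehicle detection and counting; this is not
# the paper's recognition method. File name and thresholds are assumptions.
import cv2

cap = cv2.VideoCapture("intersection.mp4")          # assumed video file
subtractor = cv2.createBackgroundSubtractorMOG2(history=500, varThreshold=25)
kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (5, 5))
count_line_y, counted = 400, 0                      # illustrative counting line

while True:
    ok, frame = cap.read()
    if not ok:
        break
    mask = subtractor.apply(frame)                  # foreground (moving) pixels
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    for c in contours:
        x, y, w, h = cv2.boundingRect(c)
        # Naive line-crossing count on sufficiently large blobs
        if w * h > 1500 and count_line_y <= y + h // 2 < count_line_y + 5:
            counted += 1

print("vehicles counted:", counted)
```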

Keywords: tracking and statistics, vehicle, signalized intersection, motion parameter, trajectory

Procedia PDF Downloads 214
18278 Establishment of a Classifier Model for Early Prediction of Acute Delirium in Adult Intensive Care Unit Using Machine Learning

Authors: Pei Yi Lin

Abstract:

Objective: The objective of this study is to use machine learning methods to build an early prediction classifier model for acute delirium, to improve the quality of medical care for intensive care patients. Background: Delirium is a common acute and sudden disturbance of consciousness in critically ill patients. Once it occurs, it tends to prolong the length of hospital stay and increase medical costs and mortality. In 2021, the incidence of delirium in the internal medicine intensive care unit was as high as 59.78%, which indirectly prolonged the average length of hospital stay by 8.28 days; the mortality rate was about 2.22% over the past three years. We therefore aim to build a delirium prediction classifier through big data analysis and machine learning to detect delirium early. Method: This is a retrospective study using an artificial intelligence big data database to extract the characteristic factors related to delirium in intensive care unit patients for machine learning. The study included patients aged over 20 years who were admitted to the intensive care unit between May 1, 2022, and December 31, 2022, excluding those with a GCS assessment <4 points, an ICU admission of less than 24 hours, or no CAM-ICU evaluation. Each CAM-ICU delirium assessment, taken every 8 hours within 30 days of hospitalization, is regarded as an event, and the cumulative data from ICU admission to the prediction time point are used to predict the possibility of delirium occurring in the next 8 hours. A total of 63,754 case records were collected, and 12 features were selected to train the model, including age, sex, average ICU stay in hours, visual and auditory abnormalities, RASS score, APACHE-II score, number of indwelling invasive catheters, physical restraint, and sedative and hypnotic drugs. After feature data cleaning, processing, and KNN imputation, a total of 54,595 events were available for model training and analysis. Events from May 1 to November 30, 2022, were used as the model development data, with 80% as the training set and 20% as the internal validation set; events from December 1 to December 31, 2022, formed the external validation set. Model inference and performance evaluation were then performed, and the models were retrained with adjusted parameters. Results: XGBoost, Random Forest, Logistic Regression, and Decision Tree models were built and compared. Random Forest achieved the highest internal validation performance (AUC = 0.86); Random Forest and XGBoost tied for the highest external validation performance (AUC = 0.86); and Random Forest had the highest cross-validation accuracy (ACC = 0.77). Conclusion: Clinically, medical staff usually conduct CAM-ICU assessments at the bedside of critically ill patients, but there is a lack of machine learning classification methods to assist with real-time assessment of ICU patients, so clinical staff cannot draw on objective, continuous monitoring data to identify and predict the occurrence of delirium more accurately. It is hoped that predictive models built through machine learning can predict delirium early and immediately, support clinical decisions at the best time, and work alongside PADIS delirium care measures to provide individualized non-drug interventions that maintain patient safety and improve the quality of care.
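
The described split-and-validate procedure can be sketched as follows; the file name, column names (an illustrative subset of the 12 features), and hyperparameters are assumptions rather than the study's actual configuration.

```python
# Sketch of the pipeline: 80/20 training/internal-validation split on earlier
# events, later events as external validation, AUC comparison. File and
# column names are assumptions, not the study's configuration.
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import cross_val_score, train_test_split

events = pd.read_csv("icu_events.csv", parse_dates=["timestamp"])  # assumed file
features = ["age", "sex", "icu_stay_hours", "visual_abnormality",   # illustrative
            "auditory_abnormality", "rass_score", "apache2_score",  # subset of
            "n_invasive_catheters", "restraint", "sedative_hypnotic"]  # 12 features
target = "delirium_next_8h"

dev = events[events.timestamp < "2022-12-01"]        # May-November events
external = events[events.timestamp >= "2022-12-01"]  # December events

X_tr, X_val, y_tr, y_val = train_test_split(
    dev[features], dev[target], test_size=0.2, random_state=42)

rf = RandomForestClassifier(n_estimators=300, random_state=42)
rf.fit(X_tr, y_tr)
print("internal AUC:", roc_auc_score(y_val, rf.predict_proba(X_val)[:, 1]))
print("external AUC:", roc_auc_score(external[target],
                                     rf.predict_proba(external[features])[:, 1]))
print("CV accuracy :", cross_val_score(rf, dev[features], dev[target], cv=5).mean())
```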

Keywords: critically ill patients, machine learning methods, delirium prediction, classifier model

Procedia PDF Downloads 70
18277 3D Hybrid Multiphysics Lattice Boltzmann Model for Studying the Flow Behavior of Emulsions in Structured Rectangular Microchannels

Authors: Luma Al-Tamimi, Hassan Farhat, Wessam Hasan

Abstract:

A three-dimensional (3D) hybrid quasi-steady thermal lattice Boltzmann model is developed to couple the effects of surfactant, temperature, interfacial tension, and contact angle. This 3D model is an extended scheme of a previously introduced two-dimensional (2D) hybrid lattice Boltzmann model. The 3D model is used to study the combined multi-physics effects on emulsion systems flowing in rectangular microchannels with and without confinements, where the suspended phase is made of droplets, plugs, or a mixture of both. The simulation results show that emulsion systems with plugs as the suspended phase are more efficient than those with droplets, whereas mixed systems that form large plugs through coalescence have even greater efficiency. The 3D contact angle model generates results matching those of the 2D model, which were validated with experiments. Furthermore, the effects of various confinements on adhering single-drop systems are investigated to delineate their influence on the power required to transport the suspended phase through the channel. It is shown that the deeper the constriction, the lower the system efficiency. Increasing the surfactant concentration or fluid temperature in a channel with confinement has a substantial positive effect on oil droplet transport.

Keywords: lattice Boltzmann method, thermal, contact angle, surfactants, high viscosity ratio, porous media

Procedia PDF Downloads 173
18276 Performance Evaluation of Refinement Method for Wideband Two-Beams Formation

Authors: C. Bunsanit

Abstract:

This paper presents a refinement method for two-beam formation with a wideband smart antenna. The refinement of the weighting coefficients is based on fully spatial signal processing, taking the Inverse Discrete Fourier Transform (IDFT); simulation results are presented using MATLAB. The radiation pattern is created by multiplying the incoming signal with real weights and then summing the products. These real weighting coefficients are computed by the IDFT method; however, the resulting range of weight values is relatively wide. The refinement method is therefore used to reduce this range. The radiation pattern is controlled by five input parameters: maximum weighting coefficient, wideband signal, main beam direction, beamwidth, and maximum minor-lobe level. Comparison of the simulation results obtained with the refinement method against those using the IDFT alone shows that the refinement method works well for wideband two-beam formation.
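
A hedged numpy sketch of the underlying idea follows: sample a desired two-beam pattern, take the IDFT to obtain real weights, then bound the weight range. The exact refinement rule is not given in the abstract, so the clipping step and all numbers here are illustrative only.

```python
# Hedged sketch: desired two-beam pattern -> IDFT -> real weights -> bounded
# ("refined") weight range. The author's exact refinement rule is not given
# in the abstract; the clipping step and all numbers are illustrative only.
import numpy as np

N = 16                                    # number of array elements (assumed)
theta = np.linspace(-np.pi / 2, np.pi / 2, N)

# Desired pattern: unity gain at the two main-beam directions, zero elsewhere
desired = np.zeros(N)
desired[np.argmin(np.abs(theta - np.deg2rad(-30)))] = 1.0
desired[np.argmin(np.abs(theta - np.deg2rad(40)))] = 1.0

weights = np.real(np.fft.ifft(desired))   # real weighting coefficients via IDFT

w_max = 0.08                              # illustrative maximum weight value
refined = np.clip(weights, -w_max, w_max) # refinement: narrow the weight range
print("raw range:    ", weights.min(), weights.max())
print("refined range:", refined.min(), refined.max())
```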

Keywords: fully spatial signal processing, beam forming, refinement method, smart antenna, weighting coefficient, wideband

Procedia PDF Downloads 222
18275 Holistic Risk Assessment Based on Continuous Data from the User’s Behavior and Environment

Authors: Cinzia Carrodano, Dimitri Konstantas

Abstract:

Risk is part of our lives. In today's society, risk is connected to our safety, and safety has become a major priority. Each person lives his/her life based on an evaluation of the risk he/she is ready to accept and sustain, and the level of safety he/she wishes to reach, based on highly personal criteria. The assessment of the risk a person takes in a complex environment, and the impact of other people's actions and of events on our perception of risk, are elements to be considered. The concept of Holistic Risk Assessment (HRA) aims at developing a methodology and a model that will allow us to take into account elements outside the direct influence of the individual and provide a personalized risk assessment. The concept is based on the fact that, in the near future, we will be able to gather and process extremely large amounts of data about an individual and his/her environment in real time. The interaction and correlation of these data is the key element of the holistic risk assessment. In this paper, we present the HRA concept and describe its most important elements and considerations.

Keywords: continuous data, dynamic risk, holistic risk assessment, risk concept

Procedia PDF Downloads 123
18274 Human Machine Interface for Controlling a Robot Using Image Processing

Authors: Ambuj Kumar Gautam, V. Vasu

Abstract:

This paper introduces a head-movement-based Human Machine Interface (HMI) that uses right and left movements of the head to control a robot's motion. We present an effective technique for a real-time face-orientation information system to control a robot, one that can be efficiently applied to an Electrical Powered Wheelchair (EPW). The project is thus aimed at HMI applications. The system (machine) identifies the orientation of the face movement with respect to the pixel values of the image in certain areas. Initially, we take an image and divide the whole image into three parts on the basis of its number of columns. Depending on the orientation of the face, the maximum pixel values within approximately the same (R, G, B) range lie in one of the divided parts of the image. This information is transferred to the microcontroller through a serial communication port to control the motion of the robot (forward motion, left and right turns, and stop) in real time using head movements.
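
A minimal sketch of the thirds-based decision follows: split each frame into three column bands, pick the band that best matches the face, and send a one-byte command over serial. The scoring rule, command protocol, and port name are assumptions, not the authors' exact design.

```python
# Hedged sketch of the thirds-based HMI decision; scoring rule, command
# protocol, and port name are assumptions, not the authors' exact design.
import cv2
import numpy as np
import serial  # pyserial

port = serial.Serial("/dev/ttyUSB0", 9600)   # assumed microcontroller link
cap = cv2.VideoCapture(0)
commands = {0: b"L", 1: b"F", 2: b"R"}       # left / forward / right

while True:
    ok, frame = cap.read()
    if not ok:
        break
    bands = np.array_split(frame, 3, axis=1)  # three parts by columns
    # Mean band intensity as a crude proxy for where the face lies;
    # the paper instead matches pixels in a common (R, G, B) range.
    scores = [band.mean() for band in bands]
    port.write(commands[int(np.argmax(scores))])
```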

Keywords: electrical powered wheelchair (EPW), human machine interface (HMI), robotics, microcontroller

Procedia PDF Downloads 286
18273 Reaching Students Who “Don’t Like Writing” through Scenario Based Learning

Authors: Shahira Mahmoud Yacout

Abstract:

Writing is an essential skill in many vocational and academic environments, and notably in workplaces, yet many students perceive writing as tiring and boring, or even a "waste of time". Studies in the field of foreign languages suggest this may be due to the lack of connection between what is learned at university and what students encounter in real-life situations. Arabic learners felt they needed more language exposure to the context of their future professions. With this idea in mind, scenario-based learning (SBL) is reported to be an educational approach that motivates, engages, and stimulates students' interest and achieves the desired writing learning outcomes. In addition, researchers have suggested SBL as an instructional approach that develops and enhances students' skills through higher-order thinking and active learning. It is a subset of problem-based learning and case-based learning. The approach focuses on authentic rhetorical framing that reflects writing tasks in real-life situations. It works successfully when used to simulate real-world practices, providing context that reflects the types of situations professionals respond to in writing. Realistic scenarios customized to the course's learning objectives have been claimed to bridge the gap for students between theory and application. Within this context, scenario-based learning is thought to be an important approach for enhancing learners' writing skills and reflecting meaningful learning within authentic contexts. As an Arabic foreign language instructor, I noticed that students find it difficult to adapt writing styles to authentic writing contexts and to address different audiences and purposes. This observation is supported by studies claiming that AFL students face difficulties transferring writing skills to situations outside the classroom context. In addition, some of the textbooks for teaching Arabic as a foreign language lack topics that initiate higher-order thinking skills and stimulate learners to understand the setting and to create messages appropriate to different audiences, contexts, and purposes. The goals of this study are to 1) provide a rationale for using the scenario-based learning approach to improve AFL learners' writing skills, 2) demonstrate how to design and implement a scenario-based learning technique aligned with the writing course objectives, 3) present samples of the scenario-based approach implemented in an AFL writing class, and 4) emphasize the role of peer review, along with the instructor's feedback, in the process of developing the writing skill. Finally, this presentation highlights the importance of using the scenario-based learning approach in writing as a means of mirroring students' real-life situations and engaging them in planning, monitoring, and problem solving. This approach helped make writing an enjoyable experience that is clearly useful to students' future professional careers.

Keywords: meaningful learning, real life contexts, scenario based learning, writing skill

Procedia PDF Downloads 97
18272 Implementation of Fuzzy Version of Block Backward Differentiation Formulas for Solving Fuzzy Differential Equations

Authors: Z. B. Ibrahim, N. Ismail, K. I. Othman

Abstract:

Fuzzy Differential Equations (FDEs) play an important role in modelling many real-life phenomena. FDEs are used to model the behaviour of problems subject to the uncertainty and vague or imprecise information that constantly arise in mathematical models in various branches of science and engineering. These uncertainties have to be taken into account in order to obtain a more realistic model, and many such models are difficult, and sometimes impossible, to solve analytically. Thus, many authors have attempted to extend or modify existing numerical methods developed for solving Ordinary Differential Equations (ODEs) into fuzzy versions suitable for solving FDEs. In this paper, we propose a fuzzy version of a three-point block method based on Block Backward Differentiation Formulas (FBBDF) for the numerical solution of first-order FDEs. The three-point block FBBDF method, implemented with a uniform step size, produces three new approximations simultaneously at each integration step using the same back values. Newton iteration of the FBBDF is formulated, and the implementation is based on predictor and corrector formulas in PECE mode. For greater efficiency of the block method, the coefficients of the FBBDF are stored at the start of the program. The proposed FBBDF is validated through numerical results on some standard problems found in the literature, and comparisons are made with the existing fuzzy versions of the Modified Simpson and Euler methods in terms of the accuracy of the approximated solutions. The numerical results show that the FBBDF method performs better in terms of accuracy than the Euler method when solving FDEs.
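
In generic form (the concrete FBBDF coefficients are fixed by its derivation and are not reproduced here), a three-point block BDF step computes the three new values simultaneously, each through an implicit relation of the shape

\[
\sum_{j=-2}^{3} \alpha_{i,j}\, y_{n+j} \;=\; h\, \beta_i\, f\!\left(x_{n+i},\, y_{n+i}\right),
\qquad i = 1, 2, 3,
\]

so that y_{n+1}, y_{n+2}, y_{n+3} are obtained together from the shared back values y_{n-2}, y_{n-1}, y_n; in PECE mode, a predictor supplies starting values and the corrector system is solved by Newton iteration.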

Keywords: block, backward differentiation formulas, first order, fuzzy differential equations

Procedia PDF Downloads 314
18271 RGB-D SLAM Algorithm Based on Pixel-Level Dense Depth Map

Authors: Hao Zhang, Hongyang Yu

Abstract:

Scale uncertainty is a well-known challenge in visual SLAM. Because an RGB-D sensor provides depth information, RGB-D SLAM mitigates this scale uncertainty problem. However, due to physical hardware limitations, the depth map output by an RGB-D sensor usually contains large areas of missing depth values. This missing depth information affects the accuracy and robustness of RGB-D SLAM. To reduce these effects, this paper completes the missing areas of the depth map output by the RGB-D sensor and then fuses the completed dense depth map into ORB-SLAM2. By adding this process of obtaining pixel-level dense depth maps, a better RGB-D visual SLAM algorithm is obtained. To produce the dense depth maps, a deep learning model for indoor scenes is adopted. Experiments are conducted on public datasets and in real-world indoor environments. Experimental results show that the proposed SLAM algorithm is more robust than ORB-SLAM2.
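
Since the paper's deep depth-completion model is not specified in the abstract, the sketch below substitutes classical OpenCV inpainting over the missing-depth mask purely as an illustration of the completion step.

```python
# Stand-in for the depth-completion step: the paper uses a deep indoor-scene
# model (not specified here), so classical inpainting over the missing-depth
# mask is used purely for illustration.
import cv2
import numpy as np

depth = cv2.imread("depth.png", cv2.IMREAD_UNCHANGED)  # 16-bit RGB-D depth map
mask = (depth == 0).astype(np.uint8)                   # missing depth values

# cv2.inpaint needs 8-bit input, so scale down, inpaint, and scale back
scale = 255.0 / max(int(depth.max()), 1)
depth8 = cv2.convertScaleAbs(depth, alpha=scale)
filled8 = cv2.inpaint(depth8, mask, inpaintRadius=5, flags=cv2.INPAINT_TELEA)
dense_depth = (filled8.astype(np.float32) / scale).astype(depth.dtype)
# dense_depth would then be fed to ORB-SLAM2 in place of the raw sensor map
```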

Keywords: RGB-D, SLAM, dense depth, depth map

Procedia PDF Downloads 138
18270 Smart Lean Manufacturing in the Context of Industry 4.0: A Case Study

Authors: M. Ramadan, B. Salah

Abstract:

This paper introduces a framework for digitalizing lean manufacturing tools to enhance smart lean-based manufacturing environments, or Lean 4.0 manufacturing systems. The paper discusses the integration of lean tools with the powerful features of recent real-time data capturing systems, with the help of Information and Communication Technologies (ICT), to develop an intelligent real-time system for monitoring and controlling production operations against lean targets. This integration is represented in the Lean 4.0 system called Dynamic Value Stream Mapping (DVSM). Moreover, the paper introduces the use of Radio Frequency Identification (RFID) and ICT to smartly support lean tools and practices during daily production runs, keeping the lean system alive and effective. This work provides a practical description of how the lean tools 5S, standardized work, and poka-yoke can be digitalized and smartly monitored and controlled through DVSM. A framework for the three tools is discussed and put into practice at a German switchgear manufacturer.

Keywords: lean manufacturing, Industry 4.0, radio frequency identification, value stream mapping

Procedia PDF Downloads 223
18269 Generating Product Description with Generative Pre-Trained Transformer 2

Authors: Minh-Thuan Nguyen, Phuong-Thai Nguyen, Van-Vinh Nguyen, Quang-Minh Nguyen

Abstract:

Research on automatically generating descriptions for e-commerce products has gained increasing attention in recent years. However, the descriptions generated by existing systems are often uninformative and unattractive because of the lack of training datasets or the limitations of approaches that rely on templates or statistical methods. In this paper, we explore a method to generate product descriptions using the GPT-2 model. In addition, we apply text paraphrasing and task-adaptive pretraining techniques to improve the quality of the descriptions generated by the GPT-2 model. Experimental results show that our models outperform the baseline model in both automatic and human evaluation. In particular, our methods achieve promising results not only on the seen test set but also on the unseen test set.
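
A minimal generation sketch with the Hugging Face implementation of GPT-2 is shown below; the base checkpoint and prompt format are assumptions, since the paper's task-adaptively pretrained weights are not described here.

```python
# Sketch of description generation with Hugging Face GPT-2; the checkpoint
# and prompt format are assumptions, not the paper's fine-tuned setup.
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")

prompt = "Product: wireless ergonomic mouse\nDescription:"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(
    **inputs,
    max_new_tokens=80,
    do_sample=True,          # sampling tends to read less template-like
    top_p=0.92,
    temperature=0.8,
    pad_token_id=tokenizer.eos_token_id,
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```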

Keywords: GPT-2, product description, transformer, task-adaptive, language model, pretraining

Procedia PDF Downloads 194
18268 Concept, Design and Implementation of Power System Component Simulator Based on Thyristor Controlled Transformer and Power Converter

Authors: B. Kędra, R. Małkowski

Abstract:

This paper presents the Power System Component Simulator, a device designed for the LINTE^2 laboratory owned by Gdansk University of Technology in Poland. We first provide introductory information on the Power System Component Simulator and its capabilities. Then, the concept of the unit is presented: requirements for the unit are described, and the proposed and implemented functions are listed. Implementation details are given. The hardware structure is presented and described, along with information about the communication interface used, the data maintenance and storage solution, and the Simulink Real-Time features employed. A list and description of all measurements is provided, and the potential for modifications of the laboratory setup is evaluated. Lastly, the results of experiments performed using the Power System Component Simulator are presented, including simulation of underfrequency load shedding, frequency- and voltage-dependent characteristics of groups of load units, and time characteristics of groups of different load units in a chosen area.

Keywords: power converter, Simulink Real-Time, Matlab, load, tap controller

Procedia PDF Downloads 239
18267 Predicting Depth of Penetration in Abrasive Waterjet Cutting of Polycrystalline Ceramics

Authors: S. Srinivas, N. Ramesh Babu

Abstract:

This paper presents a model to predict the depth of penetration in polycrystalline ceramic material cut by an abrasive waterjet. The proposed model considers the interaction of a cylindrical jet with the target material in the upper region and neglects the role of threshold velocity in the lower region. The predictions of the proposed model are validated against experimental results obtained on silicon carbide (SiC) blocks.

Keywords: abrasive waterjet cutting, analytical modeling, ceramics, micro-cutting and inter-grannular cracking

Procedia PDF Downloads 303
18266 Cross-Knowledge Graph Relation Completion for Non-Isomorphic Cross-Lingual Entity Alignment

Authors: Yuhong Zhang, Dan Lu, Chenyang Bu, Peipei Li, Kui Yu, Xindong Wu

Abstract:

The Cross-Lingual Entity Alignment (CLEA) task aims to find aligned entities that refer to the same identity in two knowledge graphs (KGs) in different languages. It is an effective way to enhance the performance of data mining for KGs with scarce resources. In real-world applications, the neighborhood structures of the same entities in different KGs tend to be non-isomorphic, which makes the representations of entities carry diverse semantic information and thus poses a great challenge for CLEA. In this paper, we address this challenge from two perspectives. On the one hand, cross-KG relation completion rules are designed, with alignment constraints on entities and relations, to improve the topological isomorphism of the two KGs. On the other hand, a representation method combining isomorphic weights is designed to include more isomorphic semantics for counterpart entities, which benefits CLEA. Experiments show that our model can improve the isomorphism of two KGs and the alignment performance, especially for two non-isomorphic KGs.

Keywords: knowledge graphs, cross-lingual entity alignment, non-isomorphic, relation completion

Procedia PDF Downloads 117
18265 Identifying Diabetic Retinopathy Complication by Predictive Techniques in Indian Type 2 Diabetes Mellitus Patients

Authors: Faiz N. K. Yusufi, Aquil Ahmed, Jamal Ahmad

Abstract:

Predicting the risk of diabetic retinopathy (DR) in Indian type 2 diabetes patients is immensely necessary. India has the second largest number of diabetic patients after China, yet to the best of our knowledge not a single risk score for complications has been investigated. Diabetic retinopathy is a serious complication and is the leading cause of visual impairment across countries. Any type or form of DR was taken as the event of interest, be it mild, background, or grade I, II, III, or IV DR. A sample was determined and randomly collected from the Rajiv Gandhi Centre for Diabetes and Endocrinology, J.N.M.C., A.M.U., Aligarh, India. Collected variables include patient data such as sex, age, height, weight, body mass index (BMI), fasting blood sugar (BSF), post-prandial sugar (PP), glycosylated haemoglobin (HbA1c), diastolic blood pressure (DBP), systolic blood pressure (SBP), smoking, alcohol habits, total cholesterol (TC), triglycerides (TG), high density lipoprotein (HDL), low density lipoprotein (LDL), very low density lipoprotein (VLDL), physical activity, duration of diabetes, diet control, history of antihypertensive drug treatment, family history of diabetes, waist circumference, hip circumference, medications, central obesity, and history of DR. Cox proportional hazards regression is used to design risk scores for the prediction of retinopathy. Model calibration and discrimination are assessed with the Hosmer-Lemeshow test and the area under the receiver operating characteristic (ROC) curve. Overfitting and underfitting of the model are checked by applying regularization techniques, and the best method is selected among ridge, lasso, and elastic net regression. The optimal cut-off point is chosen by Youden's index. The five-year probability of DR is predicted by both the survival function and a two-state Markov chain model, and the better technique is identified. The risk scores developed can be applied by doctors, and by patients themselves for self-evaluation. Furthermore, the five-year probabilities can be used to forecast and manage patients' condition. This provides immense benefit in the real-world application of DR prediction in T2DM.
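
The modelling steps named above (a penalized Cox fit, ROC analysis with Youden's index, and five-year survival probabilities) can be sketched with the lifelines library; the file and column names are assumptions.

```python
# Sketch of the named steps: penalized Cox PH fit, Youden's-index cut-off,
# five-year DR probability. File and column names are assumptions.
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter
from sklearn.metrics import roc_curve

df = pd.read_csv("dr_cohort.csv")  # assumed: covariates + followup/event columns

# Ridge-style penalty; the study compared ridge, lasso, and elastic net
cph = CoxPHFitter(penalizer=0.1)
cph.fit(df, duration_col="followup_years", event_col="dr_event")

# Optimal cut-off on the risk score via Youden's J = TPR - FPR
risk = cph.predict_partial_hazard(df).to_numpy().ravel()
fpr, tpr, thresholds = roc_curve(df["dr_event"], risk)
optimal_cutoff = thresholds[np.argmax(tpr - fpr)]
print("optimal risk-score cut-off:", optimal_cutoff)

# Five-year DR probability from the fitted survival function
surv = cph.predict_survival_function(df, times=[5.0])
five_year_prob = 1.0 - surv.iloc[0]
```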

Keywords: Cox proportional hazard regression, diabetic retinopathy, ROC curve, type 2 diabetes mellitus

Procedia PDF Downloads 181
18264 Implications of Optimisation Algorithm on the Forecast Performance of Artificial Neural Network for Streamflow Modelling

Authors: Martins Y. Otache, John J. Musa, Abayomi I. Kuti, Mustapha Mohammed

Abstract:

The performance of an artificial neural network (ANN) is contingent on a host of factors, for instance, the network optimisation scheme. In view of this, the study examined the general implications of the ANN training optimisation algorithm for forecast performance. To this end, the Bayesian regularisation (Br), Levenberg-Marquardt (LM), and adaptive-learning gradient descent with momentum (GDM) algorithms were employed under two ANN structural configurations: (1) single-hidden-layer and (2) double-hidden-layer feedforward back propagation networks. Results revealed that the GDM optimisation algorithm, with its adaptive learning capability, generally used a shorter time in both training and validation phases than the LM and Br algorithms, though learning may not be fully consummated; this held in all instances, including the prediction of extreme flow conditions 1 day and 5 days ahead. In specific statistical terms, average model performance efficiency using the coefficient of efficiency (CE) statistic was Br: 98%, 94%; LM: 98%, 95%; and GDM: 96%, 96%, for training and validation phases respectively. However, on the basis of relative error distribution statistics (MAE, MAPE, and MSRE), GDM performed better than the others overall. Based on the findings, the adoption of ANNs for real-time forecasting should employ training algorithms without the computational overhead of LM, which requires computation of the Hessian matrix and protracted time, and suffers sensitivity to initial conditions; to this end, Br and other forms of gradient descent with momentum should be adopted, considering overall time expenditure, forecast quality, and mitigation of network overfitting. On the whole, it is recommended that evaluation should consider the implications of (i) data quality and quantity and (ii) transfer functions for overall network forecast performance.

Keywords: streamflow, neural network, optimisation, algorithm

Procedia PDF Downloads 150
18263 Machine Learning for Feature Selection and Classification of Systemic Lupus Erythematosus

Authors: H. Zidoum, A. AlShareedah, S. Al Sawafi, A. Al-Ansari, B. Al Lawati

Abstract:

Systemic lupus erythematosus (SLE) is an autoimmune disease with genetic and environmental components. SLE is characterized by a wide variability of clinical manifestations and a course frequently subject to unpredictable flares. Despite recent progress in classification tools, early diagnosis of SLE is still an unmet need for many patients. This study proposes an interpretable disease classification model that combines the efficient, high predictive performance of CatBoost with the model-agnostic interpretation tools of SHapley Additive exPlanations (SHAP). The CatBoost model was trained on a local cohort of 219 Omani patients with SLE as well as other control diseases. Furthermore, the SHAP library was used to generate individual explanations of the model's decisions and to rank clinical features by contribution. Overall, we achieved an AUC score of 0.945 and an F1-score of 0.92, and identified four clinical features (alopecia, renal disorders, cutaneous lupus, and hemolytic anemia), along with the patient's age, that were shown to have the greatest contribution to the prediction.
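
A minimal sketch of the CatBoost-plus-SHAP combination follows; the dataset file and column names are assumptions, not the study's cohort.

```python
# Sketch of the CatBoost + SHAP combination; file and column names are
# assumptions, not the study's actual cohort.
import pandas as pd
import shap
from catboost import CatBoostClassifier
from sklearn.metrics import f1_score, roc_auc_score
from sklearn.model_selection import train_test_split

df = pd.read_csv("sle_cohort.csv")             # assumed local-cohort file
X, y = df.drop(columns=["sle"]), df["sle"]
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

model = CatBoostClassifier(iterations=500, verbose=False, random_seed=0)
model.fit(X_tr, y_tr)
print("AUC:", roc_auc_score(y_te, model.predict_proba(X_te)[:, 1]))
print("F1 :", f1_score(y_te, model.predict(X_te)))

# Per-patient SHAP explanations and a global feature-contribution ranking
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X_te)
shap.summary_plot(shap_values, X_te)           # ranks features by contribution
```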

Keywords: feature selection, classification, systemic lupus erythematosus, model interpretation, SHAP, Catboost

Procedia PDF Downloads 78
18262 Enhancing the Performance of Automatic Logistic Centers by Optimizing the Assignment of Material Flows to Workstations and Flow Racks

Authors: Sharon Hovav, Ilya Levner, Oren Nahum, Istvan Szabo

Abstract:

In modern large-scale logistic centers (e.g., big automated warehouses), complex logistic operations performed by human staff (pickers) need to be coordinated with the operations of automated facilities (robots, conveyors, cranes, lifts, flow racks, etc.). The efficiency of advanced logistic centers strongly depends on optimizing picking technologies in synch with the facility/product layout, as well as on optimal distribution of material flows (products) in the system. The challenge is to develop a mathematical operations research (OR) tool that will optimize system cost-effectiveness. In this work, we propose a model that describes an automatic logistic center consisting of a set of workstations located at several galleries (floors), with each station containing a known number of flow racks. The requirements of each product and the working capacity of stations served by a given set of workers (pickers) are assumed to be predetermined. The goal of the model is to maximize system efficiency. The proposed model comprises two echelons. The first is the setting of the (optimal) number of workstations needed to create the total processing/logistic system, subject to picker capacities. The second echelon deals with the assignment of products to the workstations and flow racks, aimed at achieving maximal throughput of picked products over the entire system given picker capacities and budget constraints. The solutions to the problems at the two echelons interact to balance the overall load in the flow racks and maximize overall efficiency. We have developed an operations research model within each echelon. In the first echelon, the problem of calculating the optimal number of workstations is formulated as a non-standard bin-packing problem with capacity constraints for each bin. The problem arising in the second echelon is presented as a constrained product-workstation-flow rack assignment problem with non-standard mini-max criteria, in which the workload maximum is calculated across all workstations in the center and the exterior minimum is calculated across all possible product-workstation-flow rack assignments. The OR problems arising in each echelon are proved to be NP-hard. Consequently, we find and develop heuristic and approximation solution algorithms based on exploiting and improving local optima. The logistic center (LC) model considered in this work is highly dynamic and is recalculated periodically based on updated demand forecasts that reflect market trends, technological changes, seasonality, and the introduction of new items. The suggested two-echelon approach and the min-max balancing scheme are shown to work effectively on illustrative examples and real-life logistic data.
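
As a toy illustration of the first echelon, the sketch below packs product workloads into the fewest capacity-limited stations with the classical first-fit-decreasing heuristic; the paper's own algorithms are described only as local-optimum-improving heuristics, so this is a generic stand-in.

```python
# Classical first-fit-decreasing heuristic as a stand-in for the first
# echelon (number of workstations under picker-capacity constraints);
# the authors' actual algorithms are not reproduced here.
def first_fit_decreasing(workloads, capacity):
    """Pack product workloads into the fewest stations of given capacity."""
    stations = []                      # remaining capacity per open station
    assignment = []                    # (workload, station index) pairs
    for w in sorted(workloads, reverse=True):
        for i, free in enumerate(stations):
            if w <= free:              # first open station that fits
                stations[i] -= w
                assignment.append((w, i))
                break
        else:                          # no open station fits: open a new one
            stations.append(capacity - w)
            assignment.append((w, len(stations) - 1))
    return len(stations), assignment

n_stations, plan = first_fit_decreasing([7, 5, 4, 4, 3, 2, 2, 1], capacity=10)
print(n_stations, plan)   # -> 3 stations for this toy instance
```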

Keywords: logistics center, product-workstation, assignment, maximum performance, load balancing, fast algorithm

Procedia PDF Downloads 225
18261 Use of PACER Application as Physical Activity Assessment Tool: Results of a Reliability and Validity Study

Authors: Carine Platat, Fatima Qshadi, Ghofran Kayed, Nour Hussein, Amjad Jarrar, Habiba Ali

Abstract:

Nowadays, smartphones are very popular, offering a variety of free, easy-to-use applications, among which are step counters and fitness tests. The number of users is huge, making such applications a potentially efficient new strategy to encourage people to become more active. Nonetheless, data on their reliability and validity are very scarce and, when available, are often negative and contradictory. Besides, weight status, which is likely to introduce a bias into physical activity assessment, was not often considered. PACER is one such free step-counter application. Even though it is one of the best-rated free applications among users, it has never been tested for reliability and validity. Prior to any use of PACER, this remains to be investigated. The objective of this work is to investigate the reliability and validity of the smartphone application PACER in measuring the number of steps and in assessing cardiorespiratory fitness via the 6-minute walking test. Twenty overweight or obese students (10 male and 10 female), aged between 18 and 25 years, were recruited at United Arab Emirates University. Reliability and validity were tested in real-life conditions and in controlled conditions on a treadmill. Test-retest experiments were done with PACER on two days separated by a week in real-life conditions (24 hours each time) and in controlled conditions (30 minutes on a treadmill at 3 km/h). Validity was tested against the OMRON pedometer in the same conditions. During the treadmill test, video was recorded, and step counts were compared between PACER, the pedometer, and the video. The validity of PACER in estimating cardiorespiratory fitness (VO2max) as part of the 6-minute walking test (6MWT) was studied against the 20 m shuttle running test. Reliability was studied by calculating intraclass correlation coefficients (ICC) with 95% confidence intervals (95%CI) and by Bland-Altman plots. Validity was studied by calculating Spearman correlation coefficients (rho) and Bland-Altman plots. PACER reliability was good in both males and females in real-life conditions (p≤10⁻³) but only in females in controlled conditions (p=0.01). PACER was valid against the OMRON pedometer in males and females in real-life conditions (rho=0.94, p≤10⁻³; rho=0.64, p=0.01, in males and females respectively). In controlled conditions, PACER was not valid against the pedometer, but it was valid against video in females (rho=0.72, p≤10⁻³). PACER was valid against the shuttle run test in males and females (rho=0.66, p=0.01; rho=0.51, p=0.04) for estimating VO2max. This study provides data on the reliability and validity of PACER in overweight or obese male and female young adults. Overall, PACER was shown to be reliable and valid in real-life conditions for counting steps and assessing fitness in overweight or obese males and females. This supports the use of PACER to assess and promote physical activity in clinical follow-up and community interventions.
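
The statistics named above (test-retest ICC, Spearman validity, Bland-Altman agreement) can be computed as in the sketch below; the data-frame layout is an assumption.

```python
# Sketch of the named statistics; the data-frame layout (one row per subject
# per day, with PACER and OMRON step counts) is an assumption.
import matplotlib.pyplot as plt
import pandas as pd
import pingouin as pg
from scipy.stats import spearmanr

df = pd.read_csv("pacer_steps.csv")  # columns: subject, day, pacer_steps, omron_steps

# Test-retest reliability: ICC across the two measurement days
icc = pg.intraclass_corr(data=df, targets="subject", raters="day",
                         ratings="pacer_steps")
print(icc[["Type", "ICC", "CI95%"]])

# Validity against the OMRON pedometer
rho, p = spearmanr(df["pacer_steps"], df["omron_steps"])
print(f"Spearman rho = {rho:.2f}, p = {p:.3f}")

# Bland-Altman: difference vs. mean with 95% limits of agreement
mean = (df["pacer_steps"] + df["omron_steps"]) / 2
diff = df["pacer_steps"] - df["omron_steps"]
plt.scatter(mean, diff)
for y in (diff.mean(), diff.mean() + 1.96 * diff.std(), diff.mean() - 1.96 * diff.std()):
    plt.axhline(y, linestyle="--")
plt.xlabel("mean step count"); plt.ylabel("PACER minus OMRON")
plt.show()
```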

Keywords: smartphone application, pacer, reliability, validity, steps, fitness, physical activity

Procedia PDF Downloads 450
18260 Application of the Building Information Modeling Planning Approach to the Factory Planning

Authors: Peggy Näser

Abstract:

Factory planning is a systematic, objective-oriented process for planning a factory, structured into a sequence of phases, each of which depends on the preceding phase and makes use of particular methods and tools, extending from the setting of objectives to the start of production. The digital factory, on the other hand, is the generic term for a comprehensive network of digital models, methods, and tools, including simulation and 3D visualisation, integrated by a continuous data management system. Its aim is the holistic planning, evaluation, and ongoing improvement of all the main structures, processes, and resources of the real factory in conjunction with the product. Digital factory planning has already become established in factory planning. The application of Building Information Modeling, by contrast, has not yet been established in factory planning; it has been used predominantly in the planning of public buildings. Furthermore, the concept is limited to the planning of the buildings and does not include the planning of factory equipment (machines, technical equipment) and its interfaces to the building. BIM is a cooperative method of working in which the information and data relevant to a building's lifecycle are consistently recorded, managed, and exchanged in transparent communication between the involved parties on the basis of digital models of the building. Both the planning approach of Building Information Modeling and the methodical approach of the digital factory are based on the use of a comprehensive data model. It is therefore necessary to examine how the Building Information Modeling approach can be extended in the context of factory planning so that equipment planning, as well as building planning, can be integrated in a common digital model. For this, a number of perspectives have to be investigated: the equipment perspective, including the tools used to implement a comprehensive digital planning process; the communication perspective between planners of different fields; the legal perspective, concerning legal certainty in each country; and the quality perspective, under which quality criteria are defined and the planning is evaluated. The individual perspectives are examined and illustrated in the article. An approach model for the integration of factory planning into the BIM approach is developed, in particular for the integrated planning of equipment and buildings and for continuous digital planning. For this purpose, the individual factory planning phases are detailed in the sense of integrating the BIM approach, and a comprehensive software tool concept is presented. In addition, the prerequisites required for this integrated planning are set out. With the help of the newly developed approach, better coordination between equipment and buildings is to be achieved, the continuity of digital factory planning and the data quality are improved, and expensive implementation errors are avoided.

Keywords: building information modeling, digital factory, digital planning, factory planning

Procedia PDF Downloads 262
18259 Risk Assessment of Flood Defences by Utilising Condition Grade Based Probabilistic Approach

Authors: M. Bahari Mehrabani, Hua-Peng Chen

Abstract:

Management and maintenance of coastal defence structures over their expected life cycle have become a real challenge for decision makers and engineers. Accurate evaluation of the current condition and future performance of flood defence structures is essential for effective, practical maintenance strategies based on available field inspection data. Moreover, as coastal defence structures age, it becomes more challenging to implement maintenance and management plans that avoid structural failure. Condition inspection data are therefore essential for assessing damage and forecasting the deterioration of ageing flood defence structures in order to keep them in an acceptable condition. Inspection data for flood defence structures are often collected using discrete visual condition rating schemes. To evaluate the future condition of a structure, a probabilistic deterioration model is needed; however, existing deterioration models may not provide reliable long-term predictions of performance deterioration due to uncertainties. To tackle this limitation, a time-dependent condition-based model associated with transition probabilities needs to be developed on the basis of the condition grade scheme for flood defences. This paper presents a probabilistic method for predicting future performance deterioration of coastal flood defence structures based on condition grading inspection data and deterioration curves estimated by expert judgement. In condition-based deterioration modelling, the main task is to estimate transition probability matrices. The deterioration process related to the transition states is modelled as a Markov chain, and a reliability-based approach is used to estimate the probability of structural failure. Visual inspection data according to the United Kingdom Condition Assessment Manual are used to obtain the initial condition grade curve of the coastal flood defences. The initial curves are then modified to develop transition probabilities through non-linear-regression-based optimisation algorithms. Monte Carlo simulations are then used to evaluate the future performance of the structure on the basis of the estimated transition probabilities. Finally, a case study is given to demonstrate the applicability of the proposed method under no-maintenance and medium-maintenance scenarios. Results show that the proposed method can provide an effective predictive model for various situations in terms of available condition grading data. The proposed model also provides useful information on the time-dependent probability of failure in coastal flood defences.
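
A hedged sketch of the condition-grade Markov model follows: propagate simulated grade paths through a transition matrix and estimate the time-dependent failure probability by Monte Carlo. The matrix below is a placeholder, not calibrated to the UK Condition Assessment Manual data.

```python
# Hedged sketch: simulate condition-grade paths through a Markov transition
# matrix and estimate time-dependent failure probability by Monte Carlo.
# The matrix is a placeholder, not calibrated to UK CAM inspection data.
import numpy as np

# Yearly transition probabilities between condition grades 1 (best) .. 5 (failed)
P = np.array([
    [0.90, 0.10, 0.00, 0.00, 0.00],
    [0.00, 0.85, 0.15, 0.00, 0.00],
    [0.00, 0.00, 0.80, 0.20, 0.00],
    [0.00, 0.00, 0.00, 0.75, 0.25],
    [0.00, 0.00, 0.00, 0.00, 1.00],  # grade 5 (failure) is absorbing
])
CUM = P.cumsum(axis=1)

def failure_probability(years, n_sims=100_000, seed=0):
    rng = np.random.default_rng(seed)
    grades = np.zeros(n_sims, dtype=int)             # all start at grade 1
    for _ in range(years):
        u = rng.random(n_sims)
        grades = (u[:, None] > CUM[grades]).sum(axis=1)  # sample next grades
    return float(np.mean(grades == 4))               # fraction that failed

for t in (10, 20, 30, 50):
    print(f"P(failure within {t:2d} years) ~ {failure_probability(t):.3f}")
```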

Keywords: condition grading, flood defense, performance assessment, stochastic deterioration modelling

Procedia PDF Downloads 230