Search results for: damage prediction models
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 10074

9564 FT-NIR Method to Determine Moisture in Gluten Free Rice-Based Pasta during Drying

Authors: Navneet Singh Deora, Aastha Deswal, H. N. Mishra

Abstract:

Pasta is one of the most widely consumed food products around the world. Rapid determination of the moisture content in pasta will assist food processors in providing online quality control of pasta during large-scale production. A rapid Fourier transform near-infrared (FT-NIR) method was developed for determining moisture content in pasta. A calibration set of 150 samples, a validation set of 30 samples and a prediction set of 25 samples of pasta were used. The diffuse reflection spectra of different types of pasta were measured by an FT-NIR analyzer in the 4,000-12,000 cm⁻¹ spectral range. The calibration and validation sets were designed for the development and evaluation of the method's adequacy over a moisture content range of 10 to 15 percent (wet basis) of the pasta. The prediction models, based on partial least squares (PLS) regression, were developed in the near-infrared region. Conventional criteria such as R², the root mean square error of cross-validation (RMSECV), the root mean square error of estimation (RMSEE) and the number of PLS factors were considered for selecting among three pre-processing methods (vector normalization, minimum-maximum normalization and multiplicative scatter correction). Spectra of the pasta samples were treated with the different mathematical pre-treatments before being used to build models relating the spectral information to moisture content. The moisture content in pasta predicted by the FT-NIR method correlated very well with the values determined by traditional methods (R² = 0.983), which clearly indicates that FT-NIR can be used as an effective tool for rapid determination of moisture content in pasta. The best calibration model was developed with min-max normalization (MMN) spectral pre-processing (R² = 0.9775), and a maximum coefficient of determination (R²) of 0.9875 was obtained for the calibration model developed.
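As a rough illustration of the calibration workflow described above (not the authors' code), the sketch below applies min-max normalization to stand-in spectra and fits a PLS regression evaluated by cross-validation; the number of PLS factors and all data are placeholders.

```python
# Illustrative sketch: PLS calibration of moisture content from NIR spectra
# with min-max normalization (MMN) pre-processing. All data are synthetic.
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import cross_val_predict

rng = np.random.default_rng(0)
X = rng.normal(size=(150, 200))        # stand-in for 150 calibration spectra
y = rng.uniform(10, 15, size=150)      # moisture content, % (wet basis)

# MMN: scale each spectrum to the [0, 1] range.
x_min = X.min(axis=1, keepdims=True)
x_max = X.max(axis=1, keepdims=True)
X_mmn = (X - x_min) / (x_max - x_min)

pls = PLSRegression(n_components=8)    # the number of PLS factors is a tuning choice
y_cv = cross_val_predict(pls, X_mmn, y, cv=10).ravel()
rmsecv = np.sqrt(np.mean((y - y_cv) ** 2))
print(f"RMSECV = {rmsecv:.3f} % moisture")
```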

Keywords: FT-NIR, pasta, moisture determination, food engineering

Procedia PDF Downloads 240
9563 Relationship of Oxidative Stress to Elevated Homocysteine and DNA Damage in Coronary Artery Disease Patients

Authors: Shazia Anwer Bukhari, Madiha Javeed Ghani, Muhammad Ibrahim Rajoka

Abstract:

Objective: Biochemical, environmental, physical and genetic factors have a strong effect on the development of coronary artery disease (CAD). Plasma homocysteine (Hcy) level and DNA damage play a pivotal role in its development and progression. The aim of this study was to investigate the predictive strength of oxidative stress, clinical biomarkers and total antioxidant status (TAS) in CAD patients, and to find the correlation of homocysteine, total oxidative stress (TOS) and oxidative DNA damage with other clinical parameters. Methods: Sixty confirmed patients with CAD and 60 healthy individuals as controls were included in this study. Different clinical and laboratory parameters were studied in blood samples obtained from patients and control subjects using commercially available biochemical kits and statistical software. Results: Compared with healthy individuals, CAD patients had significantly higher concentrations of indices of oxidative stress: homocysteine (P=0.0001), total oxidative stress (TOS) (P=0.0001), serum cholesterol (P=0.04), low density lipoprotein cholesterol (LDL) (P=0.01), high density lipoprotein cholesterol (HDL) (P=0.0001), and malondialdehyde (MDA) (P=0.001). Plasma homocysteine level and oxidative DNA damage were positively correlated with cholesterol, triglycerides, systolic blood pressure, urea, total protein and albumin (P values = 0.05). Both Hcy and oxidative DNA damage were negatively correlated with TAS and proteins. Conclusion: Coronary artery disease patients had a significant increase in homocysteine level and DNA damage due to increased oxidative stress. In conclusion, our study shows a significant increase in lipid peroxidation, TOS, homocysteine and DNA damage in the erythrocytes of patients with CAD. A significant decrease in HDL-C and TAS levels was observed only in CAD patients. Therefore, these biomarkers may be useful in the diagnosis of patients with CAD and play an important role in the pathogenesis of CAD.

Keywords: antioxidants, coronary artery disease, DNA damage, homocysteine, oxidative stress, malondialdehyde, 8-Hydroxy-2’deoxyguanosine

Procedia PDF Downloads 471
9562 Environmental Quality in Urban Areas: Legal Aspect and Institutional Dimension: A Case Study of Algeria

Authors: Youcef Lakhdar Hamina

Abstract:

In order to address the specificity of ecological damage, it is imperative to assert the procedural and objective aspects of liability, which leads us to analyse current trends based on the development of preventive civil liability grounded in the precautionary principle. Our research focuses on the instruments of environmental protection in urban areas based on two complementary aspects which appear contradictory and refer directly to the institutional dimensions: - The preventive aspect: considered the main objective of environmental policy, it highlights the different legal mechanisms for environmental protection and the role of the administration in their implementation (environmental planning, tax incentives, modes of participation of all actors, etc.). - The curative-repressive aspect: considered an approach for the identification of ecological damage and the forms of reparation (spatial and temporal responsibility), given the impossibility of predicting with rigor and precision the occurrence of ecological damage, which cannot be avoided.

Keywords: environmental law, environmental taxes, environmental damage, eco responsibility, precautionary principle, environmental management

Procedia PDF Downloads 390
9561 Application of Artificial Neural Network for Prediction of Load-Haul-Dump Machine Performance Characteristics

Authors: J. Balaraju, M. Govinda Raj, C. S. N. Murthy

Abstract:

Every industry constantly looks for enhancement of its day-to-day production and productivity. This is possible only by maintaining the men and machinery at an adequate level. Prediction of performance characteristics plays an important role in performance evaluation of the equipment. Analytical and statistical approaches take considerably more time to solve complex problems such as performance estimation compared with software-based approaches. Keeping this in view, the present study deals with Artificial Neural Network (ANN) modelling of a Load-Haul-Dump (LHD) machine to predict performance characteristics such as reliability, availability and preventive maintenance (PM). A feed-forward back-propagation ANN trained with the Levenberg-Marquardt (LM) algorithm has been used for the modelling. The performance characteristics were computed using Isograph Reliability Workbench 13.0 software. These computed values were validated against the predicted output responses of the ANN models. Further, recommendations are given to the industry based on the performed analysis for improvement of equipment performance.
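The following is a minimal sketch of the modelling idea, not the study's model: a feed-forward network mapping illustrative LHD operating variables to performance outputs. Since scikit-learn does not provide Levenberg-Marquardt training, L-BFGS is used here as a stand-in solver.

```python
# Minimal sketch: a feed-forward ANN mapping assumed LHD operating variables
# to performance characteristics. Inputs, targets and network size are illustrative.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(1)
X = rng.uniform(size=(200, 4))                  # e.g. machine age, utilisation, MTBF, MTTR (assumed)
y = np.column_stack([                           # e.g. reliability and availability targets (assumed)
    0.9 - 0.3 * X[:, 0] + 0.1 * X[:, 2],
    0.95 - 0.2 * X[:, 3],
])

model = make_pipeline(
    StandardScaler(),
    MLPRegressor(hidden_layer_sizes=(10,), solver="lbfgs", max_iter=2000, random_state=0),
)
model.fit(X, y)
print(model.predict(X[:3]))                     # predicted reliability / availability
```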

Keywords: load-haul-dump, LHD, artificial neural network, ANN, performance, reliability, availability, preventive maintenance

Procedia PDF Downloads 124
9560 Runoff Simulation by Using WetSpa Model in Garmabrood Watershed of Mazandaran Province, Iran

Authors: Mohammad Reza Dahmardeh Ghaleno, Mohammad Nohtani, Saeedeh Khaledi

Abstract:

Hydrological models are applied to the simulation and prediction of floods in watersheds. WetSpa is a distributed, continuous and physically based model with a daily or hourly time step that describes precipitation, runoff and evapotranspiration processes for both simple and complex contexts. This model uses a modified rational method for runoff calculation. In this model, runoff is routed along the flow path using the diffusion-wave equation, which depends on the slope, velocity and flow route characteristics. The Garmabrood watershed is located in Mazandaran province in Iran, between coordinates 53° 10´ 55" to 53° 38´ 20" E and 36° 06´ 45" to 36° 25´ 30" N. The area of the catchment is about 1133 km² and elevations in the catchment range from 213 m at the outlet to 3136 m, with an average slope of 25.77%. Results of the simulations show a good agreement between calculated and measured hydrographs at the outlet of the basin. Based on the Nash-Sutcliffe model efficiency coefficient for the calibration period, the model estimated the daily hydrographs and the maximum flow rate with accuracies of up to 61% and 83.17%, respectively.
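For reference, the Nash-Sutcliffe efficiency used to evaluate the simulated hydrographs can be computed as in the short sketch below (the discharge values are illustrative, not the study's data).

```python
# Nash-Sutcliffe model efficiency coefficient for comparing simulated and observed hydrographs.
import numpy as np

def nash_sutcliffe(observed, simulated):
    """NSE = 1 - sum((obs - sim)^2) / sum((obs - mean(obs))^2)."""
    observed, simulated = np.asarray(observed), np.asarray(simulated)
    return 1.0 - np.sum((observed - simulated) ** 2) / np.sum((observed - observed.mean()) ** 2)

q_obs = np.array([12.0, 30.0, 55.0, 41.0, 22.0])   # observed daily discharge, m^3/s (illustrative)
q_sim = np.array([10.0, 33.0, 50.0, 45.0, 20.0])   # simulated discharge (illustrative)
print(f"NSE = {nash_sutcliffe(q_obs, q_sim):.2f}")
```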

Keywords: watershed simulation, WetSpa, runoff, flood prediction

Procedia PDF Downloads 318
9559 Resale Housing Development Board Price Prediction Considering Covid-19 through Sentiment Analysis

Authors: Srinaath Anbu Durai, Wang Zhaoxia

Abstract:

Twitter sentiment has been used as a predictor of price values or trends in both the stock market and the housing market. The pioneering works in this stream of research drew upon behavioural economics to show that sentiment or emotions impact economic decisions. The latest works in this stream focus on the algorithm used as opposed to the data used. A literature review of works in this stream through the lens of the data used shows that there is a paucity of work considering the impact of sentiments caused by an external factor on either the stock or the housing market. This is despite an abundance of works in behavioural economics showing that sentiment or emotions caused by an external factor impact economic decisions. To address this gap, this research studies the impact of Twitter sentiment pertaining to the Covid-19 pandemic on resale Housing Development Board (HDB) apartment prices in Singapore. It leverages SNSCRAPE to collect tweets pertaining to Covid-19; the lexicon-based tools VADER and TextBlob are used for sentiment analysis, Granger causality is used to examine the relationship between Covid-19 cases and the sentiment score, and neural networks are leveraged as prediction models. Twitter sentiment pertaining to Covid-19 as a predictor of HDB prices in Singapore is studied in comparison with the traditional predictors of housing prices, i.e., the structural and neighbourhood characteristics. The results indicate that using Twitter sentiment pertaining to Covid-19 leads to better prediction than using only the traditional predictors, and that it performs better as a predictor than two of the traditional predictors. Hence, Twitter sentiment pertaining to an external factor should be considered as important as traditional predictors. This paper demonstrates the real-world economic applications of sentiment analysis of Twitter data.
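A rough sketch of this kind of pipeline is shown below; it is not the authors' code, the time series are synthetic, and the single tweet only illustrates VADER scoring.

```python
# Sketch: score text with the lexicon-based VADER tool, then use a Granger
# causality test to ask whether daily Covid-19 case counts help explain the
# daily sentiment series. All series here are synthetic placeholders.
import numpy as np
import pandas as pd
from vaderSentiment.vaderSentiment import SentimentIntensityAnalyzer
from statsmodels.tsa.stattools import grangercausalitytests

analyzer = SentimentIntensityAnalyzer()
print(analyzer.polarity_scores("lockdown extended, tough times")["compound"])  # per-tweet score

rng = np.random.default_rng(0)
dates = pd.date_range("2020-04-01", periods=90, freq="D")
cases = pd.Series(rng.poisson(40, size=90).astype(float), index=dates)
# Synthetic daily mean sentiment that reacts to the previous day's case count.
sentiment = -0.01 * cases.shift(1).fillna(cases.iloc[0]) + rng.normal(0, 0.05, 90)

data = pd.concat([sentiment, cases], axis=1)
# Null hypothesis: the second column (cases) does not Granger-cause the first (sentiment).
grangercausalitytests(data.values, maxlag=3)
```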

Keywords: sentiment analysis, Covid-19, housing price prediction, tweets, social media, Singapore HDB, behavioral economics, neural networks

Procedia PDF Downloads 89
9558 Applying the Regression Technique for Prediction of the Acute Heart Attack

Authors: Paria Soleimani, Arezoo Neshati

Abstract:

Myocardial infarction is one of the leading causes of death in the world. Some of these deaths occur even before the patient reaches the hospital. Myocardial infarction occurs as a result of impaired blood supply. Because most of these deaths are due to coronary artery disease, awareness of the warning signs of a heart attack is essential. Some heart attacks are sudden and intense, but most of them start slowly, with mild pain or discomfort; early detection and successful treatment of these symptoms are therefore vital to save patients. Hence, the importance and usefulness of designing a system to assist physicians in the early diagnosis of acute heart attacks is obvious. The purpose of this study is to determine how well a predictive model would perform based only on patient-reportable clinical history factors, without using diagnostic tests or physical exams. This type of prediction model might have applications outside of the hospital setting, giving accurate advice to patients and influencing them to seek care in appropriate situations. For this purpose, data were collected on 711 heart patients in Iranian hospitals, and 28 clinical factors reportable by patients were studied. Three logistic regression models were built on the basis of the 28 features to predict the risk of heart attack. The best logistic regression model in terms of performance had a C-index of 0.955 and an accuracy of 94.9%. The variables severe chest pain, back pain, cold sweats, shortness of breath, nausea, and vomiting were selected as the main features.
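A minimal sketch of this kind of model on synthetic data (not the study's 711-patient dataset) is shown below: a logistic regression on binary symptom indicators, evaluated by the C-index (area under the ROC curve) and accuracy.

```python
# Sketch: logistic regression on patient-reportable symptoms with C-index and
# accuracy as performance measures. Data are simulated for illustration only.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score, accuracy_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(2)
n = 711
X = rng.integers(0, 2, size=(n, 6))   # severe chest pain, back pain, cold sweats,
                                      # shortness of breath, nausea, vomiting (0/1)
logit = X @ np.array([2.0, 0.8, 1.2, 1.0, 0.6, 0.5]) - 2.5
y = rng.binomial(1, 1.0 / (1.0 + np.exp(-logit)))

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
clf = LogisticRegression().fit(X_tr, y_tr)
print("C-index:", roc_auc_score(y_te, clf.predict_proba(X_te)[:, 1]))
print("Accuracy:", accuracy_score(y_te, clf.predict(X_te)))
```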

Keywords: coronary heart disease, acute heart attacks, prediction, logistic regression

Procedia PDF Downloads 434
9557 General Mathematical Framework for Analysis of Cattle Farm System

Authors: Krzysztof Pomorski

Abstract:

In the given work we present a universal mathematical framework for modeling a cattle farm system that can formulate and validate various hypotheses to be tested against experimental data. The presented work is preliminary, but it is expected to be a valid tool for future deeper analysis that can result in a new class of prediction methods allowing early detection of cow diseases as well as assessment of cow performance. The presented work is therefore relevant to agricultural models and to machine learning. It also opens the possibility of incorporating a certain class of biological models necessary for modeling cow behavior and farm performance, which might include the impact of the environment on the farm system. Particular attention is paid to the model of coupled oscillators, which is the basic building block hypothesized to yield a model showing certain periodic or quasiperiodic behavior.
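As an illustration of the coupled-oscillator building block mentioned above, the following sketch integrates two phase oscillators with Kuramoto-type coupling; the frequencies and coupling strength are purely illustrative, not parameters from the framework.

```python
# Sketch: two coupled phase oscillators (Kuramoto-type coupling) as the basic
# periodic/quasiperiodic building block. Parameters are illustrative assumptions.
import numpy as np
from scipy.integrate import solve_ivp

omega = np.array([1.0, 1.3])   # natural frequencies (e.g. two farm rhythms, assumed)
K = 0.4                        # coupling strength

def coupled_oscillators(t, theta):
    th1, th2 = theta
    return [omega[0] + K * np.sin(th2 - th1),
            omega[1] + K * np.sin(th1 - th2)]

sol = solve_ivp(coupled_oscillators, (0.0, 50.0), [0.0, np.pi / 2], max_step=0.1)
print(sol.y[:, -1])            # final phases; phase-locking appears for large enough K
```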

Keywords: coupled ordinary differential equations, cattle farm system, numerical methods, stochastic differential equations

Procedia PDF Downloads 129
9556 Effect of Tensile Strain on Microstructure of Irradiated Core Internal Material

Authors: Hygreeva Kiran Namburi, Anna Hojna, Edita Lecianova, Fencl Zdenek

Abstract:

Irradiation Assisted Stress Corrosion Cracking (IASCC) is one of the most significant environmental degradation mechanisms in internal components made from austenitic stainless steel. This mechanism is still not fully understood, and there are no suitable criteria for predicting the damage during operation. In this work, core basket material, 08Ch18N10T austenitic stainless steel, acquired from the decommissioned NPP Nord / Greifswald Unit 1, a VVER 440-230 type plant operated for 15 years, and irradiated to 5.2 dpa, is studied. This material was tensile tested at two different test temperatures and strain rates in air, and at the elevated temperature in a water environment. SEM observations of the fracture surface documented ductile fracture of the samples tested in air, but areas of IASCC in those tested in water. This paper focuses on the microscopic examination results from the mechanically tested samples to determine the underlying IASCC physical damage process. TEM observations of thin foils made from the gauge sections close to the fractured surface of the specimens aimed to find differences in the interaction of dislocations and grain boundaries owing to the different test conditions.

Keywords: irradiation assisted stress corrosion cracking, core basket material, SEM observations of the fracture surface, microscopic examination results

Procedia PDF Downloads 334
9555 Deadline Missing Prediction for Mobile Robots through the Use of Historical Data

Authors: Edwaldo R. B. Monteiro, Patricia D. M. Plentz, Edson R. De Pieri

Abstract:

Mobile robotics is gaining an increasingly important role in modern society. Several tasks that are potentially dangerous or laborious for humans are assigned to mobile robots, which are increasingly capable. Many of these tasks need to be performed within a specified period, i.e., meet a deadline. Missing the deadline can result in financial and/or material losses. Mechanisms for predicting deadline misses are fundamental because corrective actions can be taken to avoid or minimize the resulting losses. In this work we propose a simple but reliable deadline-missing prediction mechanism for mobile robots based on historical data, and we use the Pioneer 3-DX robot, one of the most popular robots in academia, for experiments and simulations.

Keywords: deadline missing, historical data, mobile robots, prediction mechanism

Procedia PDF Downloads 382
9554 Useful Lifetime Prediction of Rail Pads for High Speed Trains

Authors: Chang Su Woo, Hyun Sung Park

Abstract:

Useful lifetime evaluation of rail pads is very important in the design procedure to assure safety and reliability. It is, therefore, necessary to establish a suitable criterion for the replacement period of rail pads. In this study, we performed property and accelerated heat aging tests of rail pads considering degradation factors and all environmental conditions, including operation, and then derived a lifetime prediction equation from the changes in hardness, thickness, and static spring constant in the Arrhenius plot to establish how to estimate the aging of rail pads. With the useful lifetime prediction equation, the lifetime of e-clip pads was 2.5 years when the change in hardness was 10% at 25°C, and that of f-clip pads was 1.7 years. When the change in thickness was 10%, the lifetime of both e-clip and f-clip pads was 2.6 years. The results obtained in this study for estimating the useful lifetime of rail pads for high speed trains can be used to determine the maintenance and replacement schedule for rail pads.
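The Arrhenius-plot extrapolation described above can be sketched as follows; the ageing times, temperatures and the resulting activation energy are illustrative placeholders, not the paper's measurements.

```python
# Sketch of an Arrhenius-type lifetime extrapolation from accelerated heat-ageing
# data: fit ln(time-to-criterion) versus 1/T and extrapolate to the service temperature.
import numpy as np

R = 8.314                      # gas constant, J/(mol*K)
T_service = 25 + 273.15        # service temperature, K

# Assumed times (hours) to reach a 10 % hardness change at the ageing temperatures.
T_aged = np.array([70.0, 85.0, 100.0]) + 273.15
t_aged = np.array([2000.0, 700.0, 250.0])

# Arrhenius plot: ln(t) is linear in 1/T, so fit ln(t) = intercept + slope * (1/T).
slope, intercept = np.polyfit(1.0 / T_aged, np.log(t_aged), 1)
activation_energy = slope * R                          # J/mol, implied by the fit
t_service = np.exp(intercept + slope / T_service)      # extrapolated lifetime, hours
print(f"Ea ≈ {activation_energy / 1e3:.0f} kJ/mol, lifetime ≈ {t_service / 8760:.1f} years at 25 °C")
```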

Keywords: rail pads, accelerated test, Arrhenius plot, useful lifetime prediction, mechanical engineering design

Procedia PDF Downloads 301
9553 Implementation of Deep Neural Networks for Pavement Condition Index Prediction

Authors: M. Sirhan, S. Bekhor, A. Sidess

Abstract:

In-service pavements deteriorate with time due to traffic wheel loads, environment, and climate conditions. Pavement deterioration leads to a reduction in their serviceability and structural behavior. Consequently, proper maintenance and rehabilitation (M&R) are necessary actions to keep the in-service pavement network at the desired level of serviceability. Due to resource and financial constraints, the pavement management system (PMS) prioritizes roads most in need of maintenance and rehabilitation action. It recommends a suitable action for each pavement based on the performance and surface condition of each road in the network. The pavement performance and condition are usually quantified and evaluated by different types of roughness-based and stress-based indices. Examples of such indices are the Pavement Serviceability Index (PSI), Pavement Serviceability Ratio (PSR), Mean Panel Rating (MPR), Pavement Condition Rating (PCR), Ride Number (RN), Profile Index (PI), International Roughness Index (IRI), and Pavement Condition Index (PCI). PCI is commonly used in PMS as an indicator of the extent of the distresses on the pavement surface. PCI values range between 0 and 100, where 0 and 100 represent a highly deteriorated pavement and a newly constructed pavement, respectively. The PCI value is a function of distress type, severity, and density (measured as a percentage of the total pavement area). PCI is usually calculated iteratively using the 'Paver' program developed by the US Army Corps of Engineers. The use of soft computing techniques, especially Artificial Neural Networks (ANN), has become increasingly popular in the modeling of engineering problems. ANN techniques have successfully modeled the performance of in-service pavements, due to their efficiency in capturing non-linear relationships and dealing with large amounts of uncertain data. Typical regression models, which require a pre-defined relationship, can be replaced by ANN, which has been found to be an appropriate tool for predicting the different pavement performance indices versus different factors as well. Accordingly, the objective of the presented study is to develop and train an ANN model that predicts PCI values. The model's input consists of the percentage areas of 11 different damage types (alligator cracking, swelling, rutting, block cracking, longitudinal/transverse cracking, edge cracking, shoving, raveling, potholes, patching, and lane drop-off), each at three severity levels (low, medium, high). The developed model was trained using 536,000 samples and tested on 134,000 samples. The samples were collected and prepared by The National Transport Infrastructure Company. The predicted results yielded satisfactory compliance with field measurements. The proposed model predicted PCI values with relatively low standard deviations, suggesting that it could be incorporated into the PMS for PCI determination. It is worth mentioning that the most influential variables for PCI prediction are damages related to alligator cracking, swelling, rutting, and potholes.
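A minimal sketch of the idea (not the trained model) is given below: a feed-forward network mapping 33 distress-density inputs (11 damage types at three severity levels, as a percentage of pavement area) to a PCI value, trained here on synthetic data.

```python
# Sketch: feed-forward ANN regression from distress densities to PCI.
# Data, weights and network size are synthetic/illustrative.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(3)
X = rng.uniform(0, 20, size=(5000, 33))          # distress densities, % of total area
weights = rng.uniform(0.5, 3.0, size=33)         # heavier distresses lower PCI more (assumed)
y = np.clip(100 - X @ weights / 10, 0, 100)      # synthetic PCI in [0, 100]

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
ann = MLPRegressor(hidden_layer_sizes=(64, 32), max_iter=500, random_state=0)
ann.fit(X_tr, y_tr)
print("R^2 on held-out samples:", ann.score(X_te, y_te))
```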

Keywords: artificial neural networks, computer programming, pavement condition index, pavement management, performance prediction

Procedia PDF Downloads 119
9552 MXene-Based Self-Sensing of Damage in Fiber Composites

Authors: Latha Nataraj, Todd Henry, Micheal Wallock, Asha Hall, Christine Hatter, Babak Anasori, Yury Gogotsi

Abstract:

Multifunctional composites with enhanced strength and toughness for superior damage tolerance are essential for advanced aerospace and military applications. Detection of structural changes prior to visible damage may be achieved by incorporating fillers with tunable properties, such as two-dimensional (2D) nanomaterials with high aspect ratios and more surface-active sites. While 2D graphene, with its large surface area, good mechanical properties, and high electrical conductivity, seems ideal as a filler, its single-atomic thickness can lead to bending and rolling during processing, requiring post-processing to bond to polymer matrices. Lately, an emerging family of 2D transition metal carbides and nitrides, MXenes, has attracted much attention since their discovery in 2011. Metallic electronic conductivity and good mechanical properties, even with increased polymer content, coupled with hydrophilicity, make MXenes good candidates as filler materials in polymer composites and exceptional as multifunctional damage indicators in composites. Here, we systematically study MXene (Ti₃C₂) coatings on glass fibers for fiber-reinforced polymer composites for self-sensing, using microscopy and micromechanical testing. Further testing is in progress through the investigation of local variations in optical, acoustic, and thermal properties within the damage sites in response to strain caused by mechanical loading.

Keywords: damage sensing, fiber composites, MXene, self-sensing

Procedia PDF Downloads 106
9551 Learning Dynamic Representations of Nodes in Temporally Variant Graphs

Authors: Sandra Mitrovic, Gaurav Singh

Abstract:

In many industries, including telecommunications, churn prediction has been a topic of active research. A lot of attention has been drawn to devising the most informative features, and this area of research has gained even more focus with the spread of (social) network analytics. Call detail records (CDRs) have been used to construct customer networks and extract potentially useful features. However, to the best of our knowledge, no studies including network features have yet proposed a generic way of representing network information; instead, ad-hoc and dataset-dependent solutions have been suggested. In this work, we build upon a recently presented method (node2vec) to obtain representations for nodes in an observed network. The proposed approach is generic and applicable to any network and domain. Unlike node2vec, which assumes a static network, we consider a dynamic and time-evolving network. To account for this, we propose an approach that constructs the feature representation of each node by generating its node2vec representations at different timestamps, concatenating them, and finally compressing them using an auto-encoder-like method in order to retain reasonably long and informative feature vectors. We test the proposed method on a churn prediction task in the telco domain. To predict churners at timestamp ts+1, we construct training and testing datasets consisting of feature vectors from the time intervals [t1, ts-1] and [t2, ts] respectively, and use traditional supervised classification models like SVM and logistic regression. The observed results show the effectiveness of the proposed approach as compared to ad-hoc feature-selection-based approaches and static node2vec.
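The feature construction can be sketched as follows; this is conceptual only: random matrices stand in for the per-timestamp node2vec embeddings, and PCA is used as a simple stand-in for the auto-encoder-like compression described by the authors.

```python
# Conceptual sketch of "embed per snapshot, concatenate, compress, classify".
# Placeholder data throughout; PCA substitutes for the auto-encoder compression.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(4)
n_nodes, dim, n_timestamps = 1000, 64, 5

# embeddings[t] stands in for node2vec run on the network snapshot at timestamp t.
embeddings = [rng.normal(size=(n_nodes, dim)) for _ in range(n_timestamps)]
concatenated = np.hstack(embeddings)                 # shape: (n_nodes, dim * n_timestamps)

compressed = PCA(n_components=32).fit_transform(concatenated)

churned = rng.integers(0, 2, size=n_nodes)           # placeholder churn labels
clf = LogisticRegression(max_iter=1000).fit(compressed, churned)
print(clf.score(compressed, churned))
```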

Keywords: churn prediction, dynamic networks, node2vec, auto-encoders

Procedia PDF Downloads 300
9550 The Ability of Forecasting the Term Structure of Interest Rates Based on Nelson-Siegel and Svensson Model

Authors: Tea Poklepović, Zdravka Aljinović, Branka Marasović

Abstract:

Due to the importance of the yield curve and its estimation, it is essential to have valid methods for yield curve forecasting in cases where there are scarce issues of securities and/or weak trading on the secondary market. Therefore, in this paper, after the estimation of weekly yield curves on the Croatian financial market from October 2011 to August 2012 using the Nelson-Siegel and Svensson models, the yield curves are forecasted using a vector autoregressive model and neural networks. In general, it can be concluded that both forecasting methods have good prediction abilities, where forecasting of yield curves based on the Nelson-Siegel estimation model gives better results, in the sense of lower mean squared error, than forecasting based on the Svensson model. Also, in this case neural networks provide slightly better results. Finally, it can be concluded that the most appropriate way of yield curve prediction is neural networks using the Nelson-Siegel estimation of yield curves.
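For reference, the Nelson-Siegel parametrization used above can be fitted to an observed curve as in the sketch below (the maturities and yields are illustrative, not the Croatian market data).

```python
# Nelson-Siegel yield curve:
# y(tau) = b0 + b1*(1-exp(-tau/l))/(tau/l) + b2*((1-exp(-tau/l))/(tau/l) - exp(-tau/l))
import numpy as np
from scipy.optimize import curve_fit

def nelson_siegel(tau, b0, b1, b2, lam):
    x = tau / lam
    f = (1 - np.exp(-x)) / x
    return b0 + b1 * f + b2 * (f - np.exp(-x))

maturities = np.array([0.25, 0.5, 1, 2, 3, 5, 7, 10])                   # years (illustrative)
observed_yields = np.array([3.1, 3.3, 3.6, 4.0, 4.3, 4.7, 4.9, 5.0])    # %, illustrative

params, _ = curve_fit(nelson_siegel, maturities, observed_yields, p0=[5.0, -2.0, 1.0, 1.5])
print("beta0, beta1, beta2, lambda:", np.round(params, 3))
```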

Keywords: Nelson-Siegel Model, neural networks, Svensson Model, vector autoregressive model, yield curve

Procedia PDF Downloads 301
9549 Analysis of Tactile Perception of Textiles by Fingertip Skin Model

Authors: Izabela L. Ciesielska-Wrόbel

Abstract:

This paper presents finite element models of the fingertip skin created to simulate the contact of textile objects with the skin, in order to gain a better understanding of the perception of textiles through the skin, the so-called Hand of Textiles (HoT). Many objective and subjective techniques have been developed to analyze HoT; however, none of them provides exact overall information concerning the sensation of textiles through the skin. As the human skin is a complex heterogeneous hyperelastic body composed of many particles, some simplifications had to be made at the stage of building the models. The same applies to the models of woven structures; however, their utilitarian value was maintained. The models reflect only friction between skin and woven textiles, deformation of the skin and fabrics when “touching” textiles, and heat transfer from the surface of the skin in the direction of the textiles.

Keywords: fingertip skin models, finite element models, modelling of textiles, sensation of textiles through the skin

Procedia PDF Downloads 450
9548 Reducing Uncertainty of Monte Carlo Estimated Fatigue Damage in Offshore Wind Turbines Using FORM

Authors: Jan-Tore H. Horn, Jørgen Juncher Jensen

Abstract:

Uncertainties related to fatigue damage estimation of non-linear systems are highly dependent on the tail behaviour and extreme values of the stress range distribution. By using a combination of the First Order Reliability Method (FORM) and Monte Carlo simulations (MCS), the accuracy of the fatigue estimations may be improved for the same computational efforts. The method is applied to a bottom-fixed, monopile-supported large offshore wind turbine, which is a non-linear and dynamically sensitive system. Different curve fitting techniques to the fatigue damage distribution have been used depending on the sea-state dependent response characteristics, and the effect of a bi-linear S-N curve is discussed. Finally, analyses are performed on several environmental conditions to investigate the long-term applicability of this multistep method. Wave loads are calculated using state-of-the-art theory, while wind loads are applied with a simplified model based on rotor thrust coefficients.
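A much-simplified Monte Carlo sketch of fatigue damage accumulation with a bi-linear S-N curve and Miner's rule is shown below; it is not the paper's FORM/MCS hybrid, and all parameters are illustrative assumptions.

```python
# Simplified Monte Carlo sketch: sample stress ranges, apply a bi-linear S-N
# curve and Miner's rule to accumulate fatigue damage. Parameters are illustrative.
import numpy as np

rng = np.random.default_rng(5)
stress_ranges = rng.weibull(1.5, size=100_000) * 40.0     # MPa, synthetic response

def cycles_to_failure(s, s_knee=50.0, a1=1e12, m1=3.0, m2=5.0):
    """Bi-linear S-N curve with a shallower slope below the knee stress."""
    a2 = a1 * s_knee ** (m2 - m1)          # enforce continuity at the knee
    s = np.maximum(s, 1e-6)
    return np.where(s >= s_knee, a1 * s ** (-m1), a2 * s ** (-m2))

damage = np.sum(1.0 / cycles_to_failure(stress_ranges))   # Miner's rule
print(f"Accumulated damage for the simulated cycles: {damage:.3e}")
```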

Keywords: fatigue damage, FORM, monopile, Monte Carlo, simulation, wind turbine

Procedia PDF Downloads 241
9547 Analysis of Atomic Models in High School Physics Textbooks

Authors: Meng-Fei Cheng, Wei Fneg

Abstract:

New Taiwan high school standards emphasize employing scientific models and modeling practices in physics learning. However, to our knowledge, few studies address how scientific models and modeling are approached in current science teaching, and they do not examine the views of scientific models portrayed in the textbooks. To explore the views of scientific models and modeling in textbooks, this study investigated the atomic unit in different textbook versions as an example and provided suggestions for a modeling curriculum. This study adopted a quantitative analysis of qualitative data in the atomic units of four mainstream versions of Taiwan high school physics textbooks. The models were further analyzed using five dimensions of the views of scientific models (nature of models, multiple models, purpose of the models, testing models, and changing models); each dimension had three levels (low, medium, high). Descriptive statistics were employed to compare the frequency of describing the five dimensions of the views of scientific models in the atomic unit to understand the emphasis of the views, and to compare the frequency of use of the eight scientific models to investigate which atomic model was used most often in the textbooks. Descriptive statistics were further utilized to investigate the average levels of the five dimensions of the views of scientific models to examine whether the textbooks' views were close to the scientific view. The average levels of the five dimensions of the eight atomic models were also compared to examine whether the views of the eight atomic models were close to the scientific views. The results revealed the following three major findings from the atomic unit. (1) Among the five dimensions of the views of scientific models, the most portrayed dimension was the 'purpose of models,' and the least portrayed dimension was 'multiple models.' The most diverse view was the 'purpose of models,' and the most sophisticated scientific view was the 'nature of models.' The least sophisticated scientific view was 'multiple models.' (2) Among the eight atomic models, the most mentioned model was the atomic nucleus model, and the least mentioned model was the three states of matter. (3) Among the correlations between the five dimensions, the dimension of 'testing models' was highly related to the dimension of 'changing models.' In short, this study examined the views of scientific models based on the atomic units of physics textbooks to identify the emphasized and disregarded views in the textbooks. The findings suggest how future textbooks and curricula can provide a thorough view of scientific models to enhance students' model-based learning.

Keywords: atomic models, textbooks, science education, scientific model

Procedia PDF Downloads 139
9546 Analysis of Exploitation Damages of the Frame Scaffolding

Authors: A. Robak, M. Pieńko, E. Błazik-Borowa, J. Bęc, I. Szer

Abstract:

The analyses and classifications presented in the article are based on research carried out in 2016 and 2017 on a group of nearly one hundred scaffoldings assembled and used on construction sites in different parts of Poland. During the scaffolding selection process, efforts were made to maintain diversification in terms of parameters such as scaffolding size, investment size, type of investment, location and nature of the conducted works. This resulted in the research being carried out on scaffoldings used for church renovation in a small town or attached to the facades of classic apartment blocks, as well as on scaffoldings used during the construction of skyscrapers or facilities of the largest power plants. This variety allows general conclusions to be formulated about the technical condition of used frame scaffoldings. Exploitation damages of the frame scaffolding elements were divided into three groups. The first group includes damages to the main structural components, which reduce the strength of the scaffolding elements and hence of the whole structure. The qualitative analysis of these damages was made on the basis of numerical models that take into account the geometry of the damage, and on the basis of nonlinear static computational analyses. The second group focuses on exploitation damages, such as a missing pin on the guardrail bolt, which may pose an imminent threat to people using the scaffolding. These are local damages that do not affect the bearing capacity and stability of the whole structure but are very important for safe use. The last group considers damages that reduce only aesthetic values and have no direct impact on bearing capacity and safety of use. Apart from the qualitative analyses, the article presents quantitative analyses showing how frequently a given type of damage occurs.

Keywords: scaffolding, damage, safety, numerical analysis

Procedia PDF Downloads 234
9545 Damage Assessment of Reinforced Concrete Slabs Subjected to Blast Loading

Authors: W. Badla

Abstract:

A numerical investigation has been carried out to examine the behaviour of reinforced concrete slabs subjected to uniform blast loading. The aim of this work is to determine the effects of various parameters on the results. Finite element simulations were performed in the nonlinear dynamic range using an elasto-plastic damage model. The main parameters considered are: the negative phase of the blast loading, the time duration, the equivalent weight of TNT, the distance of the explosive, and the slab dimensions. Numerical modelling has been performed using ABAQUS/Explicit. The results obtained in terms of displacements and damage propagation show that the above parameters considerably influence the nonlinear dynamic behaviour of reinforced concrete slabs under uniform blast loading.

Keywords: blast loading, reinforced concrete slabs, elasto-plastic damage model, negative phase, time duration, equivalent weight of TNT, explosive distance, slab dimensions

Procedia PDF Downloads 504
9544 Crack Growth Life Prediction of a Fighter Aircraft Wing Splice Joint Under Spectrum Loading Using Random Forest Regression and Artificial Neural Networks with Hyperparameter Optimization

Authors: Zafer Yüce, Paşa Yayla, Alev Taşkın

Abstract:

There are numerous analytical methods to estimate the crack growth life of a component. Soft computing methods show an increasing trend in fatigue life prediction. Their ability to build complex relationships and their capability to handle huge amounts of data are motivating researchers and industry professionals to employ them for challenging problems. This study focuses on soft computing methods, especially random forest regressors and artificial neural networks with hyperparameter optimization algorithms such as grid search and randomized search, to estimate the crack growth life of an aircraft wing splice joint under variable amplitude loading. The TensorFlow and Scikit-learn libraries of Python are used to build the machine learning models for this study. The material considered in this work is 7050-T7451 aluminum, which is commonly preferred as a structural element in the aerospace industry, and regarding the crack type, a corner crack is used. A finite element model is built for the joint to calculate fastener loads and stresses on the structure. After the finite element model results are validated with analytical calculations, the findings of the finite element model are fed to the AFGROW software to calculate analytical crack growth lives. Based on the Fighter Aircraft Loading Standard for Fatigue (FALSTAFF), 90 unique fatigue loading spectra are developed for various load levels, and these spectra are then utilized as inputs to the artificial neural network and random forest regression models for predicting crack growth life. Finally, the crack growth life predictions of the machine learning models are compared with the analytical calculations. According to the findings, a good correlation is observed between analytical and predicted crack growth lives.
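As a rough sketch of one of the approaches described (not the study's models or data), the snippet below tunes a random forest regressor with grid search on synthetic spectrum-level features.

```python
# Sketch: random forest regression of crack growth life with grid-search
# hyperparameter optimization. Features and lives are synthetic placeholders.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import GridSearchCV, train_test_split

rng = np.random.default_rng(6)
X = rng.uniform(size=(90, 12))                          # features summarising each loading spectrum (assumed)
y = 1e5 * np.exp(-2.0 * X[:, 0]) * (1 + X[:, 1])        # synthetic crack growth life, cycles

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
search = GridSearchCV(
    RandomForestRegressor(random_state=0),
    param_grid={"n_estimators": [100, 300], "max_depth": [None, 5, 10]},
    cv=5,
)
search.fit(X_tr, y_tr)
print(search.best_params_, "R^2 =", search.best_estimator_.score(X_te, y_te))
```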

Keywords: aircraft, fatigue, joint, life, optimization, prediction

Procedia PDF Downloads 147
9543 Development of Fuzzy Logic and Neuro-Fuzzy Surface Roughness Prediction Systems Coupled with Cutting Current in Milling Operation

Authors: Joseph C. Chen, Venkata Mohan Kudapa

Abstract:

The development of two real-time surface roughness (Ra) prediction systems for milling operations was attempted. The systems used not only cutting parameters, such as feed rate and spindle speed, but also the cutting current measured by a clamp-type energy sensor. Two different approaches were developed. First, a fuzzy inference system (FIS), in which the fuzzy logic rules are generated by experts in the milling process, was used to conduct prediction modeling using current cutting data. Second, a neuro-fuzzy system (ANFIS) was explored. Neuro-fuzzy systems are adaptive techniques in which data are collected on the network and processed, and rules are generated by the system. The inference system then uses these rules to predict Ra as the output. Experimental results showed that the parameters of spindle speed, feed rate, depth of cut, and input current variation could predict Ra. These two systems enable the prediction of Ra during the milling operation with average accuracies of 91.83% and 94.48% for the FIS and ANFIS systems, respectively. Statistically, the ANFIS system provided better prediction accuracy than the FIS system.
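A compact zero-order Sugeno-style illustration of the fuzzy-inference idea is sketched below; it is not the paper's FIS or ANFIS, and the membership functions, rules, and numeric values are invented for illustration.

```python
# Sketch of a zero-order Sugeno fuzzy inference: two inputs (feed rate, cutting
# current) with triangular memberships and expert-style rules giving an Ra estimate.
import numpy as np

def tri(x, a, b, c):
    """Triangular membership function peaking at b."""
    return np.maximum(np.minimum((x - a) / (b - a), (c - x) / (c - b)), 0.0)

def predict_ra(feed, current):
    # Membership degrees: low/high feed rate, low/high cutting current (assumed ranges).
    feed_low, feed_high = tri(feed, 0.0, 0.1, 0.25), tri(feed, 0.1, 0.25, 0.4)
    cur_low, cur_high = tri(current, 2.0, 4.0, 7.0), tri(current, 4.0, 7.0, 10.0)

    # Rule firing strengths (min operator) and rule consequents (Ra in micrometres, assumed).
    rules = [
        (min(feed_low, cur_low), 0.8),
        (min(feed_low, cur_high), 1.4),
        (min(feed_high, cur_low), 1.8),
        (min(feed_high, cur_high), 2.6),
    ]
    w = np.array([r[0] for r in rules])
    z = np.array([r[1] for r in rules])
    return float(np.sum(w * z) / np.sum(w)) if w.sum() > 0 else float("nan")

print(predict_ra(feed=0.18, current=5.5))
```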

Keywords: surface roughness, input current, fuzzy logic, neuro-fuzzy, milling operations

Procedia PDF Downloads 121
9542 Neural Network Based Approach of Software Maintenance Prediction for Laboratory Information System

Authors: Vuk M. Popovic, Dunja D. Popovic

Abstract:

The software maintenance phase starts once a software project has been developed and delivered; after that, any modification to it corresponds to maintenance. Software maintenance involves modifications to keep a software project usable in a changed or changing environment, to correct discovered faults, and to improve performance or maintainability. Software maintenance and the management of software maintenance are recognized as two of the most important and most expensive processes in the life of a software product. This research bases the prediction of maintenance on risk and time evaluation, using them as data sets for working with neural networks. The aim of this paper is to provide support to project maintenance managers. They will be able to pass the issues planned for the next software service patch to the experts for risk and working time evaluation, and afterwards to feed all data to neural networks in order to obtain a software maintenance prediction. This process will lead to a more accurate prediction of the working hours needed for the software service patch, which will eventually lead to better planning of the budget for software maintenance projects.

Keywords: laboratory information system, maintenance engineering, neural networks, software maintenance, software maintenance costs

Procedia PDF Downloads 331
9541 A Computational Framework for Load Mediated Patellar Ligaments Damage at the Tropocollagen Level

Authors: Fadi Al Khatib, Raouf Mbarki, Malek Adouni

Abstract:

In various sports and recreational activities, the patellofemoral joint undergoes large forces and moments while accommodating significant knee joint movement. In doing so, this joint is commonly the source of anterior knee pain related to instability in normal patellar tracking and excessive pressure syndrome. One well-observed explanation of the instability of normal patellar tracking is damage to the patellofemoral ligaments and patellar tendon. Improved knowledge of the damage mechanism mediating ligament and tendon injuries can be a great help not only in rehabilitation and prevention procedures but also in the design of better reconstruction systems in the management of knee joint disorders. This damage mechanism, specifically due to excessive mechanical loading, has been linked to the micro level of the fibred structure, precisely to the tropocollagen molecules and their connection density. We argue that defining a clear framework from the bottom (micro level) up to the macro level in the hierarchies of the soft tissue may elucidate the essential underpinnings of the state of ligament damage. To do so, in this study a multiscale fibril-reinforced hyper-elastoplastic finite element model that accounts for the synergy between molecular and continuum syntheses was developed to determine the short-term stress/strain response of the patellofemoral ligaments and tendon. The plasticity of the proposed model is associated only with the uniaxial deformation of the collagen fibril. The yield strength of the fibril is a function of the cross-link density between tropocollagen molecules, defined here by a density function. This function was obtained through a coarse-graining procedure linking nanoscale collagen features and the tissue-level material properties using molecular dynamics simulations. The hierarchies of the soft tissues were implemented using the rule of mixtures. Thereafter, the model was calibrated using a statistical calibration procedure. The model was then implemented into a real structure of patellofemoral ligaments and patellar tendon (OpenKnee) and simulated under realistic loading conditions. With the calibrated material parameters, the calculated axial stress agrees well with the experimental measurements, with a coefficient of determination (R²) equal to 0.91 and 0.92 for the patellofemoral ligaments and the patellar tendon, respectively. The 'best' prediction of the yield strength and strain, as compared with the reported experimental data, was obtained when the cross-link density between the tropocollagen molecules of the fibril equalled 5.5 ± 0.5 (patellofemoral ligaments) and 12 (patellar tendon). Damage initiation in the patellofemoral ligaments was located at the femoral insertions, while the damage in the patellar tendon occurred in the middle of the structure. These predicted findings showed a meaningful correlation between the cross-link density of the tropocollagen molecules and the stiffness of the connective tissues of the extensor mechanism. Damage initiation and propagation were also documented with this model and were in satisfactory agreement with earlier observations. To the best of our knowledge, this is the first attempt to model ligaments from the bottom up, with predictions depending on the tropocollagen cross-link density. This approach appears more meaningful towards a realistic simulation of a damage process or repair attempt compared with certain published studies.

Keywords: tropocollagen, multiscale model, fibrils, knee ligaments

Procedia PDF Downloads 113
9540 Recurrent Neural Networks for Complex Survival Models

Authors: Pius Marthin, Nihal Ata Tutkun

Abstract:

Survival analysis has become one of the paramount procedures in the modeling of time-to-event data. When we encounter complex survival problems, the traditional approach remains limited in accounting for the complex correlational structure between the covariates and the outcome due to the strong assumptions that limit the inference and prediction ability of the resulting models. Several studies exist on the deep learning approach to survival modeling; however, its application to complex survival problems still needs to be improved. In addition, the existing models do not fully address the complexity of the data structure and are subject to noise and redundant information. In this study, we design a deep learning technique (CmpXRnnSurv_AE) that overcomes the limitations imposed by traditional approaches and addresses the above issues to jointly predict the risk-specific probabilities and the survival function for recurrent events with competing risks. We introduce a component termed Risks Information Weights (RIW) as an attention mechanism to compute the weighted cumulative incidence function (WCIF), and an external auto-encoder (ExternalAE) as a feature selector to extract complex characteristics among the set of covariates responsible for the cause-specific events. We train our model using synthetic and real data sets and employ the appropriate metrics for complex survival models for evaluation. As benchmarks, we selected both traditional and machine learning models, and our model demonstrates better performance across all datasets.

Keywords: cumulative incidence function (CIF), risk information weight (RIW), autoencoders (AE), survival analysis, recurrent events with competing risks, recurrent neural networks (RNN), long short-term memory (LSTM), self-attention, multilayers perceptrons (MLPs)

Procedia PDF Downloads 69
9539 Optimized Preprocessing for Accurate and Efficient Bioassay Prediction with Machine Learning Algorithms

Authors: Jeff Clarine, Chang-Shyh Peng, Daisy Sang

Abstract:

Bioassay is the measurement of the potency of a chemical substance by its effect on living animal or plant tissue. Bioassay data and chemical structures from pharmacokinetic and drug metabolism screening are mined from and housed in multiple databases. Bioassay predictions are calculated accordingly to determine further advancement. This paper proposes a four-step preprocessing of datasets for improving bioassay predictions. The first step is instance selection, in which the dataset is categorized into training, testing, and validation sets. The second step is discretization, which partitions the data in consideration of the trade-off between accuracy and precision. The third step is normalization, where data are normalized between 0 and 1 for subsequent machine learning processing. The fourth step is feature selection, where key chemical properties and attributes are generated. The streamlined results are then analyzed for the prediction of effectiveness by various machine learning algorithms implemented in tools including Pipeline Pilot, R, Weka, and Excel. Experiments and evaluations reveal the effectiveness of various combinations of preprocessing steps and machine learning algorithms in more consistent and accurate prediction.
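The four preprocessing steps can be sketched on placeholder data as follows; this is not an actual bioassay dataset, and the specific transformers used here are one possible choice, not necessarily those used in the paper.

```python
# Sketch of the four-step preprocessing: instance selection, discretization,
# normalization to [0, 1], and feature selection. Data are placeholders.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import KBinsDiscretizer, MinMaxScaler
from sklearn.feature_selection import SelectKBest, f_classif

rng = np.random.default_rng(7)
X = rng.normal(size=(1000, 50))                 # placeholder chemical descriptors
y = rng.integers(0, 2, size=1000)               # placeholder assay outcome (active/inactive)

# 1) Instance selection: split into training, validation and testing sets.
X_train, X_rest, y_train, y_rest = train_test_split(X, y, test_size=0.4, random_state=0)
X_val, X_test, y_val, y_test = train_test_split(X_rest, y_rest, test_size=0.5, random_state=0)

# 2) Discretization: bin continuous descriptors (accuracy vs. precision trade-off).
disc = KBinsDiscretizer(n_bins=5, encode="ordinal", strategy="quantile").fit(X_train)

# 3) Normalization: rescale the discretized values to the [0, 1] range.
scaler = MinMaxScaler().fit(disc.transform(X_train))

# 4) Feature selection: keep the most informative descriptors.
selector = SelectKBest(f_classif, k=10).fit(scaler.transform(disc.transform(X_train)), y_train)
X_train_ready = selector.transform(scaler.transform(disc.transform(X_train)))
print(X_train_ready.shape)
```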

Keywords: bioassay, machine learning, preprocessing, virtual screen

Procedia PDF Downloads 256
9538 Assessing Effects of an Intervention on Bottle-Weaning and Reducing Daily Milk Intake from Bottles in Toddlers Using Two-Part Random Effects Models

Authors: Yungtai Lo

Abstract:

Two-part random effects models have been used to fit semi-continuous longitudinal data where the response variable has a point mass at 0 and a continuous right-skewed distribution for positive values. We review methods proposed in the literature for analyzing data with excess zeros. A two-part logit-log-normal random effects model, a two-part logit-truncated normal random effects model, a two-part logit-gamma random effects model, and a two-part logit-skew normal random effects model were used to examine effects of a bottle-weaning intervention on reducing bottle use and daily milk intake from bottles in toddlers aged 11 to 13 months in a randomized controlled trial. We show in all four two-part models that the intervention promoted bottle-weaning and reduced daily milk intake from bottles in toddlers drinking from a bottle. We also show that there are no differences in model fit using either the logit link function or the probit link function for modeling the probability of bottle-weaning in all four models. Furthermore, prediction accuracy of the logit or probit link function is not sensitive to the distribution assumption on daily milk intake from bottles in toddlers not off bottles.
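A much-simplified sketch of the two-part idea on synthetic data is shown below; it ignores the random effects and uses a logit model for the probability of any bottle milk intake plus a log-normal (OLS on log intake) model for the positive amounts.

```python
# Simplified two-part (logit + log-normal) sketch, without random effects.
# All data and effect sizes are synthetic placeholders.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(8)
n = 300
intervention = rng.integers(0, 2, size=n)                  # 0 = control, 1 = bottle-weaning arm
p_any = 1.0 / (1.0 + np.exp(-(1.0 - 1.2 * intervention)))  # intervention lowers P(any intake)
any_intake = rng.binomial(1, p_any)
intake = any_intake * np.exp(5.0 - 0.5 * intervention + rng.normal(0, 0.4, n))  # ml/day

X = sm.add_constant(intervention)

# Part 1: logit model for the zero vs. positive outcome.
part1 = sm.Logit(any_intake, X).fit(disp=0)
# Part 2: log-normal model (OLS on log intake) for the positive amounts only.
pos = intake > 0
part2 = sm.OLS(np.log(intake[pos]), X[pos]).fit()
print(part1.params, part2.params)
```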

Keywords: two-part model, semi-continuous variable, truncated normal, gamma regression, skew normal, Pearson residual, receiver operating characteristic curve

Procedia PDF Downloads 331
9537 A 3-Dimensional Memory-Based Model for Planning Working Postures Reaching Specific Area with Postural Constraints

Authors: Minho Lee, Donghyun Back, Jaemoon Jung, Woojin Park

Abstract:

Current 3-dimensional (3D) posture prediction models commonly provide only a few optimal postures to achieve a specific objective. The problem with such models is that they are incapable of rapidly providing several optimal posture candidates according to various situations. In order to solve this problem, this paper presents a 3D memory-based posture planning (3D MBPP) model, a new digital human model that can analyze the feasible postures in 3D space for reaching tasks that have postural constraints and a specific reaching space. The 3D MBPP model can be applied to types of work that are done in constrained working postures and have a specific reaching space. Examples of such work include driving an excavator, driving automobiles, painting buildings, working at an office, pitching/batting, and boxing. For these types of work, a limited amount of space is required to store all of the feasible postures, as the hand-reach boundary can be determined prior to performing the task. This prevents computation time from increasing exponentially, which has been one of the major drawbacks of memory-based posture planning models in 3D space. This paper validates the utility of the 3D MBPP model using a practical example of analyzing baseball batting posture. In baseball, batters swing with both feet fixed to the ground. This motion is appropriate for use with the 3D MBPP model since the player must try to hit the ball when it is located inside the strike zone (a limited area) in a constrained posture. The results of the analysis showed that the stored and optimal postures vary depending on the ball's flying path, the hitting location, the batter's body size, and the batting objective. These results can be used to establish the optimal postural strategies for achieving the batting objective and performing effective hitting. The 3D MBPP model can also be applied to various domains to determine optimal postural strategies and improve worker comfort.

Keywords: baseball, memory-based, posture prediction, reaching area, 3D digital human models

Procedia PDF Downloads 199
9536 The 6Rs of Radiobiology in Photodynamic Therapy: Review

Authors: Kave Moloudi, Heidi Abrahamse, Blassan P. George

Abstract:

Radiotherapy (RT) and photodynamic therapy (PDT) are both forms of cancer treatment that aim to kill cancer cells while minimizing damage to healthy tissue. The similarity between RT and PDT lies in their mechanism of action. Both treatments use energy to damage cancer cells: RT uses high-energy radiation to damage the DNA of cancer cells, while PDT uses light energy to activate a photosensitizing agent, which produces reactive oxygen species (ROS) that damage the cancer cells. Both treatments require careful planning and monitoring to ensure the correct dose is delivered to the tumor while minimizing damage to surrounding healthy tissue. They are also often used in combination with other treatments, such as surgery or chemotherapy, to improve overall outcomes. However, there are also significant differences between RT and PDT. For example, RT is a non-invasive treatment that can be delivered externally or internally, while PDT requires the injection of a photosensitizing agent and the use of a specialized light source to activate it. Additionally, the side effects and risks associated with each treatment can vary. In this review, we focus on generalizing the 6Rs of radiobiology to PDT, which can open a window for the clinical application of radio-photodynamic therapy with minimum side effects. Furthermore, this review can open new insights into the design of new radio-photosensitizer agents for radio-photodynamic therapy.

Keywords: radiobiology, photodynamic therapy, radiotherapy, 6Rs in radiobiology, ROS, DNA damages, cellular and molecular mechanism, clinical application

Procedia PDF Downloads 71
9535 Discussing Embedded versus Central Machine Learning in Wireless Sensor Networks

Authors: Anne-Lena Kampen, Øivind Kure

Abstract:

Machine learning (ML) can be implemented in Wireless Sensor Networks (WSNs) as a central solution or as a distributed solution where the ML is embedded in the nodes. Embedding improves privacy and may reduce prediction delay; in addition, the number of transmissions is reduced. However, quality factors such as prediction accuracy, fault detection efficiency and coordinated control of the overall system suffer. Here, we discuss and highlight the trade-offs that should be considered when choosing between embedded and centralized ML, especially for multihop networks. In addition, we present estimations that demonstrate the energy trade-offs between embedded and centralized ML. Although the total network energy consumption is lower with central prediction, it makes the network more prone to partitioning due to the high forwarding load on the one-hop nodes. Moreover, the continuous improvements in the number of operations per joule for embedded devices will move the energy balance toward embedded prediction.
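The kind of back-of-the-envelope energy estimation discussed above can be sketched as follows; every number here is an assumption chosen for illustration, not one of the paper's estimates.

```python
# Sketch: compare per-node energy for central ML (forward raw samples over
# multiple hops) versus embedded ML (infer locally, send only predictions).
E_TX_PER_BYTE = 2e-6      # J per byte transmitted (assumed radio cost)
E_RX_PER_BYTE = 1e-6      # J per byte received/forwarded (assumed)
E_INFERENCE = 5e-5        # J per embedded model inference (assumed MCU cost)

samples_per_hour = 60
sample_bytes, prediction_bytes = 16, 2
hops = 4

# Central ML: every raw sample is forwarded over all hops to the sink.
central = samples_per_hour * sample_bytes * hops * (E_TX_PER_BYTE + E_RX_PER_BYTE)
# Embedded ML: infer locally, transmit only the smaller prediction messages.
embedded = samples_per_hour * (E_INFERENCE + prediction_bytes * hops * (E_TX_PER_BYTE + E_RX_PER_BYTE))

print(f"central: {central * 1e3:.2f} mJ/h, embedded: {embedded * 1e3:.2f} mJ/h")
```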

Keywords: central machine learning, embedded machine learning, energy consumption, local machine learning, wireless sensor networks, WSN

Procedia PDF Downloads 127