Search results for: predictions
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 599

239 Determination of Tide Height Using Global Navigation Satellite Systems (GNSS)

Authors: Faisal Alsaaq

Abstract:

Hydrographic surveys have traditionally relied on the availability of tide information for the reduction of sounding observations to a common datum. In most cases, tide information is obtained from tide gauge observations and/or tide predictions over space and time using local, regional or global tide models. While the latter often provide a rather crude approximation, the former relies on tide gauge stations that are spatially restricted and often have a sparse and limited distribution. A more recent method that is increasingly being used is Global Navigation Satellite System (GNSS) positioning, which can be utilised to monitor height variations of a vessel or buoy, thus providing information on sea level variations during the time of a hydrographic survey. However, GNSS heights obtained under the dynamic environment of a survey vessel are affected by “non-tidal” processes such as wave activity and the attitude of the vessel (roll, pitch, heave and dynamic draft). This research seeks to examine techniques that separate the tide signal from other non-tidal signals that may be contained in GNSS heights. This requires an investigation of the processes involved and their temporal, spectral and stochastic properties in order to apply suitable techniques for the recovery of tide information. In addition, different post-mission and near real-time GNSS positioning techniques will be investigated, with a focus on height estimation at sea. Furthermore, the study will investigate the possibility of transferring the chart datums established at the locations of tide gauges.
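The abstract does not specify the recovery technique; as a minimal, purely illustrative sketch of the separation principle, a moving-average low-pass filter can suppress fast wave-induced motion while preserving the much slower tidal signal (all signals below are synthetic, and the 1 Hz sampling rate and window length are assumptions):

```python
import math

def moving_average(signal, window):
    """Low-pass filter: replace each sample by the mean of a centered window."""
    half = window // 2
    out = []
    for i in range(len(signal)):
        lo, hi = max(0, i - half), min(len(signal), i + half + 1)
        out.append(sum(signal[lo:hi]) / (hi - lo))
    return out

# Synthetic GNSS height series: slow tide (12 h period) plus fast
# wave-induced motion (8 s period), sampled at 1 Hz for one hour.
n = 3600
tide = [0.5 * math.sin(2 * math.pi * t / 43200.0) for t in range(n)]
waves = [0.2 * math.sin(2 * math.pi * t / 8.0) for t in range(n)]
raw = [a + b for a, b in zip(tide, waves)]

# A window much longer than the wave period but much shorter than the
# tidal period suppresses the waves while leaving the tide intact.
recovered = moving_average(raw, window=60)

rms_raw = math.sqrt(sum((r - t) ** 2 for r, t in zip(raw, tide)) / n)
rms_rec = math.sqrt(sum((r - t) ** 2 for r, t in zip(recovered, tide)) / n)
```

In practice the separation would need to account for the stochastic, broadband character of real wave and attitude signals, which is what motivates the spectral analysis described above.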

Keywords: hydrography, GNSS, datum, tide gauge

Procedia PDF Downloads 244
238 A Study on the Failure Modes of Steel Moment Frame in Post-Earthquake Fire Using Coupled Mechanical-Thermal Analysis

Authors: Ehsan Asgari, Meisam Afazeli, Nezhla Attarchian

Abstract:

Post-earthquake fire is considered a major threat in seismic areas, since fire can break out in structures after an earthquake. In this research, the effect of post-earthquake fire on steel moment frames with and without fireproofing coating is investigated using the finite element method. For verification, the finite element results are compared with the results of an experimental study carried out by previous researchers, and good agreement is observed. After ensuring the accuracy of the predictions of the finite element models, the effect of post-earthquake fire on the frames is investigated, taking into account parameters including the presence or absence of fire protection, frame design assumptions, earthquake type and different fire scenarios. The effects of ordinary fire and post-earthquake fire on the frames are also compared. The plastic hinges induced by the earthquake are located at the beam-to-column connections and in the panel zone; these areas should be considered carefully when providing fireproofing coatings. The results of the study show that the occurrence of fire beside corner columns is the most damaging scenario, resulting in progressive collapse of the structure. It is also concluded that the behavior of a structure in fire after a strong ground motion is significantly different from that in an ordinary fire.

Keywords: post earthquake fire, moment frame, finite element simulation, coupled temperature-displacement analysis, fire scenario

Procedia PDF Downloads 129
237 Performance Gap and near Zero Energy Buildings Compliance of Monitored Passivhaus in Northern Ireland, the Republic of Ireland and Italy

Authors: S. Colclough, V. Costanzo, K. Fabbri, S. Piraccini, P. Griffiths

Abstract:

The near Zero Energy Building (nZEB) standard is required for all buildings from 2020. The Passive House (PH) standard is a well-established low-energy building standard, designed over 25 years ago, and could potentially be used to achieve the nZEB standard in combination with renewables. By comparing measured performance with design predictions, this paper considers whether there is a performance gap for a number of monitored properties and assesses whether the nZEB standard can be achieved by following the well-established PH scheme. The analysis is based on monitoring results from real buildings located in Northern Ireland, the Republic of Ireland and Italy, with particular focus on indoor air quality, including the assumed and measured indoor temperatures and heating periods for both standards as recorded during a full annual cycle. An analysis is also carried out on the energy performance certificates of each of the dwellings to determine whether they meet the near Zero Energy Buildings primary energy consumption targets set in the respective jurisdictions. Each of the dwellings is certified as complying with the Passive House standard, and accordingly has very good insulation levels, heat recovery and ventilation systems of greater than 75% efficiency, and an airtightness of less than 0.6 air changes per hour at 50 Pa. It is found that indoor temperature and relative humidity were within the comfort boundaries set at the design stage, while carbon dioxide concentrations were sometimes higher than the values suggested by the EN 15251 standard for comfort class I, especially in bedrooms.

Keywords: monitoring campaign, nZEB (near zero energy buildings), Passivhaus, performance gap

Procedia PDF Downloads 129
236 A Hierarchical Method for Multi-Class Probabilistic Classification Vector Machines

Authors: P. Byrnes, F. A. DiazDelaO

Abstract:

The Support Vector Machine (SVM) has become widely recognised as one of the leading algorithms in machine learning for both regression and binary classification. It expresses predictions in terms of a linear combination of kernel functions, referred to as support vectors. Despite its popularity amongst practitioners, SVM has some limitations, the most significant being that it generates point predictions as opposed to predictive distributions. Stemming from this issue, a probabilistic model, namely the Probabilistic Classification Vector Machine (PCVM), has been proposed which respects the original functional form of SVM whilst also providing a predictive distribution. As physical system designs become more complex, an increasing number of classification tasks in industrial applications involve more than two classes. Consequently, this research proposes a framework which allows for the extension of PCVM to a multi-class setting. Additionally, the original PCVM framework relies on type II maximum likelihood to provide estimates for both the kernel hyperparameters and the model evidence. In a high-dimensional multi-class setting, however, this approach has been shown to be ineffective because it scales poorly as the number of classes increases. Accordingly, we propose the application of Markov Chain Monte Carlo (MCMC) methods to provide a posterior distribution over both parameters and hyperparameters. The proposed framework will be validated against current multi-class classifiers through synthetic and real-life implementations.
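As a hedged illustration of the MCMC step (not the authors' implementation), a random-walk Metropolis sampler can draw from a posterior known only up to a normalizing constant; the Gaussian log-posterior below is a toy stand-in for the PCVM hyperparameter posterior:

```python
import math
import random

def metropolis(log_post, x0, n_samples, step=0.5, seed=0):
    """Random-walk Metropolis: samples a density known up to a constant."""
    rng = random.Random(seed)
    x, lp = x0, log_post(x0)
    samples = []
    for _ in range(n_samples):
        prop = x + rng.gauss(0.0, step)
        lp_prop = log_post(prop)
        # Accept with probability min(1, posterior ratio).
        if rng.random() < math.exp(min(0.0, lp_prop - lp)):
            x, lp = prop, lp_prop
        samples.append(x)
    return samples

# Toy unnormalized log-posterior over a (log) kernel hyperparameter,
# centred at 1.0 -- hypothetical, for illustration only.
log_posterior = lambda x: -0.5 * (x - 1.0) ** 2

draws = metropolis(log_posterior, x0=0.0, n_samples=5000)
posterior_mean = sum(draws[1000:]) / len(draws[1000:])  # discard burn-in
```

Unlike a type II maximum-likelihood point estimate, the chain yields a full posterior over the hyperparameter, from which means and credible intervals follow directly.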

Keywords: probabilistic classification vector machines, multi-class classification, MCMC, support vector machines

Procedia PDF Downloads 205
235 Efficient Credit Card Fraud Detection Based on Multiple ML Algorithms

Authors: Neha Ahirwar

Abstract:

In the contemporary digital era, the rise of credit card fraud poses a significant threat to both financial institutions and consumers. As fraudulent activities become more sophisticated, there is an escalating demand for robust and effective fraud detection mechanisms. Advanced machine learning algorithms have become crucial tools in addressing this challenge. This paper conducts a thorough examination of the design and evaluation of a credit card fraud detection system, utilizing four prominent machine learning algorithms: random forest, logistic regression, decision tree, and XGBoost. The surge in digital transactions has opened avenues for fraudsters to exploit vulnerabilities within payment systems. Consequently, there is an urgent need for proactive and adaptable fraud detection systems. This study addresses this imperative by exploring the efficacy of machine learning algorithms in identifying fraudulent credit card transactions. The selection of random forest, logistic regression, decision tree, and XGBoost for scrutiny in this study is based on their documented effectiveness in diverse domains, particularly in credit card fraud detection. These algorithms are renowned for their capability to model intricate patterns and provide accurate predictions. Each algorithm is implemented and evaluated for its performance in a controlled environment, utilizing a diverse dataset comprising both genuine and fraudulent credit card transactions.
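The abstract does not state its evaluation metrics; on data this imbalanced, precision and recall are far more informative than raw accuracy. A minimal sketch of such an evaluation step (the toy transaction counts are hypothetical):

```python
def precision_recall_f1(y_true, y_pred):
    """Metrics suited to imbalanced fraud data, where accuracy is misleading."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return precision, recall, f1

# 1000 transactions, 1% fraud: a classifier that flags 12 transactions,
# catching 8 of the 10 genuine frauds.
y_true = [1] * 10 + [0] * 990
y_pred = [1] * 8 + [0] * 2 + [1] * 4 + [0] * 986
precision, recall, f1 = precision_recall_f1(y_true, y_pred)
```

Note that this classifier scores 99.4% accuracy while missing a fifth of all frauds, which is why per-class metrics are the standard choice here.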

Keywords: efficient credit card fraud detection, random forest, logistic regression, XGBoost, decision tree

Procedia PDF Downloads 30
234 Numerical Analysis of the Aging Effects of RC Shear Walls Repaired by CFRP Sheets: Application of CEB-FIP MC 90 Model

Authors: Yeghnem Redha, Guerroudj Hicham Zakaria, Hanifi Hachemi Amar Lemiya, Meftah Sid Ahmed, Tounsi Abdelouahed, Adda Bedia El Abbas

Abstract:

Creep deformation of concrete is often responsible for excessive deflection at service loads, which can compromise the performance of elements within a structure. Although laboratory tests may be undertaken to determine the deformation properties of concrete, these are time-consuming, often expensive and generally not a practical option. Therefore, relatively simple empirical design-code models are relied upon to predict the creep strain. This paper reviews the accuracy of creep and shrinkage predictions for reinforced concrete (RC) shear wall structures strengthened with carbon fibre reinforced polymer (CFRP) sheets, characterized by a widthwise varying fibre volume fraction. The review is carried out using the CEB-FIP MC90 model. The time-dependent behavior is investigated to analyze the static response of the walls. In the numerical formulation, the adherends and the adhesives are all modelled as shear wall elements, using the mixed finite element method. Several tests are used to demonstrate the accuracy and effectiveness of the proposed method. Numerical results from the present analysis are presented to illustrate the significance of the time-dependency of the lateral displacements.

Keywords: RC shear walls strengthened, CFRP sheets, creep and shrinkage, CEB-FIP MC90 model, finite element method, static behavior

Procedia PDF Downloads 277
233 Rheology and Structural Arrest of Dense Dairy Suspensions: A Soft Matter Approach

Authors: Marjan Javanmard

Abstract:

The rheological properties of dairy products critically depend on the underlying organisation of proteins at multiple length scales. When heated and acidified, milk proteins form a particle gel that is a viscoelastic, solvent-rich, ‘soft’ material. In this work, recent developments on the rheology of soft particle suspensions were used to interpret and potentially define the properties of dairy gel structures. It is found that at volume fractions below random close packing (RCP), the Maron-Pierce-Quemada (MPQ) model accurately predicts the viscosity of the dairy gel suspensions without fitting parameters; the MPQ model has previously been shown to provide reasonable predictions of the viscosity of hard sphere suspensions from the volume fraction, solvent viscosity and RCP. This surprising finding demonstrates that up to RCP, the dairy gel system behaves as a hard sphere suspension and that the structural aggregates behave as discrete particulates, akin to what is observed for microgel suspensions. At effective phase volumes well above RCP, the system is a soft solid. In this region, it is found that the storage modulus of the sheared AMG scales with the storage modulus of the set gel, and the storage modulus in this regime is reasonably well described as a function of effective phase volume by the Evans and Lips model. The findings of this work have the potential to aid the rational design and control of dairy food structure-properties.
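The MPQ model referred to above has a simple closed form, η/η_solvent = (1 − φ/φ_RCP)⁻², so the parameter-free prediction can be sketched directly (φ_RCP ≈ 0.64, the usual monodisperse hard-sphere value, is an assumption here, not a figure from the abstract):

```python
def mpq_relative_viscosity(phi, phi_rcp=0.64):
    """Maron-Pierce-Quemada: eta / eta_solvent = (1 - phi/phi_rcp) ** -2."""
    if phi >= phi_rcp:
        raise ValueError("MPQ viscosity diverges at random close packing")
    return (1.0 - phi / phi_rcp) ** -2

# Viscosity rises steeply as the volume fraction approaches RCP.
eta_dilute = mpq_relative_viscosity(0.10)   # mildly above solvent viscosity
eta_dense = mpq_relative_viscosity(0.60)    # near-divergent
```

The divergence at φ = φ_RCP is exactly the structural arrest discussed above: beyond it the suspension is a soft solid and a modulus model (Evans and Lips) takes over from a viscosity model.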

Keywords: dairy suspensions, rheology-structure, Maron-Pierce-Quemada Model, Evans and Lips Model

Procedia PDF Downloads 192
232 An Inverse Docking Approach for Identifying New Potential Anticancer Targets

Authors: Soujanya Pasumarthi

Abstract:

Inverse docking is a relatively new technique that has been used to identify potential receptor targets of small molecules. Our docking software package MDock is well suited for such an application, as it is computationally efficient while showing adequate results in binding affinity predictions and enrichment tests. As a validation study, we present the first-stage results of an inverse-docking study which seeks to identify potential direct targets of PRIMA-1. PRIMA-1 is well known for its ability to restore mutant p53's tumor suppressor function, leading to apoptosis in several types of cancer cells. For this reason, we believe that potential direct targets of PRIMA-1 identified in silico should be experimentally screened for their ability to inhibit cancer cell growth. The highest-ranked human protein in our PRIMA-1 docking results is oxidosqualene cyclase (OSC), which is part of the cholesterol synthetic pathway. The results of two follow-up experiments which treat OSC as a possible anti-cancer target are promising. We show that both PRIMA-1 and Ro 48-8071, a known potent OSC inhibitor, significantly reduce the viability of BT-474 breast cancer cells relative to normal mammary cells. In addition, like PRIMA-1, we find that Ro 48-8071 results in increased binding of mutant p53 to DNA in BT-474 cells (which highly express p53). For the first time, Ro 48-8071 is shown to be a potent agent in killing human breast cancer cells. The potential of OSC as a new target for developing anticancer therapies is worth further investigation.

Keywords: inverse docking, in silico screening, protein-ligand interactions, molecular docking

Procedia PDF Downloads 414
231 Modeling of Age Hardening Process Using Adaptive Neuro-Fuzzy Inference System: Results from Aluminum Alloy A356/Cow Horn Particulate Composite

Authors: Chidozie C. Nwobi-Okoye, Basil Q. Ochieze, Stanley Okiy

Abstract:

This research reports on the modeling of the age hardening process using an adaptive neuro-fuzzy inference system (ANFIS). The age hardening output (hardness) was predicted using ANFIS, with ageing time, temperature and percentage composition of cow horn particles (CHp%) as input parameters. The results show that the correlation coefficient (R) of the predicted versus measured hardness values was 0.9985. Subsequently, values outside the experimental data points were predicted. When the temperature was kept constant and the other input parameters were varied, the average relative error of the predicted values was 0.0931%. When the temperature was varied and the other input parameters kept constant, the average relative error of the hardness predictions was 80%. The results show that ANFIS trained on coarse experimental data points is not very effective in predicting process outputs in the age hardening operation of the A356 alloy/CHp particulate composite. The fine experimental data required by ANFIS makes it more expensive for the modeling and optimization of age hardening operations of the A356 alloy/CHp particulate composite.
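The average relative error quoted above is presumably the mean absolute relative deviation of predictions from measurements, expressed in percent; a minimal sketch with hypothetical hardness values (not the paper's data):

```python
def average_relative_error(measured, predicted):
    """Mean absolute relative error, in percent, of predictions vs. measurements."""
    errors = [abs(p - m) / abs(m) for m, p in zip(measured, predicted)]
    return 100.0 * sum(errors) / len(errors)

# Hypothetical hardness values, for illustration only.
measured = [75.0, 82.0, 90.0, 95.0]
predicted = [74.8, 82.3, 89.5, 95.4]
err = average_relative_error(measured, predicted)
```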

Keywords: adaptive neuro-fuzzy inference system (ANFIS), age hardening, aluminum alloy, metal matrix composite

Procedia PDF Downloads 122
230 2D Numerical Modeling of Ultrasonic Measurements in Concrete: Wave Propagation in a Multiple-Scattering Medium

Authors: T. Yu, L. Audibert, J. F. Chaix, D. Komatitsch, V. Garnier, J. M. Henault

Abstract:

Linear ultrasonic techniques play a major role in non-destructive evaluation (NDE) of civil engineering structures in concrete, since they can meet operational requirements. The interpretation of ultrasonic measurements could be improved by a better understanding of ultrasonic wave propagation in a multiple-scattering medium. This work aims to develop a 2D numerical model of ultrasonic wave propagation in a heterogeneous medium such as concrete, integrating multiple-scattering phenomena, in the SPECFEM software. The coherent field of multiple scattering is obtained by averaging numerical wave fields, and it is used to determine the effective phase velocity and attenuation corresponding to an equivalent homogeneous medium. First, the model is applied to a single scattering element (a cylinder) in a homogeneous linear-elastic medium and validated by comparison with the analytical solution. Then, several cases of multiple scattering by a set of randomly located cylinders or polygons are simulated to perform parametric studies on the influence of frequency and of scatterer size, concentration and shape. The effective properties are also compared with the predictions of the Waterman-Truell model to verify its validity. Finally, the viscoelastic behavior of the mortar is introduced into the simulation in order to account for the dispersion and attenuation due to the porosity of the cement paste. In the future, further steps will be developed: comparisons with experimental results, the interpretation of NDE measurements, and the optimization of NDE parameters prior to auscultation.

Keywords: attenuation, multiple-scattering medium, numerical modeling, phase velocity, ultrasonic measurements

Procedia PDF Downloads 234
229 Breaking the Stained-Glass Ceiling: Personality Traits and Ambivalent Sexism in Shaping Gender Income Equality

Authors: Shiza Shahid, Saba Shahid, Kenji Noguchi, Raegan Bishop, Elena Stepanova

Abstract:

According to data from the U.S. Census Bureau, in 2020, women in the United States who worked full-time, year-round earned only 82 cents for every dollar earned by men who worked full-time, year-round. This study examined how personality traits (extraversion, agreeableness, conscientiousness, emotional stability, openness to experience) interact with ambivalent sexism to influence the acceptance of gender income inequality. Using a quantitative approach, the study collected data from a sample of N = 150 students from the Social Science Online Subject Pool (SONA). The study predicted that (a) extraversion and openness to experience would be positively related to acceptance of gender income inequality, while emotional stability and agreeableness would be negatively related to it, and (b) individuals who scored higher on measures of hostile sexism would show greater acceptance of gender income inequality than individuals who scored higher on measures of benevolent sexism. The results are reported against these predictions. This study underscores the importance of addressing the underlying factors contributing to attitudes towards gender income inequality and contributes to ongoing efforts to achieve gender equality, which is important for promoting economic well-being.

Keywords: gender income inequality, ambivalent sexism, personality traits, sustainable development goals

Procedia PDF Downloads 32
228 Performance Comparison of Different Regression Methods for a Polymerization Process with Adaptive Sampling

Authors: Florin Leon, Silvia Curteanu

Abstract:

Developing complete mechanistic models for polymerization reactors is not easy, because complex reactions occur simultaneously, a large number of kinetic parameters is involved, and the chemical and physical phenomena for mixtures involving polymers are sometimes poorly understood. To overcome these difficulties, empirical models based on sampled data can be used instead, namely regression methods typical of the machine learning field. They have the ability to learn the trends of a process without any knowledge of its particular physical and chemical laws, and are therefore useful for modeling complex processes, such as the free radical polymerization of methyl methacrylate achieved in a batch bulk process. The goal is to generate accurate predictions of monomer conversion, number-average molecular weight and weight-average molecular weight; this process is associated with non-linear gel and glass effects. For this purpose, an adaptive sampling technique is presented, which can select more samples around the regions where the values have a higher variation. Several machine learning methods are used for the modeling and their performance is compared: support vector machines, k-nearest neighbor and random forest, as well as an original algorithm, large margin nearest neighbor regression. The suggested method provides very good results compared to the other well-known regression algorithms.
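The adaptive sampling idea above, placing more samples where the response varies rapidly (as it does near the gel effect), can be sketched as a simple midpoint-refinement loop; the sigmoidal toy response and all thresholds below are assumptions, not the paper's actual polymerization model:

```python
import math

def adaptive_sample(f, lo, hi, n_init=8, threshold=0.2, max_points=200):
    """Start from a uniform grid, then insert midpoints wherever the response
    changes sharply between neighbours, densifying high-variation regions."""
    xs = [lo + i * (hi - lo) / (n_init - 1) for i in range(n_init)]
    changed = True
    while changed and len(xs) < max_points:
        changed = False
        new_xs = [xs[0]]
        for a, b in zip(xs, xs[1:]):
            if abs(f(b) - f(a)) > threshold:  # steep segment: refine it
                new_xs.append((a + b) / 2)
                changed = True
            new_xs.append(b)
        xs = new_xs
    return xs

# Toy response with a sharp, gel-effect-like transition near x = 0.7.
response = lambda x: 1.0 / (1.0 + math.exp(-40.0 * (x - 0.7)))
points = adaptive_sample(response, 0.0, 1.0)

near = sum(1 for x in points if 0.6 <= x <= 0.8)  # dense around the transition
far = sum(1 for x in points if x < 0.4)           # sparse where the curve is flat
```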

Keywords: batch bulk methyl methacrylate polymerization, adaptive sampling, machine learning, large margin nearest neighbor regression

Procedia PDF Downloads 274
227 Comparative and Combined Toxicity of NiO and Mn₃O₄ Nanoparticles as Assessed in vitro and in vivo

Authors: Ilzira A. Minigalieva, Tatiana V. Bushueva, Eleonore Frohlich, Vladimir Panov, Ekaterina Shishkina, Boris A. Katsnelson

Abstract:

Background: The overwhelming majority of experimental studies in the field of metal nanotoxicology have been performed on cultures of established cell lines, with very few researchers focusing on animal experiments, and a juxtaposition of the conclusions inferred from these two types of research is blatantly lacking. The least studied aspect of this problem relates to characterizing and predicting the combined toxicity of metallic nanoparticles. Methods: The comparative and combined toxic effects of purposefully prepared spherical NiO and Mn₃O₄ nanoparticles (mean diameters 16.7 ± 8.2 nm and 18.4 ± 5.4 nm, respectively) were estimated on cultures of human cell lines: MRC-5 fibroblasts, THP-1 monocytes and SH-SY5Y neuroblastoma cells, as well as on the latter two lines differentiated to macrophages and neurons, respectively. The combined cytotoxicity was mathematically modeled using the response surface methodology. Results: The comparative assessment of the studied NPs' non-specific toxicity previously obtained in vivo was satisfactorily reproduced by the present in vitro tests. However, with respect to the manganese-specific brain damage demonstrated in our animal experiment with the same NPs, testing on the neuronal cell culture showed only a certain enhancing effect of Mn₃O₄-NPs on the toxic action of NiO-NPs, while the role of the latter prevailed. Conclusion: From the point of view of preventive toxicology, the experimental modeling of the combined toxicity of metallic NPs on cell cultures can give unreliable predictions of in vivo effects.

Keywords: manganese oxide, nickel oxide, nanoparticles, in vitro toxicity

Procedia PDF Downloads 270
226 Use of Front-Face Fluorescence Spectroscopy and Multiway Analysis for the Prediction of Olive Oil Quality Features

Authors: Omar Dib, Rita Yaacoub, Luc Eveleigh, Nathalie Locquet, Hussein Dib, Ali Bassal, Christophe B. Y. Cordella

Abstract:

The potential of front-face fluorescence coupled with chemometric techniques, namely parallel factor analysis (PARAFAC) and multiple linear regression (MLR), as a rapid analysis tool to characterize Lebanese virgin olive oils was investigated. Fluorescence fingerprints were acquired directly on 102 Lebanese virgin olive oil samples in the range of 280-540 nm in excitation and 280-700 nm in emission. A PARAFAC model with seven components was considered optimal, with a fit of 99.64% and a core consistency value of 78.65. The model revealed seven main fluorescence profiles in olive oil, mainly associated with tocopherols, polyphenols, chlorophyllic compounds and oxidation/hydrolysis products. Twenty-three MLR models based on the PARAFAC scores were generated, the majority of which showed a good correlation coefficient (R > 0.7 for 12 predicted variables) and thus satisfactory prediction performance. Acid value, peroxide value and Delta K had the models with the highest predictive ability, with R values of 0.89, 0.84 and 0.81, respectively. Among fatty acids, linoleic and oleic acids were also highly predicted, with R values of 0.8 and 0.76, respectively. The factors contributing to the models' construction were related to common fluorophores found in olive oil, mainly chlorophyll, polyphenols and oxidation products. This study demonstrates the interest of front-face fluorescence as a promising tool for the quality control of Lebanese virgin olive oils.
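The correlation coefficient R used above to judge each MLR model is the Pearson correlation between predicted and measured values; a minimal sketch (the sample values below are hypothetical, for illustration only):

```python
import math

def pearson_r(x, y):
    """Pearson correlation coefficient between two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Hypothetical measured vs. MLR-predicted peroxide values.
measured = [8.2, 10.1, 12.4, 9.0, 15.3, 11.7]
predicted = [8.5, 9.8, 12.9, 9.4, 14.6, 11.2]
r = pearson_r(measured, predicted)
```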

Keywords: front-face fluorescence, Lebanese virgin olive oils, multiple linear regression, PARAFAC analysis

Procedia PDF Downloads 429
225 A Prediction Model Using the Price Cyclicality Function Optimized for Algorithmic Trading in Financial Market

Authors: Cristian Păuna

Abstract:

Since the widespread adoption of electronic trading, automated trading systems have become a significant part of the business intelligence of any modern financial investment company. A substantial share of trades is now made completely automatically by computers using mathematical algorithms: trading decisions are taken almost instantly by logical models, and the orders are sent by low-latency automatic systems. This paper presents a real-time price prediction methodology designed especially for algorithmic trading. Based on the price cyclicality function, the methodology generates price cyclicality bands to predict the optimal levels for entries and exits. In order to automate the trading decisions, the cyclicality bands generate automated trading signals. We have found that the model can be used with good results to predict changes in market behavior; using these predictions, the model can automatically adapt the trading signals in real time to maximize the trading results. The paper presents the methodology to optimize and implement this model in automated trading systems. Tests show that this methodology can be applied with good efficiency in different timeframes. Real trading results are also displayed and analyzed in order to qualify the methodology and to compare it with other models. In conclusion, the price prediction model using the price cyclicality function is found to be a reliable trading methodology for algorithmic trading in the financial market.
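The price cyclicality function itself is not given in the abstract; as a generic stand-in (not the paper's method), oscillator-style bands built from a rolling mean and standard deviation illustrate how band crossings can be turned into entry/exit signals:

```python
import math

def cyclicality_bands(prices, window=20, k=2.0):
    """Generic bands: rolling mean +/- k rolling standard deviations.
    A stand-in for the paper's price cyclicality function, not its formula."""
    bands = []
    for i in range(window - 1, len(prices)):
        win = prices[i - window + 1 : i + 1]
        mean = sum(win) / window
        std = math.sqrt(sum((p - mean) ** 2 for p in win) / window)
        bands.append((mean - k * std, mean, mean + k * std))
    return bands

# Synthetic cyclical price series; a signal rule might buy on a cross
# below the lower band and sell on a cross above the upper band.
prices = [100.0 + 5.0 * math.sin(i / 5.0) for i in range(100)]
bands = cyclicality_bands(prices)
lower, mid, upper = bands[-1]
```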

Keywords: algorithmic trading, automated trading systems, financial markets, high-frequency trading, price prediction

Procedia PDF Downloads 156
224 Effect of Mach Number on Gust-Airfoil Interaction Noise

Authors: ShuJiang Jiang

Abstract:

The interaction of turbulence with an airfoil is an important noise source in many engineering fields, including helicopters, turbofans, and contra-rotating open rotor engines, where turbulence generated in the wake of upstream blades interacts with the leading edge of downstream blades and produces aerodynamic noise. One approach to studying turbulence-airfoil interaction noise is to model the oncoming turbulence as harmonic gusts. A compact noise source produces a dipole-like sound directivity pattern. However, when the acoustic wavelength is much smaller than the airfoil chord length, the airfoil needs to be treated as a non-compact source, and the gust-airfoil interaction becomes more complicated, with multiple lobes generated in the radiated sound directivity. Capturing the short acoustic wavelength is a challenge for numerical simulations. In this work, simulations are performed for gust-airfoil interaction at different Mach numbers, using a high-fidelity direct Computational AeroAcoustics (CAA) approach based on a spectral/hp element method, verified against a CAA benchmark case. It is found that the squared sound pressure varies approximately as the 5th power of the Mach number, changing slightly with the observer location. This scaling law gives a better sound prediction than the flat-plate theory for thicker airfoils. In addition, another prediction method, based on the flat-plate theory and CAA simulation, is proposed that gives better predictions than the scaling law for thicker airfoils.
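The reported 5th-power scaling of squared sound pressure implies a level change of 10·5·log₁₀(M₂/M₁) dB between two Mach numbers; a one-line sketch (the Mach values are illustrative):

```python
import math

def delta_spl_db(m1, m2, exponent=5):
    """Change in sound pressure level when p^2 scales as Mach^exponent."""
    return 10.0 * exponent * math.log10(m2 / m1)

# Under the reported 5th-power law, doubling the Mach number raises
# the level by 50 * log10(2), i.e. about 15 dB.
increase = delta_spl_db(0.3, 0.6)
```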

Keywords: aeroacoustics, gust-airfoil interaction, CFD, CAA

Procedia PDF Downloads 52
223 ISMARA: Completely Automated Inference of Gene Regulatory Networks from High-Throughput Data

Authors: Piotr J. Balwierz, Mikhail Pachkov, Phil Arnold, Andreas J. Gruber, Mihaela Zavolan, Erik van Nimwegen

Abstract:

Understanding the key players and interactions in the regulatory networks that control gene expression and chromatin state across different cell types and tissues in metazoans remains one of the central challenges in systems biology. Our laboratory has pioneered a number of methods for automatically inferring core gene regulatory networks directly from high-throughput data by modeling gene expression (RNA-seq) and chromatin state (ChIP-seq) measurements in terms of genome-wide computational predictions of regulatory sites for hundreds of transcription factors and micro-RNAs. These methods have now been completely automated in an integrated webserver called ISMARA that allows researchers to analyze their own data by simply uploading RNA-seq or ChIP-seq data sets, and that provides results in an integrated web interface as well as in downloadable flat files. For any data set, ISMARA infers the key regulators in the system, their activities across the input samples, the genes and pathways they target, and the core interactions between the regulators. We believe that by empowering experimental researchers to apply cutting-edge computational systems biology tools to their data in a completely automated manner, ISMARA can play an important role in developing our understanding of regulatory networks across metazoans.

Keywords: gene expression analysis, high-throughput sequencing analysis, transcription factor activity, transcription regulation

Procedia PDF Downloads 34
222 Substantial Fatigue Similarity of a New Small-Scale Test Rig to Actual Wheel-Rail System

Authors: Meysam Naeimi, Zili Li, Roumen Petrov, Rolf Dollevoet, Jilt Sietsma, Jun Wu

Abstract:

The substantial similarity of the fatigue mechanism in a new test rig for rolling contact fatigue (RCF) has been investigated. A new reduced-scale test rig is designed to perform controlled RCF tests on wheel-rail materials. The fatigue mechanism of the rig is evaluated in this study using a combined finite element-fatigue prediction approach. The influence of loading conditions on fatigue crack initiation is studied, and the effects of artificial defects (squat-shaped) on fatigue lives are examined. To simulate the vehicle-track interaction by means of the test rig, a three-dimensional finite element (FE) model is built. The nonlinear material behaviour of the rail steel is modelled in the contact interface. The results of the FE simulations are combined with the critical plane concept to determine the material points with the greatest likelihood of fatigue failure. Based on the stress-strain responses, fatigue life analysis is carried out by employing previously postulated criteria for fatigue crack initiation (plastic shakedown and ratchetting). Results are reported for various loading conditions and defect sizes. Afterwards, the cyclic mechanism of the test rig is evaluated from the operational viewpoint, and the fatigue life predictions are compared with the number of cycles expected from the rig's cyclic nature. Finally, the estimated duration of the experiments until fatigue crack initiation is roughly determined.

Keywords: fatigue, test rig, crack initiation, life, rail, squats

Procedia PDF Downloads 491
221 A Grey-Box Text Attack Framework Using Explainable AI

Authors: Esther Chiramal, Kelvin Soh Boon Kai

Abstract:

Explainable AI is a strong strategy implemented to understand complex black-box model predictions in a human-interpretable language. It provides the evidence required to deploy trustworthy and reliable AI systems. On the other hand, however, it also opens the door to locating possible vulnerabilities in an AI model. Traditional adversarial text attacks use word substitution, data augmentation techniques, and gradient-based attacks on powerful pre-trained Bidirectional Encoder Representations from Transformers (BERT) variants to generate adversarial sentences. These attacks are generally white-box in nature and not practical, as they can be easily detected by humans, e.g., changing the word “poor” to “rich”. We propose a simple yet effective grey-box cum black-box approach that does not require knowledge of the model, using a set of surrogate Transformer/BERT models to perform the attack with Explainable AI techniques. As Transformers are the current state-of-the-art models for almost all Natural Language Processing (NLP) tasks, an attack generated from BERT1 is transferable to BERT2. This transferability is made possible by the attention mechanism in the transformer, which allows the model to capture long-range dependencies in a sequence. Using the power of BERT generalisation via attention, we attempt to exploit how transformers learn by attacking a few surrogate transformer variants, each based on a different architecture. We demonstrate that this approach is highly effective at generating semantically sound sentences by changing as little as one word, in a way that is not detectable by humans, while still fooling other BERT models.
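The attack strategy, using explainability to find the most influential word and then substituting it, can be sketched with a toy bag-of-words scorer in place of the surrogate BERT models. The "model", its weights, and the synonym table below are all invented; leave-one-out scoring stands in for the Explainable AI attribution step.

```python
# Hedged sketch of an explainability-guided word-substitution attack.
# A toy keyword-weight scorer replaces the surrogate transformer models.

WEIGHTS = {"great": 2.0, "poor": -2.0, "plot": 0.1, "film": 0.0}
SYNONYMS = {"great": "decent", "poor": "lacklustre"}

def score(tokens):
    return sum(WEIGHTS.get(t, 0.0) for t in tokens)

def word_importance(tokens):
    # Leave-one-out attribution: a word's importance is how much the
    # score changes when it is removed (a crude stand-in for SHAP/LIME).
    base = score(tokens)
    return {i: base - score(tokens[:i] + tokens[i + 1:])
            for i in range(len(tokens))}

def attack(tokens):
    # Substitute only the single most influential word with a synonym.
    imp = word_importance(tokens)
    target = max(imp, key=lambda i: abs(imp[i]))
    out = list(tokens)
    out[target] = SYNONYMS.get(out[target], out[target])
    return out

original = ["great", "film", "plot"]
adversarial = attack(original)
print(original, score(original))        # strongly positive
print(adversarial, score(adversarial))  # much weaker positive signal
```

The one-word edit preserves the sentence's surface meaning for a human reader while substantially changing the model's score, which is the transferable weakness the framework exploits against real BERT variants.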

Keywords: BERT, explainable AI, Grey-box text attack, transformer

Procedia PDF Downloads 114
220 Feature Analysis of Predictive Maintenance Models

Authors: Zhaoan Wang

Abstract:

Research in predictive maintenance modeling has improved in recent years to predict failures and needed maintenance with high accuracy, saving cost and improving manufacturing efficiency. However, classic prediction models provide little valuable insight towards the most important features contributing to the failure. By analyzing and quantifying feature importance in predictive maintenance models, cost saving can be optimized based on business goals. First, multiple classifiers are evaluated with cross-validation to predict the multi-class of failures. Second, predictive performance with features provided by different feature selection algorithms is further analyzed. Third, features selected by different algorithms are ranked and combined based on their predictive power. Finally, the linear explainer SHAP (SHapley Additive exPlanations) is applied to interpret classifier behavior and provide further insight towards the specific roles of features in both local predictions and global model behavior. The results of the experiments suggest that certain features play dominant roles in predictive models while others have significantly less impact on the overall performance. Moreover, for multi-class prediction of machine failures, the most important features vary with the type of machine failure. The results may lead to improved productivity and cost saving by prioritizing sensor deployment, data collection, and data processing of more important features over less important features.
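The SHAP attributions used above are Shapley values from cooperative game theory. For a handful of features they can be computed exactly by enumerating feature orderings, which is a useful way to see what the library computes. The toy model and inputs below are illustrative, not the study's maintenance data; the real SHAP library uses fast approximations instead of this brute force.

```python
import itertools

# Exact Shapley values by enumerating feature orderings -- a brute-force
# stand-in for the SHAP library, feasible only for a few features.

def model(x0, x1):
    # Toy model with an interaction term, so attributions are non-trivial.
    return 3 * x0 - x1 + 2 * x0 * x1

def value(subset, x, baseline=(0, 0)):
    # Model output with features outside `subset` held at the baseline.
    args = [x[i] if i in subset else baseline[i] for i in range(len(x))]
    return model(*args)

def shapley(x):
    n = len(x)
    phi = [0.0] * n
    perms = list(itertools.permutations(range(n)))
    for order in perms:
        seen = set()
        for i in order:
            before = value(seen, x)
            seen.add(i)
            phi[i] += value(seen, x) - before
    return [p / len(perms) for p in phi]

phi = shapley((1, 1))
print(phi)       # per-feature attributions
print(sum(phi))  # attributions sum to model(1,1) - model(0,0)
```

The additivity property checked in the last line (attributions sum to the gap between the prediction and the baseline prediction) is exactly what makes SHAP values interpretable as a decomposition of a single prediction.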

Keywords: automated supply chain, intelligent manufacturing, predictive maintenance, machine learning, feature engineering, model interpretation

Procedia PDF Downloads 106
219 The Friction of Oil-Contaminated Granular Soils: An Experimental Study

Authors: Miron A, Tadmor R, Pinkert S

Abstract:

Soil contamination is a pressing environmental concern, drawing considerable focus due to its adverse ecological and health outcomes, and the frequent occurrence of contamination incidents in recent years. The interaction between the oil pollutant and the host soil can alter the mechanical properties of the soil in a manner that can crucially affect engineering challenges associated with the stability of soil systems. The geotechnical investigation of contaminated soils has gained momentum since the Gulf War in the 1990s, when a massive amount of oil was spilled into the ocean. Over recent years, various types of soil contaminations have been studied to understand the impact of pollution type, uncovering the mechanical complexity that arises not just from the pollutant type but also from the properties of the host soil and the interplay between them. This complexity is associated with diametrically opposite effects in different soil types. For instance, while certain oils may enhance the frictional properties of cohesive soils, they can reduce the friction in granular soils. This striking difference can be attributed to the different mechanisms at play: physico-chemical interactions predominate in the former case, whereas lubrication effects are more significant in the latter. This study introduces an empirical law designed to quantify the mechanical effect of oil contamination in granular soils, factoring in the properties of both the contaminating oil and the host soil. The law is derived from comprehensive experimental research that spans a wide array of oil types and soils with unique configurations and morphologies. By integrating these diverse data points, our law facilitates accurate predictions of how oil contamination modifies the frictional characteristics of general granular soils.

Keywords: contaminated soils, lubrication, friction, granular media

Procedia PDF Downloads 28
218 Springback Prediction for Sheet Metal Cold Stamping Using Convolutional Neural Networks

Authors: Lei Zhu, Nan Li

Abstract:

Cold stamping has been widely applied in the automotive industry for the mass production of a great range of automotive panels. Predicting the springback to ensure the dimensional accuracy of the cold-stamped components is a critical step. The main approaches for the prediction and compensation of springback in cold stamping include running Finite Element (FE) simulations and conducting experiments, which require forming process expertise and can be time-consuming and expensive for the design of cold stamping tools. Machine learning technologies have been proven and successfully applied in learning complex system behaviours using representative samples. These technologies exhibit the promising potential to be used as supporting design tools for metal forming technologies. This study, for the first time, presents a novel application of a Convolutional Neural Network (CNN) based surrogate model to predict the springback fields for variable U-shape cold bending geometries. A dataset is created based on the U-shape cold bending geometries and the corresponding FE simulation results. The dataset is then applied to train the CNN surrogate model. The results show that the surrogate model can achieve near-indistinguishable full-field predictions in real-time when compared with the FE simulation results. The application of CNN in efficient springback prediction can be adopted in industrial settings to aid both conceptual and final component designs for designers without manufacturing knowledge.
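The building block underlying such a CNN surrogate is the 2D convolution (strictly, cross-correlation) that each layer applies to a gridded field. A minimal pure-Python sketch is below; the "springback field" values and the filter are invented, and a real surrogate would stack many learned filters in a deep learning framework.

```python
# One 2D convolution, the elementary operation of a CNN layer.
# Input grid and filter values are made up for illustration.

def conv2d(image, kernel):
    kh, kw = len(kernel), len(kernel[0])
    out_h = len(image) - kh + 1
    out_w = len(image[0]) - kw + 1
    # Slide the kernel over every valid position (no padding, stride 1).
    return [[sum(image[i + a][j + b] * kernel[a][b]
                 for a in range(kh) for b in range(kw))
             for j in range(out_w)]
            for i in range(out_h)]

field = [[0, 1, 2],
         [1, 2, 3],
         [2, 3, 4]]
edge = [[1, -1]]  # crude horizontal-gradient filter

print(conv2d(field, edge))  # -> [[-1, -1], [-1, -1], [-1, -1]]
```

Because the field increases by 1 per column everywhere, the gradient filter returns a constant response; in a trained surrogate, such filters learn to pick out the geometric features of the bend that drive springback.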

Keywords: springback, cold stamping, convolutional neural networks, machine learning

Procedia PDF Downloads 124
217 Conceptual Solution and Thermal Analysis of the Final Cooling Process of Biscuits in One Confectionary Factory in Serbia

Authors: Duško Salemović, Aleksandar Dedić, Matilda Lazić, Dragan Halas

Abstract:

The paper presents the conceptual solution for the final cooling of the chocolate dressing of biscuits in one confectionary factory in Serbia. The proposed concept solution was derived from the desired technological process of final cooling of biscuits and the required process parameters that were to be achieved, which were an integral part of the project task. The desired process parameters for achieving proper hardening and coating formation are the amount of heat exchanged per unit time between the two media (air and chocolate dressing), the speed of air inside the tunnel cooler, and the surface of all biscuits in contact with the air. These parameters were calculated in the paper. The final cooling of chocolate dressing on biscuits could be optimized by changing the process parameters and dimensions of the tunnel cooler and looking for the appropriate values for them. Accurate temperature predictions and fluid flow analysis could be conducted using heat balance and flow balance equations, bearing in mind the theory of similarity. Furthermore, some parameters were adopted from previous technology processes, such as the inlet temperature of biscuits and the input air temperature. A thermal calculation was carried out, and it was demonstrated that the percentage error between the contact surface of the air and the chocolate biscuit topping obtained from the heat balance and that obtained geometrically through the proposed conceptual solution does not exceed 0.67%, which is a very good agreement. This ensures the quality of the cooling process of the chocolate dressing applied to the biscuits and the hardness of the coating.
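The heat balance mentioned above equates the heat released by the cooling dressing with the heat absorbed by the air. A back-of-envelope version is sketched below; all numbers (flows, temperatures, specific heats) are invented for illustration and are not the factory's actual process parameters.

```python
# Hedged heat-balance sketch for sizing a tunnel cooler.
# Q = m_dot * c_p * (T_in - T_out); invented example values.

def heat_duty(mass_flow, cp, t_in, t_out):
    # Heat released per unit time, in watts.
    return mass_flow * cp * (t_in - t_out)

# Chocolate dressing (hypothetical 0.05 kg/s) cooled from 30 C to 20 C.
q_choc = heat_duty(mass_flow=0.05, cp=1600.0, t_in=30.0, t_out=20.0)

# Air mass flow needed to absorb that duty while warming from 8 C to 14 C.
cp_air = 1005.0  # J/(kg K)
m_air = q_choc / (cp_air * (14.0 - 8.0))

print(q_choc, "W of cooling duty")
print(round(m_air, 4), "kg/s of air required")
```

Balancing the two sides in this way is what ties the exchanged heat, the air speed, and the contact surface together as the three design parameters of the cooler.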

Keywords: chocolate dressing, air, cooling, heat balance

Procedia PDF Downloads 48
216 An Inspection of Two Layer Model of Agency: An fMRI Study

Authors: Keyvan Kashkouli Nejad, Motoaki Sugiura, Atsushi Sato, Takayuki Nozawa, Hyeonjeong Jeong, Sugiko Hanawa, Yuka Kotozaki, Ryuta Kawashima

Abstract:

The perception of agency/control can be altered by discrepancies in the environment or by mismatches between predictions (of possible results) and actual results. Synofzik et al. proposed a two-layer model of agency: in the first layer, the Feeling of Agency (FoA) is not directly available to awareness; a slight mismatch in the environment/outcome might cause alterations in FoA while the agent still feels in control. If the discrepancy passes a threshold, it becomes available to consciousness and alters the Judgment of Agency (JoA), which is directly available in the person’s awareness. Most experiments so far only investigate subjects’ rather conscious JoA, while FoA has been neglected. In this experiment we target FoA by using subliminal discrepancies that cannot be consciously detected by the subjects. We explore whether we can detect this two-level model in the subjects’ behavior and then try to map it onto their brain activity. To do this, in an fMRI study, we incorporated both consciously detectable mismatches between action and result and subliminal discrepancies in the environment. Also, unlike previous experiments, where subjective questions mainly trigger the rather conscious JoA, we tried to measure the rather implicit FoA by asking participants to rate their performance. We compared behavioral results and brain activation when there were conscious discrepancies and when there were subliminal discrepancies, against trials with no discrepancies and against each other. In line with our expectations, conditions with consciously detectable incongruencies triggered lower JoA ratings than conditions without. Also, conditions with any type of discrepancy had lower FoA ratings compared to conditions without. Additionally, we found that the TPJ and the angular gyrus in particular play a role in the coding of JoA and also FoA.

Keywords: agency, fMRI, TPJ, two layer model

Procedia PDF Downloads 446
215 Modelling Conceptual Quantities Using Support Vector Machines

Authors: Ka C. Lam, Oluwafunmibi S. Idowu

Abstract:

Uncertainty in cost is a major factor affecting performance of construction projects. To our knowledge, several conceptual cost models have been developed with varying degrees of accuracy. Incorporating conceptual quantities into conceptual cost models could improve the accuracy of early predesign cost estimates. Hence, the development of quantity models for estimating conceptual quantities of framed reinforced concrete structures using supervised machine learning is the aim of the current research. Using measured quantities of structural elements and design variables such as live loads and soil bearing pressures, response and predictor variables were defined and used for constructing conceptual quantities models. Twenty-four models were developed for comparison using a combination of non-parametric support vector regression, linear regression, and bootstrap resampling techniques. R programming language was used for data analysis and model implementation. Gross soil bearing pressure and gross floor loading were discovered to have a major influence on the quantities of concrete and reinforcement used for foundations. Building footprint and gross floor loading had a similar influence on beams and slabs. Future research could explore the modelling of other conceptual quantities for walls, finishes, and services using machine learning techniques. Estimation of conceptual quantities would assist construction planners in early resource planning and enable detailed performance evaluation of early cost predictions.
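The bootstrap resampling component of the modelling pipeline can be shown in a few lines: resample the observed quantities with replacement and read off an empirical confidence interval for the mean. The quantity values below are invented; the study pairs bootstrapping with support vector regression, which requires a dedicated library and is omitted from this sketch.

```python
import random

# Bootstrap sketch of the uncertainty in a conceptual quantity estimate
# (hypothetical concrete volume per m^2 of floor area).

random.seed(42)
sample = [0.31, 0.28, 0.35, 0.30, 0.33, 0.29, 0.32, 0.34]  # m^3 per m^2

def bootstrap_means(data, n_resamples=2000):
    means = []
    for _ in range(n_resamples):
        # Resample the data with replacement and record the mean.
        resample = random.choices(data, k=len(data))
        means.append(sum(resample) / len(resample))
    return sorted(means)

means = bootstrap_means(sample)
lo = means[round(0.025 * len(means))]
hi = means[round(0.975 * len(means)) - 1]
print(f"95% bootstrap interval for the mean: [{lo:.3f}, {hi:.3f}]")
```

An interval like this, rather than a single point estimate, is what lets a planner judge how much the conceptual quantity (and hence the early cost estimate) could plausibly vary.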

Keywords: bootstrapping, conceptual quantities, modelling, reinforced concrete, support vector regression

Procedia PDF Downloads 190
214 Corporate Voluntary Greenhouse Gas Emission Reporting in United Kingdom: Insights from Institutional and Upper Echelons Theories

Authors: Lyton Chithambo

Abstract:

This paper reports the results of an investigation into the extent to which various stakeholder pressures influence voluntary disclosure of greenhouse-gas (GHG) emissions in the United Kingdom (UK). The study, which is grounded in institutional theory, also borrows from the insights of upper echelons theory and examines whether specific managerial (chief executive officer) characteristics explain and moderate various stakeholder pressures in explaining GHG voluntary disclosure. Data were obtained from the 2011 annual and sustainability reports of a sample of 216 UK companies on the FTSE350 index listed on the London Stock Exchange. Generally, the results suggest that there is no substantial shareholder and employee pressure on a firm to disclose GHG information, but there is significant positive pressure from the market status of a firm, with firms holding more market share disclosing more GHG information. Consistent with the predictions of institutional theory, we found evidence that coercive pressure (i.e., regulatory pressure) and mimetic pressures in some industries, notably industrials and consumer services, have a significant positive influence on firms’ GHG disclosure decisions. In addition, creditor pressure had a significant negative relationship with GHG disclosure. While CEO age had a direct negative effect on GHG voluntary disclosure, its moderation of stakeholder pressure influence on GHG disclosure was only significant for regulatory pressure. The results have important implications for both policy makers and company boards seeking to rein in their GHG emissions.

Keywords: greenhouse gases, voluntary disclosure, upper echelons theory, institution theory

Procedia PDF Downloads 210
213 Spatial Interpolation of Aerosol Optical Depth Pollution: Comparison of Methods for the Development of Aerosol Distribution

Authors: Sahabeh Safarpour, Khiruddin Abdullah, Hwee San Lim, Mohsen Dadras

Abstract:

Air pollution is a growing problem arising from domestic heating, high density of vehicle traffic, electricity production, and expanding commercial and industrial activities, all increasing in parallel with urban population. Monitoring and forecasting of air quality parameters are important due to their health impact. One widely available metric of aerosol abundance is the aerosol optical depth (AOD). The AOD is the integrated light extinction coefficient over a vertical atmospheric column of unit cross section, which represents the extent to which the aerosols in that vertical profile prevent the transmission of light by absorption or scattering. Seasonal AOD values at 550 nm derived from the Moderate Resolution Imaging Spectroradiometer (MODIS) sensor onboard NASA’s Terra satellite, for the 10-year period 2000-2010, were used to test 7 different spatial interpolation methods in the present study. The accuracy of estimations was assessed through visual analysis as well as independent validation based on basic statistics, such as root mean square error (RMSE) and correlation coefficient. Based on the RMSE and R values of predictions made using measured values from 2000 to 2010, Radial Basis Functions (RBFs) yielded the best results for spring, summer, and winter, and ordinary kriging yielded the best results for fall.
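The RBF method that performed best in three of the four seasons interpolates by placing a kernel at every observation and solving for weights so that the interpolant passes exactly through the data. A stripped-down one-dimensional sketch is below (the study interpolates over two spatial dimensions); the station positions and AOD values are invented.

```python
import math

# Gaussian radial basis function interpolation in 1D, solved with naive
# Gaussian elimination. Data points are illustrative, not MODIS values.

def gauss_solve(A, b):
    # Gaussian elimination with partial pivoting on the augmented matrix.
    n = len(A)
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(M[r][c]))
        M[c], M[p] = M[p], M[c]
        for r in range(c + 1, n):
            f = M[r][c] / M[c][c]
            M[r] = [mr - f * mc for mr, mc in zip(M[r], M[c])]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c]
                              for c in range(r + 1, n))) / M[r][r]
    return x

def rbf_fit(xs, ys, eps=1.0):
    phi = lambda r: math.exp(-(eps * r) ** 2)   # Gaussian kernel
    A = [[phi(abs(xi - xj)) for xj in xs] for xi in xs]
    w = gauss_solve(A, ys)
    return lambda x: sum(wi * phi(abs(x - xi)) for wi, xi in zip(w, xs))

xs = [0.0, 1.0, 2.0, 3.0]
ys = [0.12, 0.35, 0.22, 0.40]   # e.g. seasonal AOD at four stations
f = rbf_fit(xs, ys)
print([round(f(x), 6) for x in xs])  # interpolant reproduces the data
```

Unlike kriging, this basic RBF form uses no statistical model of spatial correlation; the kernel width `eps` plays a role loosely analogous to the variogram range and must be chosen with care.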

Keywords: aerosol optical depth, MODIS, spatial interpolation techniques, Radial Basis Functions

Procedia PDF Downloads 381
212 Sorption Properties of Biological Waste for Lead Ions from Aqueous Solutions

Authors: Lucia Rozumová, Ivo Šafařík, Jana Seidlerová, Pavel Kůs

Abstract:

Biosorption by biological waste materials from the agriculture industry could be a cost-effective technique for removing metal ions from wastewater. The performance of new biosorbent systems, consisting of waste matrices magnetically modified by iron oxide nanoparticles, for the removal of lead ions from an aqueous solution was tested. The use of low-cost and eco-friendly adsorbents has been investigated as an ideal alternative to the current expensive methods. This article deals with the removal of metal ions from aqueous solutions by modified waste products - orange peels, sawdust, peanut husks, used tea leaves and ground coffee sediment. The waste materials were magnetically modified by suspending them in methanol and then adding ferrofluid (magnetic iron oxide nanoparticles). This modification process is expected to yield smart materials with new properties. The prepared material was characterized using scanning electron microscopy and a specific surface area and pore size analyzer. Studies were focused on the sorption and desorption properties. Changes in the iron content of the magnetically modified materials after treatment were also observed. The adsorption process was modelled using adsorption isotherms. The results show that the magnetically modified materials remain stable during dynamic sorption and desorption at high adsorbed amounts of lead ions. The results of this study indicate that these biological waste materials, as sorbents with new properties, are highly effective for the treatment of wastewater.
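One common functional form for the adsorption isotherms mentioned above is the Langmuir isotherm, q = q_max · K · C / (1 + K · C). The abstract does not state which isotherm was fitted, and the parameter values below are purely illustrative.

```python
# Langmuir isotherm evaluation -- a hedged example of isotherm modelling,
# not the paper's actual fitted model or parameters.

def langmuir(c, q_max, k):
    # c: equilibrium Pb(II) concentration (mg/L)
    # q_max: maximum sorption capacity (mg/g); k: affinity constant (L/mg)
    return q_max * k * c / (1 + k * c)

q_max, k = 80.0, 0.05  # hypothetical parameters
for c in (10, 50, 200, 1000):
    print(c, "mg/L ->", round(langmuir(c, q_max, k), 2), "mg/g")
# The curve saturates: q approaches q_max at high concentration.
```

Fitting q_max and K to measured sorption data is what allows the capacity of each waste sorbent for lead ions to be compared on a common basis.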

Keywords: biological waste, sorption, metal ions, ferrofluid

Procedia PDF Downloads 112
211 Study of Aging Behavior of Parallel-Series Connection Batteries

Authors: David Chao, John Lai, Alvin Wu, Carl Wang

Abstract:

For lithium-ion batteries with multiple cell configurations, some use scenarios can cause uneven aging of the cells within the battery because of uneven current distribution. Hence the focus of the study is to explore the aging effect(s) on batteries with different construction designs. In order to systematically study the influence of various factors in some key battery configurations, a detailed analysis of three key battery construction factors is conducted: (1) terminal position; (2) cell alignment matrix; and (3) interconnect resistance between cells. In this study, the 2S2P circuit has been set as a model multi-cell battery to set up different battery samples, and the aging behavior is studied through a cycling test to analyze the current distribution and recoverable capacity. According to the outcome of the aging tests, the key findings are: (I) different cell alignment matrices can have an impact on the cycle life of the battery; (II) symmetrical structure has been identified as a critical factor that can influence the battery cycle life, and unbalanced resistance can lead to inconsistent cell aging status; (III) the terminal position has been found to contribute to uneven current distribution, which can cause an accelerated battery aging effect; and (IV) an increase in internal connection resistance can actually extend cycle life; however, it is noteworthy that this increase in cycle life is accompanied by a decline in battery performance. In summary, these findings help to identify the key aging factors of multi-cell batteries and can be used to effectively improve the accuracy of battery capacity predictions.
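The mechanism behind findings (II) and (III), unequal branch resistance causing unequal current and hence unequal aging, follows from the basic current-divider relation for two parallel branches. The resistance and current values below are invented for illustration, not measurements from the test.

```python
# Current split between two parallel branches (e.g. the two parallel
# strings of a 2S2P pack). Values are illustrative only.

def branch_currents(i_total, r1, r2):
    # Parallel branches share current inversely to their resistance:
    # i1 / i2 = r2 / r1, with i1 + i2 = i_total.
    i1 = i_total * r2 / (r1 + r2)
    return i1, i_total - i1

# Equal branch resistance: even split, even aging.
print(branch_currents(10.0, 0.050, 0.050))   # (5.0, 5.0)

# Extra interconnect resistance in branch 1 (e.g. a longer path from the
# terminal): branch 2 carries more current and therefore ages faster.
print(branch_currents(10.0, 0.060, 0.050))
```

Even a 10 mΩ imbalance shifts nearly one ampere between the branches in this example, which is why terminal placement and interconnect resistance show up so clearly in the cycling results.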

Keywords: multiple cells battery, current distribution, battery aging, cell connection

Procedia PDF Downloads 50
210 Household Wealth and Portfolio Choice When Tail Events Are Salient

Authors: Carlson Murray, Ali Lazrak

Abstract:

Robust experimental evidence of systematic violations of expected utility (EU) establishes that individuals facing risk overweight utility from low-probability gains and losses when making choices. These findings motivated the development of models of preferences with probability weighting functions, such as rank-dependent utility (RDU). We solve for the optimal investing strategy of an RDU investor in a dynamic binomial setting, from which we derive implications for investing behavior. We show that relative to EU investors with constant relative risk aversion, commonly measured probability weighting functions produce optimal RDU terminal wealth with significant downside protection and upside exposure. We additionally find that, in contrast to EU investors, RDU investors optimally choose a portfolio that contains fair bets providing payoffs that can be interpreted as lottery outcomes or exposure to idiosyncratic returns. In a calibrated version of the model, we calculate that RDU investors would be willing to pay 5% of their initial wealth for the freedom to trade away from an optimal EU wealth allocation. The dynamic trading strategy that supports the optimal wealth allocation implies portfolio weights that are independent of initial wealth but requires a higher risky share after good stock-return histories. Optimal trading also implies the possibility of non-participation when historical returns are poor. Our model fills a gap in the literature by providing new quantitative and qualitative predictions that can be tested experimentally or using data on household wealth and portfolio choice.
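The probability-weighting mechanism at the heart of RDU can be made concrete for a discrete gamble: outcomes are ranked and each receives a decision weight equal to the increment of a weighted decumulative probability. The sketch below uses the Tversky-Kahneman weighting function with an illustrative gamma; the outcomes, probabilities, and parameter are not calibrated to the paper's model.

```python
# Rank-dependent utility of a discrete gamble with Tversky-Kahneman
# probability weighting. Parameter and gamble values are illustrative.

def tk_weight(p, gamma=0.65):
    # Inverse-S weighting: overweights small p, underweights large p.
    return p ** gamma / (p ** gamma + (1 - p) ** gamma) ** (1 / gamma)

def rdu(outcomes, probs, utility=lambda x: x, gamma=0.65):
    # Rank outcomes best to worst; each decision weight is the increment
    # of the weighted decumulative probability.
    ranked = sorted(zip(outcomes, probs), reverse=True)
    total, cum, prev_w = 0.0, 0.0, 0.0
    for x, p in ranked:
        cum = min(cum + p, 1.0)   # clamp against float drift
        w = tk_weight(cum, gamma)
        total += (w - prev_w) * utility(x)
        prev_w = w
    return total

# A 1% chance of 100, else 0: expected value is 1, but overweighting the
# small gain probability pushes the RDU value well above 1.
print(rdu([100.0, 0.0], [0.01, 0.99]))
```

This overvaluation of long-shot gains is exactly what generates the lottery-like fair bets and the upside exposure in the optimal RDU portfolio described above.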

Keywords: behavioral finance, probability weighting, portfolio choice

Procedia PDF Downloads 401