Search results for: damage prediction models
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 9966

9666 Analysis of Residents’ Travel Characteristics and Policy Improving Strategies

Authors: Zhenzhen Xu, Chunfu Shao, Shengyou Wang, Chunjiao Dong

Abstract:

To improve the satisfaction of residents' travel, this paper analyzes the characteristics and influencing factors of urban residents' travel behavior. First, a Multinomial Logit (MNL) model is built to analyze the characteristics of residents' travel behavior, reveal the influence of individual attributes, family attributes and travel characteristics on the choice of travel mode, and identify the significant factors. Suggestions for policy improvement are then put forward. Finally, Support Vector Machine (SVM) and Multi-Layer Perceptron (MLP) models are introduced to evaluate the policy effect. Futian Street in Futian District, Shenzhen City, is selected for the investigation. The results show that gender, age, education, income, number of cars owned, travel purpose, departure time, journey time, travel distance and trip frequency all have a significant influence on residents' choice of travel mode. Based on these results, two policy improvement suggestions are put forward, aimed at reducing travel times for public transportation and non-motorized modes, and the policy effect is evaluated. Before the policy evaluation, the prediction performance of the MNL, SVM and MLP models was assessed. After parameter optimization, the prediction accuracies of the three models were 72.80%, 71.42%, and 76.42%, respectively. The MLP model, which had the highest prediction accuracy, was selected to evaluate the effect of the policy improvements. The results showed that after implementation of the policies, the proportion of public transportation in Plan 1 and Plan 2 increased by 14.04% and 9.86%, respectively, while the proportion of private cars decreased by 3.47% and 2.54%, respectively. The share of car trips decreased markedly, while the share of public transport trips increased. The measures can therefore be considered to have a positive effect on promoting green travel and improving the satisfaction of urban residents, and can provide a reference for relevant departments when formulating transportation policies.
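
The model comparison described above can be illustrated with a short, hypothetical sketch: a multinomial logistic regression (standing in for the MNL model), an SVM and an MLP are trained on the same traveller attributes and compared by prediction accuracy. The feature names and the synthetic data below are illustrative assumptions, not the survey data used in the paper.

```python
# Sketch: comparing MNL-style, SVM and MLP classifiers for travel mode choice.
# The data are randomly generated stand-ins for the survey variables.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.svm import SVC
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)
n = 1000
# Hypothetical traveller attributes: gender, age, income band, cars owned, travel distance (km)
X = np.column_stack([
    rng.integers(0, 2, n),       # gender
    rng.integers(18, 70, n),     # age
    rng.integers(1, 6, n),       # income band
    rng.integers(0, 3, n),       # cars owned
    rng.uniform(0.5, 30.0, n),   # travel distance
])
y = rng.integers(0, 4, n)        # mode: 0 walk, 1 bike, 2 transit, 3 car (random placeholder)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

models = {
    "MNL (multinomial logit)": LogisticRegression(max_iter=1000),
    "SVM": SVC(kernel="rbf", C=1.0),
    "MLP": MLPClassifier(hidden_layer_sizes=(32, 16), max_iter=2000, random_state=0),
}
for name, model in models.items():
    model.fit(X_tr, y_tr)
    acc = accuracy_score(y_te, model.predict(X_te))
    print(f"{name}: accuracy = {acc:.2%}")
```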

Keywords: neural network, travel characteristics analysis, transportation choice, travel sharing rate, traffic resource allocation

Procedia PDF Downloads 106
9665 An ANN Approach for Detection and Localization of Fatigue Damage in Aircraft Structures

Authors: Reza Rezaeipour Honarmandzad

Abstract:

In this paper, we propose an ANN approach for the detection and localization of fatigue damage in aircraft structures. A network of piezoelectric transducers was used for Lamb-wave measurements in order to calculate damage indices. Data gathered by the sensors were fed to a neural network classifier. A set of neural network electors of different architectures cooperates to achieve consensus concerning the state of each monitored path. Sensed signal variations in the region of interest (ROI), detected by the networks at each path, were used to assess the state of the structure as well as to localize detected damage and to filter out ambient changes. The classifier has been extensively tested on large data sets acquired in tests of specimens with artificially introduced notches, as well as on the results of numerous fatigue experiments. The effect of the classifier structure and of the test data used for training on the results was evaluated.
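
As a rough illustration of the classification step (not the authors' ensemble of elector networks), the sketch below trains a small feed-forward ANN on Lamb-wave damage indices to label each monitored path as intact or damaged; the damage-index features and data are synthetic assumptions.

```python
# Sketch: ANN classifier mapping per-path Lamb-wave damage indices to a damage/no-damage label.
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report

rng = np.random.default_rng(1)
n_paths = 600
# Hypothetical damage indices per path (e.g. amplitude drop, phase shift, scattered energy)
healthy = rng.normal(loc=[0.05, 0.02, 0.1], scale=0.02, size=(n_paths // 2, 3))
damaged = rng.normal(loc=[0.25, 0.15, 0.4], scale=0.05, size=(n_paths // 2, 3))
X = np.vstack([healthy, damaged])
y = np.array([0] * (n_paths // 2) + [1] * (n_paths // 2))  # 0 = intact, 1 = damaged

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=1, stratify=y)
clf = MLPClassifier(hidden_layer_sizes=(16, 8), max_iter=2000, random_state=1)
clf.fit(X_tr, y_tr)
print(classification_report(y_te, clf.predict(X_te), target_names=["intact", "damaged"]))
```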

Keywords: ANN, fatigue damage, aircraft structures, piezoelectric transducers, lamb-wave measurements

Procedia PDF Downloads 386
9664 An Integrated Label Propagation Network for Structural Condition Assessment

Authors: Qingsong Xiong, Cheng Yuan, Qingzhao Kong, Haibei Xiong

Abstract:

Deep-learning-driven approaches based on vibration responses have attracted considerable attention for rapid structural condition assessment, while obtaining sufficient measured training data with corresponding labels is relatively costly and even inaccessible in practical engineering. This study proposes an integrated label propagation network for structural condition assessment, which is able to propagate labels from continuously generated measurements of the intact structure to unlabeled measurements of damage scenarios. The integrated network embeds damage-sensitive feature extraction by a deep autoencoder and pseudo-label propagation by optimized fuzzy clustering, and its architecture and mechanism are elaborated. With a sophisticated network design and specific strategies for improving performance, the present network extends the strengths of self-supervised representation learning, unsupervised fuzzy clustering and supervised classification into an integrated framework for assessing damage conditions. Both numerical simulations and full-scale laboratory shaking table tests of a two-story building structure were conducted to validate its capability of detecting post-earthquake damage. The identification accuracy of the present network was 0.95 in the numerical validations and 0.86 on average in the laboratory case studies. It should be noted that the whole training procedure of all models involved in the network strictly does not rely on any labeled data of damage scenarios but only on several samples of the intact structure, which indicates a significant advantage in model adaptability and feasible applicability in practice.
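
The label-propagation idea, reduced to a minimal sketch: features from measurements (standing in here for the autoencoder latent features) are available for many unlabeled samples and only a few labeled ones, and a graph-based semi-supervised learner spreads the labels. This uses scikit-learn's LabelSpreading as a generic stand-in for the optimized fuzzy-clustering propagation described in the abstract; all data are synthetic.

```python
# Sketch: semi-supervised label propagation from a few labeled samples to many unlabeled ones.
import numpy as np
from sklearn.datasets import make_blobs
from sklearn.semi_supervised import LabelSpreading

# Synthetic "damage-sensitive features": three clusters standing in for condition states.
X, y_true = make_blobs(n_samples=300, centers=3, cluster_std=1.0, random_state=2)

# Only a handful of samples keep their labels; the rest are marked -1 (unlabeled),
# as required by LabelSpreading.
y = np.full_like(y_true, fill_value=-1)
labeled_idx = np.random.default_rng(2).choice(len(y), size=9, replace=False)
y[labeled_idx] = y_true[labeled_idx]

model = LabelSpreading(kernel="rbf", gamma=0.5)
model.fit(X, y)
pseudo_labels = model.transduction_          # propagated (pseudo-)labels for every sample
accuracy = (pseudo_labels == y_true).mean()
print(f"agreement with ground truth: {accuracy:.2%}")
```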

Keywords: autoencoder, condition assessment, fuzzy clustering, label propagation

Procedia PDF Downloads 73
9663 Surface Roughness Analysis, Modelling and Prediction in Fused Deposition Modelling Additive Manufacturing Technology

Authors: Yusuf S. Dambatta, Ahmed A. D. Sarhan

Abstract:

Fused deposition modelling (FDM) is one of the most prominent rapid prototyping (RP) technologies and is used to efficiently fabricate CAD 3D geometric models. However, the process suffers from several drawbacks, among which is the surface quality of the manufactured RP parts. Hence, studies relating to improving surface roughness have been a key issue in the field of RP research. In this work, a technique for modelling surface roughness in FDM is presented. Using the experimentally measured surface roughness of FDM parts, an ANFIS prediction model was developed to estimate the surface roughness of FDM parts from the main critical process parameters that affect surface quality. The ANFIS model was validated and compared with experimental test results.

Keywords: surface roughness, fused deposition modelling (FDM), adaptive neuro fuzzy inference system (ANFIS), orientation

Procedia PDF Downloads 417
9662 A FE-Based Scheme for Computing Wave Interaction with Nonlinear Damage and Generation of Harmonics in Layered Composite Structures

Authors: R. K. Apalowo, D. Chronopoulos

Abstract:

A Finite Element (FE) based scheme is presented for quantifying guided wave interaction with Localised Nonlinear Structural Damage (LNSD) within structures of arbitrary layering and geometric complexity. The through-thickness mode shape of the structure is obtained through a wave and finite element (WFE) method. This is applied in a time domain FE simulation in order to generate time-harmonic excitation for a specific wave mode. Interaction of the wave with LNSD within the system is computed through an element activation and deactivation iteration. The scheme is validated against experimental measurements and a WFE-FE methodology for calculating wave interaction with damage. Case studies for guided wave interaction with crack and delamination are presented to verify the robustness of the proposed method in classifying and identifying damage.

Keywords: layered structures, nonlinear ultrasound, wave interaction with nonlinear damage, wave finite element, finite element

Procedia PDF Downloads 124
9661 Regeneration of Geological Models Using Support Vector Machine Assisted by Principal Component Analysis

Authors: H. Jung, N. Kim, B. Kang, J. Choe

Abstract:

History matching is a crucial procedure for predicting reservoir performance and making future decisions. However, it is difficult due to uncertainties in the initial reservoir models. Therefore, it is important to have reliable initial models for successful history matching of highly heterogeneous reservoirs such as channel reservoirs. In this paper, we propose a novel scheme for regenerating geological models using support vector machine (SVM) and principal component analysis (PCA). First, we perform PCA to identify the main geological characteristics of the models. Through this procedure, the permeability values of each model are transformed into new parameters by the principal components with eigenvalues of large magnitude. Second, the parameters are projected onto a two-dimensional plane by multi-dimensional scaling (MDS) based on Euclidean distances. Finally, we train an SVM classifier using the 20% of models that show the most similar or dissimilar well oil production rates (WOPR) relative to the true values (10% for each). The remaining 80% of models are then classified by the trained SVM, and we select the models on the side of low WOPR errors. One hundred channel reservoir models are initially generated by single normal equation simulation. By repeating the classification process, we can select models which have a geological trend similar to that of the true reservoir model. The average field of the selected models is utilized as a probability map for regeneration. Newly generated models can preserve correct channel features and exclude wrong geological properties while maintaining suitable uncertainty ranges. History matching with the initial models cannot provide trustworthy results, as it fails to find the correct geological features of the true model. However, history matching with the regenerated ensemble offers reliable characterization results by capturing the proper channel trend. Furthermore, it gives dependable prediction of future performance with reduced uncertainties. We propose a novel classification scheme which integrates PCA, MDS, and SVM for regenerating reservoir models. The scheme can easily sort out reliable models which have a channel trend similar to the reference in the lower-dimensional space.
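
The dimensionality-reduction and classification pipeline can be sketched as follows: permeability fields are compressed with PCA, projected to two dimensions with MDS, and an SVM trained on the models with the lowest and highest WOPR errors separates promising from poor candidates. The data, error values and thresholds are synthetic placeholders, not the channel-reservoir ensemble of the paper.

```python
# Sketch: PCA -> MDS -> SVM classification of an ensemble of reservoir models.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.manifold import MDS
from sklearn.svm import SVC

rng = np.random.default_rng(3)
n_models, n_cells = 100, 2500            # 100 models, 50x50 permeability grids (placeholder)
perm = rng.lognormal(mean=3.0, sigma=0.8, size=(n_models, n_cells))
wopr_error = rng.uniform(0.0, 1.0, n_models)   # placeholder WOPR mismatch per model

# 1) PCA to capture the main geological variability
scores = PCA(n_components=10, random_state=3).fit_transform(np.log(perm))

# 2) MDS projection onto a 2D plane based on Euclidean distances between PC scores
xy = MDS(n_components=2, random_state=3).fit_transform(scores)

# 3) Train the SVM on the 10% most similar and 10% most dissimilar models (by WOPR error)
order = np.argsort(wopr_error)
train_idx = np.concatenate([order[:10], order[-10:]])
labels = np.array([1] * 10 + [0] * 10)   # 1 = low error ("similar"), 0 = high error
svm = SVC(kernel="rbf").fit(xy[train_idx], labels)

# Classify the remaining 80% and keep those on the low-error side
rest = np.setdiff1d(np.arange(n_models), train_idx)
selected = rest[svm.predict(xy[rest]) == 1]
print(f"{len(selected)} models selected for regeneration")
```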

Keywords: history matching, principal component analysis, reservoir modelling, support vector machine

Procedia PDF Downloads 134
9660 Virtual Reality Based 3D Video Games and Speech-Lip Synchronization Superseding Algebraic Code Excited Linear Prediction

Authors: P. S. Jagadeesh Kumar, S. Meenakshi Sundaram, Wenli Hu, Yang Yung

Abstract:

In 3D video games, the scale of production is growing unceasingly, with a prominent emphasis on affordability in terms of budget. The automation of speech-lip synchronization is customarily onerous and has become a critical research subject in virtual reality based 3D video games. This paper presents one such automatic tool, focused specifically on the synchronization of speech and the lip movement of game characters. A robust and precise speech recognition module based on the Algebraic Code Excited Linear Prediction (ACELP) method is developed, which delivers lip-sync results. The ACELP algorithm builds on that used in code-excited linear prediction (CELP), but ACELP codebooks have an explicit algebraic structure imposed upon them. This affords a faster alternative to software implementations of lip-sync algorithms and thus improves quality-of-service factors at reduced production cost.

Keywords: algebraic code excited linear prediction, speech-lip synchronization, video games, virtual reality

Procedia PDF Downloads 440
9659 An Analytical Survey of Construction Changes: Gaps and Opportunities

Authors: Ehsan Eshtehardian, Saeed Khodaverdi

Abstract:

This paper surveys the studies on construction change and reveals some potential future works. A full-scale investigation of the change literature, including change definitions, types, causes and effects, and change management systems, is carried out to explore some of the coming change trends. The critical works in each section are picked out in order to deduce a true timeline of construction change research. The findings show that the leap from best practice guides in the late 1990s and generic process models in the early 2000s to very advanced modeling environments in the mid-2000s and early 2010s has left gaps, along with opportunities, for change researchers to develop more accessible and applicable models. Another finding is that there is a compelling similarity between change and risk prediction models. Therefore, integrating these two concepts, specifically from a proactive management point of view, may lead to a synergy and help project teams avoid rework. The findings also show that exploitation of cause-effect relationship models, in order to facilitate dispute resolution, seems to be an interesting field for future work.

Keywords: construction change, change management systems, dispute resolutions, change literature

Procedia PDF Downloads 271
9658 Price Prediction Line, Investment Signals and Limit Conditions Applied for the German Financial Market

Authors: Cristian Păuna

Abstract:

In the first decades of the 21st century, in the electronic trading environment, algorithmic capital investments became the primary tool for making a profit by speculation in financial markets. A significant number of traders, private and institutional investors, participate in the capital markets every day using automated algorithms. Autonomous trading software is today a considerable part of the business intelligence system of any modern financial activity. Trading decisions and orders are made automatically by computers using different mathematical models. This paper presents one of these models, called the Price Prediction Line. A mathematical algorithm is revealed for building a reliable trend line, which is the basis for limit conditions and automated investment signals and the core of a computerized investment system. The paper shows how to apply these tools to generate entry and exit investment signals, how to build limit conditions as a mathematical filter for investment opportunities, and the methodology to integrate all of these into automated investment software. The paper also presents trading results obtained for the leading German financial market index with the presented methods, in order to analyze and compare different automated investment algorithms. It was found that a specific mathematical algorithm can be optimized and integrated into an automated trading system with good and sustained results for the leading German market. Investment results are compared in order to qualify the presented model. In conclusion, a risk-to-reward ratio of 1:6.12 was obtained by applying the trigonometric method to the DAX Deutscher Aktienindex over a 24-month investment period. These results are superior to those obtained with other similar models, as this paper reveals. The general idea sustained by this paper is that the Price Prediction Line model presented is a reliable capital investment methodology that can be successfully applied to build an automated investment system with excellent results.
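
The abstract does not give the exact formula of the Price Prediction Line, so the sketch below only illustrates the general idea under stated assumptions: a trend line is fitted to recent closing prices by least squares, a tolerance band around it defines the limit conditions, and crossings of the band generate entry and exit signals. Prices are randomly generated; nothing here reproduces the author's algorithm or its DAX results.

```python
# Sketch: least-squares trend line with a tolerance band used for entry/exit signals.
import numpy as np

rng = np.random.default_rng(4)
closes = 15000 + np.cumsum(rng.normal(0, 30, 250))   # synthetic daily closes (placeholder)

t = np.arange(len(closes))
slope, intercept = np.polyfit(t, closes, 1)          # trend (prediction) line
trend = slope * t + intercept
band = 1.5 * np.std(closes - trend)                  # assumed tolerance defining limit conditions

signals = []
for i in range(1, len(closes)):
    was_below = closes[i - 1] < trend[i - 1] - band
    was_above = closes[i - 1] > trend[i - 1] + band
    if was_below and closes[i] >= trend[i] - band:
        signals.append((i, "entry"))   # price re-enters the band from below
    elif was_above and closes[i] <= trend[i] + band:
        signals.append((i, "exit"))    # price re-enters the band from above
print(signals[:5])
```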

Keywords: algorithmic trading, automated trading systems, high-frequency trading, DAX Deutscher Aktienindex

Procedia PDF Downloads 103
9657 The Use of Performance Indicators for Evaluating Models of Drying Jackfruit (Artocarpus heterophyllus L.): Page, Midilli, and Lewis

Authors: D. S. C. Soares, D. G. Costa, J. T. S., A. K. S. Abud, T. P. Nunes, A. M. Oliveira Júnior

Abstract:

Mathematical models of drying are used for the purpose of understanding the drying process in order to determine important parameters for the design and operation of the dryer. The jackfruit is a fruit with high consumption in the Northeast and high perishability, so it is necessary to apply techniques to extend its conservation in order to distribute it to regions with low consumption. This study aimed to analyse several mathematical models (Page, Lewis, and Midilli) and indicate the one that best fits the conditions of the convective drying process, using performance indicators associated with each model: accuracy factor (Af), bias factor (Bf), root mean square error (RMSE) and standard error of prediction (%SEP). Jackfruit drying was carried out in a convective tray dryer at a temperature of 50°C for 9 hours. The Midilli model was the most accurate, with Af: 1.39, Bf: 1.33, RMSE: 0.01%, and SEP: 5.34. However, the Midilli model is not appropriate for process control purposes because it requires four tuning parameters. With the performance indicators used in this paper, the Page model showed similar results with only two parameters. It is concluded that the best correlation between the experimental and estimated data is given by the Page model.
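
As an illustration of how such a thin-layer model is fitted and scored, the sketch below fits the Page model, MR = exp(-k·t^n), to a hypothetical moisture-ratio series with non-linear least squares and reports RMSE and a commonly used form of %SEP; the drying data are invented, not the jackfruit measurements.

```python
# Sketch: fitting the Page thin-layer drying model and computing RMSE and %SEP.
import numpy as np
from scipy.optimize import curve_fit

def page(t, k, n):
    """Page model: moisture ratio MR = exp(-k * t**n)."""
    return np.exp(-k * t**n)

# Hypothetical drying data: time in hours, dimensionless moisture ratio
t = np.linspace(0, 9, 10)
mr_obs = np.exp(-0.35 * t**1.1) + np.random.default_rng(5).normal(0, 0.01, t.size)

(k, n), _ = curve_fit(page, t, mr_obs, p0=(0.1, 1.0))
mr_pred = page(t, k, n)

rmse = np.sqrt(np.mean((mr_obs - mr_pred) ** 2))
sep = 100 * np.sqrt(np.sum((mr_obs - mr_pred) ** 2) / len(t)) / np.mean(mr_obs)  # % standard error of prediction
print(f"k = {k:.3f}, n = {n:.3f}, RMSE = {rmse:.4f}, %SEP = {sep:.2f}")
```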

Keywords: drying, models, jackfruit, biotechnology

Procedia PDF Downloads 349
9656 Thermomechanical Damage Modeling of F114 Carbon Steel

Authors: A. El Amri, M. El Yakhloufi Haddou, A. Khamlichi

Abstract:

Numerical simulation based on the Finite Element Method (FEM) is widely used in academic institutes and in industry. It is a useful tool for predicting many phenomena present in classical manufacturing forming processes, such as fracture. However, the results of such numerical models depend strongly on the parameters of the constitutive behavior model. The combined influence of thermal and mechanical loads causes damage. The temperature- and strain-rate-dependent material properties and their modelling are discussed. A Johnson-Cook damage model has been selected for the numerical simulations, and the finite element software ABAQUS 6.11 is used for the analysis. This model was introduced in order to give information concerning crack initiation under thermal and mechanical loads.
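
For reference, the sketch below evaluates the standard Johnson-Cook flow stress expression, sigma = (A + B*eps^n)(1 + C*ln(eps_dot*))(1 - T*^m), which typically accompanies the Johnson-Cook damage model in such simulations; the material constants used are illustrative placeholders, not the F114 steel parameters from the paper.

```python
# Sketch: Johnson-Cook flow stress as a function of strain, strain rate and temperature.
import numpy as np

def johnson_cook_stress(strain, strain_rate, temp,
                        A, B, n, C, m,
                        ref_rate=1.0, temp_room=293.0, temp_melt=1700.0):
    """sigma = (A + B*eps^n) * (1 + C*ln(eps_dot*)) * (1 - T*^m)."""
    hardening = A + B * strain**n
    rate_term = 1.0 + C * np.log(np.maximum(strain_rate / ref_rate, 1e-12))
    t_star = np.clip((temp - temp_room) / (temp_melt - temp_room), 0.0, 1.0)
    thermal = 1.0 - t_star**m
    return hardening * rate_term * thermal

# Placeholder constants (MPa) for a generic steel -- NOT the F114 values from the paper.
strain = np.linspace(0.0, 0.5, 6)
sigma = johnson_cook_stress(strain, strain_rate=100.0, temp=600.0,
                            A=350.0, B=275.0, n=0.36, C=0.022, m=1.0)
print(np.round(sigma, 1))
```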

Keywords: thermo-mechanical fatigue, failure, numerical simulation, fracture, damage

Procedia PDF Downloads 366
9655 Modeling and Simulation Methods Using MATLAB/Simulink

Authors: Jamuna Konda, Umamaheswara Reddy Karumuri, Sriramya Muthugi, Varun Pishati, Ravi Shakya,

Abstract:

This paper investigates the challenges involved in the mathematical modeling of plant simulation models, ensuring that the performance of the plant models is as close as possible to the real-time physical model. The paper includes the analysis performed and an investigation of different methods of modeling, design and development for the plant model. Issues which impact design time, model accuracy as a real-time model, and tool dependence are analyzed. The real-time hardware plant would be a combination of multiple physical models, and it is more challenging to test the complete system with all possible test scenarios. There is a possibility of failure or damage to the system due to unwanted test execution in real time.

Keywords: model based design (MBD), MATLAB, Simulink, stateflow, plant model, real time model, real-time workshop (RTW), target language compiler (TLC)

Procedia PDF Downloads 319
9654 Application Difference between Cox and Logistic Regression Models

Authors: Idrissa Kayijuka

Abstract:

The logistic regression and Cox regression (proportional hazards) models are currently employed in the analysis of prospective epidemiologic research into risk factors for chronic diseases, and a theoretical relationship between the two models has been studied. By definition, the Cox regression model, also called the Cox proportional hazards model, is a procedure used to model data regarding the time leading up to an event where censored cases exist. The logistic regression model, by contrast, is mostly applicable in cases where the independent variables consist of numerical as well as nominal values while the outcome variable is binary (dichotomous). Arguments and findings of many researchers have focused on overviews of the Cox and logistic regression models and their applications in different areas. In this work, the analysis is done on secondary data from an SPSS exercise data set on breast cancer with a sample size of 1,121 women, where the main objective is to show the application difference between the Cox and logistic regression models based on factors that cause women to die of breast cancer. Some of the analysis, i.e. on lymph node status, was done manually, and SPSS software was used to analyze the remaining data. This study found that there is an application difference between the Cox and logistic regression models: the Cox regression model is used when the data include follow-up time, whereas the logistic regression model analyzes data without follow-up time. They also have different measures of association: the hazard ratio and the odds ratio for the Cox and logistic regression models, respectively. A similarity between the two models is that both are applicable to the prediction of the outcome of a categorical variable, i.e. a variable that can accommodate only a restricted number of categories. In conclusion, the Cox regression model differs from logistic regression by assessing a rate instead of a proportion. Both models can be applied in many other studies since they are suitable methods for analyzing data, but the Cox regression model is the more recommended of the two.
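
The application difference can be demonstrated in code: the Cox model is fitted on follow-up time plus an event indicator and reports hazard ratios, while logistic regression is fitted on the event indicator alone and reports odds ratios. The miniature data frame below (follow-up time, lymph node status, event) is invented for illustration; it is not the SPSS breast cancer data set, and the lifelines package is assumed to be available.

```python
# Sketch: Cox proportional hazards (time-to-event) vs logistic regression (binary outcome).
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(6)
n = 200
df = pd.DataFrame({
    "lymph_nodes": rng.integers(0, 10, n),                 # positive lymph nodes (placeholder)
    "time": np.round(rng.exponential(60, n), 1) + 1.0,     # follow-up time in months
    "event": rng.integers(0, 2, n),                        # 1 = died, 0 = censored/alive
})

# Cox regression uses both follow-up time and event status -> hazard ratios
cph = CoxPHFitter()
cph.fit(df, duration_col="time", event_col="event")
print("Cox hazard ratios:\n", np.exp(cph.params_))

# Logistic regression ignores follow-up time -> odds ratios
logit = LogisticRegression().fit(df[["lymph_nodes"]], df["event"])
print("Logistic odds ratio (lymph_nodes):", float(np.exp(logit.coef_[0][0])))
```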

Keywords: logistic regression model, Cox regression model, survival analysis, hazard ratio

Procedia PDF Downloads 423
9653 Comparison of Wake Oscillator Models to Predict Vortex-Induced Vibration of Tall Chimneys

Authors: Saba Rahman, Arvind K. Jain, S. D. Bharti, T. K. Datta

Abstract:

The present study compares semi-empirical wake-oscillator models that are used to predict vortex-induced vibration of structures. These models include those proposed by Facchinetti, by Farshidian and Dolatabadi, and by Skop and Griffin. They combine a wake oscillator model resembling the Van der Pol oscillator with a single-degree-of-freedom (SDOF) oscillation model. In order to use these models for estimating the top displacement of chimneys, only the first vibration mode of the chimney is considered; the modal equation of the chimney constitutes the SDOF model. The equations of the wake oscillator model and the SDOF model are solved simultaneously using an iterative procedure. The empirical parameters used in the wake-oscillator models are estimated using a newly developed approach, and the response is compared with experimental data, with which it shows reasonable agreement. The ODE solver of MATLAB is used to carry out the iterative solution. For the comparative study, a tall concrete chimney of height 210 m has been chosen, with a base diameter of 28 m, a top diameter of 20 m, and a wall thickness of 0.3 m. The responses of the chimney are also determined using the linear model proposed by E. Simiu and the deterministic model given in the Eurocode. It is observed from the comparative study that the responses predicted by the Facchinetti model and the model proposed by Skop and Griffin are nearly the same, while the model proposed by Farshidian and Dolatabadi predicts a higher response. The linear model, which does not consider the aero-elastic phenomenon, gives a lower response than the non-linear models. Further, for large damping, the response predicted by the Eurocode compares relatively well with those of the non-linear models.
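
A minimal sketch of the type of coupled system solved in the study is given below: a Van der Pol wake variable q forced by the structural acceleration, coupled to a single-degree-of-freedom modal equation forced by the wake lift. The coupling form follows the commonly cited Facchinetti-type model, and all parameter values are illustrative assumptions rather than the 210 m chimney properties.

```python
# Sketch: Facchinetti-type coupled SDOF / Van der Pol wake-oscillator model solved with SciPy.
import numpy as np
from scipy.integrate import solve_ivp

# Illustrative parameters (NOT the chimney of the paper)
zeta, omega_s = 0.005, 2.0         # structural damping ratio and natural frequency (rad/s)
omega_f = 2.05                     # vortex-shedding (wake) frequency (rad/s)
eps, A_c, M_c = 0.3, 12.0, 0.0002  # Van der Pol parameter and coupling coefficients (assumed)

def rhs(t, state):
    x, xdot, q, qdot = state
    xddot = -2 * zeta * omega_s * xdot - omega_s**2 * x + M_c * omega_f**2 * q  # lift forcing
    qddot = -eps * omega_f * (q**2 - 1) * qdot - omega_f**2 * q + A_c * xddot   # wake forced by structure
    return [xdot, xddot, qdot, qddot]

sol = solve_ivp(rhs, (0, 200), [0.0, 0.0, 0.1, 0.0], max_step=0.01)
x = sol.y[0]
print(f"steady-state displacement amplitude ~ {np.abs(x[len(x)//2:]).max():.4f} (arbitrary units)")
```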

Keywords: chimney, deterministic model, van der pol, vortex-induced vibration

Procedia PDF Downloads 191
9652 In vitro Cytotoxic and Genotoxic Effects of Arsenic Trioxide on Human Keratinocytes

Authors: H. Bouaziz, M. Sefi, J. de Lapuente, M. Borras, N. Zeghal

Abstract:

Although arsenic trioxide has been the subject of toxicological research, in vitro cytotoxicity and genotoxicity studies using relevant cell models and uniform methodology are not well elucidated. Hence, the aim of the present study was to evaluate the cytotoxicity and genotoxicity induced by arsenic trioxide in human keratinocytes (HaCaT) using the MTT [3-(4,5-dimethylthiazol-2-yl)-2,5-diphenyltetrazolium bromide] and alkaline single cell gel electrophoresis (comet) assays, respectively. Human keratinocytes were treated with different doses of arsenic trioxide for 4 h prior to cytogenetic assessment. Data obtained from the MTT assay indicated that arsenic trioxide significantly reduced the viability of HaCaT cells in a dose-dependent manner, with an IC50 value of 34.18 ± 0.6 µM. Data generated from the comet assay also indicated a significant dose-dependent increase in DNA damage in HaCaT cells associated with arsenic trioxide exposure. We observed a significant increase in comet tail length and tail moment, showing evidence of arsenic trioxide-induced genotoxic damage in HaCaT cells. This study confirms that the comet assay is a sensitive and effective method to detect DNA damage caused by arsenic.
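
The IC50 reported from MTT data of this kind is typically obtained by fitting a sigmoidal dose-response curve to the viability measurements. The sketch below fits a four-parameter logistic model to invented viability data with SciPy; the concentrations and responses are placeholders, not the HaCaT measurements.

```python
# Sketch: estimating IC50 by fitting a four-parameter logistic dose-response curve.
import numpy as np
from scipy.optimize import curve_fit

def four_pl(conc, bottom, top, ic50, hill):
    """Four-parameter logistic: viability as a function of concentration."""
    return bottom + (top - bottom) / (1.0 + (conc / ic50) ** hill)

# Hypothetical arsenic trioxide concentrations (uM) and % viability
conc = np.array([1, 5, 10, 20, 40, 80, 160], dtype=float)
viability = np.array([98, 92, 80, 62, 45, 28, 15], dtype=float)

params, _ = curve_fit(four_pl, conc, viability, p0=(10, 100, 30, 1))
bottom, top, ic50, hill = params
print(f"estimated IC50 ~ {ic50:.1f} uM")
```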

Keywords: arsenic trioxide, cytotoxicity, genotoxicity, HaCaT

Procedia PDF Downloads 223
9651 Predicting Options Prices Using Machine Learning

Authors: Krishang Surapaneni

Abstract:

The goal of this project is to determine how to predict important aspects of options, including the ask price. We compare different machine learning models to identify the best model, and the best hyperparameters for that model, for this purpose and data set. Option pricing is a relatively new field, and it can be very complicated and intimidating, especially to inexperienced people, so we want to create a machine learning model that can predict important aspects of an option, which can aid future research. We tested multiple models and experimented with hyperparameter tuning, trying to find some of the best parameters for a machine learning model. We tested three different models: a random forest regressor, a linear regressor, and an MLP (multi-layer perceptron) regressor. The most important feature in this experiment is the ask price; this is what we were trying to predict. In the field of stock price prediction, there is a large potential for error, so we were unable to determine the accuracy of the models based on whether they predict the pricing perfectly. For this reason, we determined the accuracy of each model by finding the average percentage difference between the predicted and actual values, comparing the actual results in the testing data with the predictions made by the models. The linear regression model performed worst, with an average percentage error of 17.46%. The MLP regressor had an average percentage error of 11.45%, and the random forest regressor had an average percentage error of 7.42%.
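
The model comparison can be reproduced in outline as follows: the three regressors are trained on the same features and ranked by mean absolute percentage error against held-out ask prices. The feature names and data are synthetic assumptions, not the options data set used in the study.

```python
# Sketch: comparing linear, random forest and MLP regressors by mean absolute percentage error.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.linear_model import LinearRegression
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_absolute_percentage_error

rng = np.random.default_rng(7)
n = 2000
strike = rng.uniform(50, 150, n)
underlying = rng.uniform(50, 150, n)
days_to_expiry = rng.integers(1, 365, n)
implied_vol = rng.uniform(0.1, 0.8, n)
X = np.column_stack([strike, underlying, days_to_expiry, implied_vol])
# Placeholder "ask price" loosely related to moneyness and volatility
y = 5.0 + np.maximum(underlying - strike, 0) + implied_vol * np.sqrt(days_to_expiry) + rng.normal(0, 1, n)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=7)
models = {
    "linear regression": LinearRegression(),
    "random forest": RandomForestRegressor(n_estimators=200, random_state=7),
    "MLP": MLPRegressor(hidden_layer_sizes=(64, 32), max_iter=3000, random_state=7),
}
for name, model in models.items():
    model.fit(X_tr, y_tr)
    mape = mean_absolute_percentage_error(y_te, model.predict(X_te))
    print(f"{name}: average percentage error = {mape:.2%}")
```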

Keywords: finance, linear regression model, machine learning model, neural network, stock price

Procedia PDF Downloads 54
9650 Cirrhosis Mortality Prediction as Classification using Frequent Subgraph Mining

Authors: Abdolghani Ebrahimi, Diego Klabjan, Chenxi Ge, Daniela Ladner, Parker Stride

Abstract:

In this work, we use machine learning and novel data analysis techniques to predict the one-year mortality of cirrhotic patients. Data from 2,322 patients with liver cirrhosis were collected at a single medical center. Different machine learning models are applied to predict one-year mortality. A comprehensive feature space including demographic information, comorbidity, clinical procedures and laboratory tests is analyzed. A temporal pattern mining technique called Frequent Subgraph Mining (FSM) is used, and the Model for End-stage Liver Disease (MELD) prediction of mortality is used as a comparator. All of our models statistically significantly outperform the MELD-score model and show an average 10% improvement of the area under the curve (AUC). The FSM technique by itself does not improve the model significantly, but FSM together with a machine learning technique called an ensemble further improves model performance. With the abundance of data available in healthcare through electronic health records (EHR), existing predictive models can be refined to identify and treat patients at risk of higher mortality. However, due to the sparsity of the temporal information needed by FSM, the FSM model does not yield significant improvements. To the best of our knowledge, this is the first work to apply modern machine learning algorithms and data analysis methods to predicting the one-year mortality of cirrhotic patients and to build a model that predicts one-year mortality significantly more accurately than the MELD score. We have also tested the potential of FSM and provided a new perspective on the importance of clinical features.

Keywords: machine learning, liver cirrhosis, subgraph mining, supervised learning

Procedia PDF Downloads 108
9649 Prediction of CO2 Concentration in the Korea Train Express (KTX) Cabins

Authors: Yong-Il Lee, Do-Yeon Hwang, Won-Seog Jeong, Duckshin Park

Abstract:

Because high-speed trains rely on forced ventilation, it is important to control the ventilation, which manages various contaminants as well as temperature and humidity. High-speed train routes run as directly as possible to their destinations, and since there are many mountainous areas in Korea, the proportion of tunnels is higher than in other countries. The KTX HVAC system blocks off outdoor air when entering a tunnel, so the high tunnel ratio strongly affects ventilation in the KTX cabin, and accounting for the resulting reduction in air exchange is important in CO2 concentration prediction. To meet the recommended air quality standards for public transport vehicles, the CO2 concentration in the KTX cabin should be managed. In this study, the change in concentration was predicted by CO2 simulation for a route to be opened.

Keywords: CO2 prediction, KTX, ventilation, infrastructure and transportation engineering

Procedia PDF Downloads 509
9648 Quality of Life after Damage Control Laparotomy for Trauma

Authors: Noman Shahzad, Amyn Pardhan, Hasnain Zafar

Abstract:

Introduction: Though the short-term survival advantage of damage control laparotomy in the management of critically ill trauma patients is established, little is known about the long-term quality of life of these patients. The fascial closure rate after damage control laparotomy is reported to be 20-70 percent. Abdominal wall reconstruction in those who fail to achieve fascial closure is challenging and can potentially affect the quality of life of these patients. Methodology: We conducted a retrospective matched cohort study. Adult patients who underwent damage control laparotomy from Jan 2007 till Jun 2013 were identified through medical records. Patients who had concomitant disabling brain injury or limb injuries requiring amputation were excluded. An age-, gender- and presentation-time-matched non-exposure group of patients who underwent laparotomy for trauma without damage control was identified, one for each damage control laparotomy patient. Quality of life assessment was done via telephonic interview at least one year after the operation, using the Urdu version of the EuroQol Group quality of life (QOL) questionnaire EQ-5D, with permission. The Wilcoxon signed rank test was used to compare QOL scores, and the McNemar test was used to compare individual parameters of the QOL questionnaire. The study was approved by the institutional ethical review committee. Results: Out of 32 patients who underwent damage control laparotomy during the study period, 20 fulfilled the selection criteria, for whom 20 matched controls were selected. The median age of patients (IQR) was 33 (26-40) years. The fascial closure rate in the damage control laparotomy group was 40% (8/20). One third of those who did not achieve fascial closure (4/12) underwent abdominal wall reconstruction. The self-reported QOL score of damage control laparotomy patients was significantly worse than that of the non-damage control group (p = 0.032). There was no statistically significant difference between the two groups regarding individual QOL measures. Significantly more patients in the damage control group required the use of an abdominal binder, and more patients in the damage control group had to either change their job or had limitations in continuing their previous job. Our study was not adequately powered to detect the factors responsible for worse QOL in the damage control group. Conclusion: The quality of life of damage control patients is worse than that of age- and gender-matched patients who underwent trauma laparotomy without damage control. Adequately powered studies need to be conducted to explore the factors responsible for this finding and identify potential improvements.

Keywords: damage control laparotomy, laparostomy, quality of life

Procedia PDF Downloads 248
9647 Visualization of Corrosion at Plate-Like Structures Based on Ultrasonic Wave Propagation Images

Authors: Aoqi Zhang, Changgil Lee Lee, Seunghee Park

Abstract:

A non-contact, non-destructive technique using a laser-induced ultrasonic wave generation method was applied to visualize corrosion damage in aluminum alloy plate structures. The ultrasonic waves were generated by an Nd:YAG pulse laser, and a galvanometer-based laser scanner was used to scan a specific area of the target structure. At the same time, wave responses were measured by a piezoelectric sensor attached to the target structure. The visualization of structural damage was achieved by calculating logarithmic values of the root mean square (RMS) of the responses. The damage-sensitive feature was defined as the scattering characteristics of the waves that encounter corrosion damage. The corrosion damage was artificially formed using hydrochloric acid. To observe the effect of the location where the corrosion was formed, both sides of the plate were scanned over the same area, and the effects of the depth and size of the corrosion were also considered. The results indicated that the damage was successfully visualized in almost all cases, whether it was formed on the front or the back side. However, damage could not be clearly detected when the corrosion depth was shallow. In future work, a signal processing algorithm needs to be developed to visualize the damage more clearly by improving the signal-to-noise ratio.
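
The core of the visualization step, computing the logarithm of the RMS of the response at each scan point and rendering it as a map, can be sketched as follows; the simulated wavefield is a random placeholder rather than the laser-scanned measurements.

```python
# Sketch: building a damage map from log RMS values of ultrasonic responses on a scan grid.
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(8)
nx, ny, nt = 60, 60, 1024                 # scan grid and samples per waveform (placeholder)
signals = rng.normal(0, 0.01, (nx, ny, nt))

# Pretend a corroded patch scatters extra energy into responses near its location
signals[35:45, 20:30, :] += rng.normal(0, 0.05, (10, 10, nt))

rms = np.sqrt(np.mean(signals**2, axis=2))   # RMS per scan point
damage_map = np.log10(rms)                   # logarithmic RMS, as used for visualization

plt.imshow(damage_map.T, origin="lower", cmap="viridis")
plt.colorbar(label="log10 RMS")
plt.title("Sketch of a log-RMS damage map")
plt.savefig("damage_map.png", dpi=150)
```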

Keywords: non-destructive testing, corrosion, pulsed laser scanning, ultrasonic waves, plate structure

Procedia PDF Downloads 277
9646 Damage Detection in a Cantilever Beam under Different Excitation and Temperature Conditions

Authors: A. Kyprianou, A. Tjirkallis

Abstract:

Condition monitoring of structures in service is very important, as it provides information about the risk of damage development. One of the essential constituents of structural condition monitoring is the damage detection methodology. In the context of condition monitoring of in-service structures, a damage detection methodology analyses data obtained from the structure while it is in operation. Usually, this means that the data could be affected by operational and environmental conditions in a way that could mask the effects of possible damage, which, depending on the damage detection methodology, could lead either to false alarms or to missing existing damage. In this article, a damage detection methodology based on spatio-temporal continuous wavelet transform (SPT-CWT) analysis of a sequence of experimental time responses of a cantilever beam is proposed. The cantilever is subjected to white and pink noise excitation to simulate different operating conditions. In addition, in order to simulate changing environmental conditions, the cantilever is subjected to heating by a heat gun. The response of the cantilever beam is measured by a high-speed camera. Edges are extracted from the series of images of the beam response captured by the camera, and subsequent processing of the edges gives a series of time responses at 439 points on the beam. This sequence is then analyzed using the SPT-CWT to identify damage. The proposed algorithm was able to clearly identify damage under any condition when the structure was excited by a white noise force. In addition, in the case of white noise excitation, the analysis could also reveal the position of the heat gun when it was used to heat the structure. The analysis could distinguish between the different operating conditions, i.e. between responses due to white noise excitation and responses due to pink noise excitation. During pink noise excitation, although damage and changing temperature were identified, it was not possible to clearly separate the effect of damage from that of temperature. The methodology proposed in this article enables the separation of the damage effect from the effects of temperature and excitation in data obtained from measurements of a cantilever beam, and does not require information about the a priori state of the structure.

Keywords: spatiotemporal continuous wavelet transform, damage detection, data normalization, varying temperature

Procedia PDF Downloads 258
9645 Non-Destructive Prediction System Using near Infrared Spectroscopy for Crude Palm Oil

Authors: Siti Nurhidayah Naqiah Abdull Rani, Herlina Abdul Rahim

Abstract:

Near infrared (NIR) spectroscopy has long been of great interest in the food and agriculture industries, and the development of predictive models has facilitated the estimation process in recent years. In this research, 176 crude palm oil (CPO) samples acquired from Felda Johor Bulker Sdn Bhd were studied. A FOSS NIRSystem was used to take absorbance measurements from the samples. The wavelength range for the spectral measurements is 1600 nm to 1900 nm. A Partial Least Squares Regression (PLSR) prediction model with an optimal number of 50 principal components was implemented to study the relationship between the measured Free Fatty Acid (FFA) values and the measured spectral absorption. PLSR showed predictive ability for FFA values, with a correlation coefficient (R) of 0.9808 for the training set and 0.9684 for the testing set.
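
A minimal PLSR calibration of the kind described, mapping NIR absorbance spectra to FFA values and reporting the correlation coefficient on a held-out set, can be sketched with scikit-learn as below; the spectra and FFA values are randomly generated stand-ins for the FOSS measurements.

```python
# Sketch: PLS regression calibration of FFA content from NIR absorbance spectra.
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(9)
n_samples, n_wavelengths = 176, 300             # e.g. 1600-1900 nm digitized to 300 points
spectra = rng.normal(0.5, 0.05, (n_samples, n_wavelengths))
ffa = 2.0 + 5.0 * spectra[:, 120] + rng.normal(0, 0.1, n_samples)  # placeholder relationship

X_tr, X_te, y_tr, y_te = train_test_split(spectra, ffa, test_size=0.3, random_state=9)
pls = PLSRegression(n_components=10)            # component count would be tuned in practice
pls.fit(X_tr, y_tr)

r_train = np.corrcoef(y_tr, pls.predict(X_tr).ravel())[0, 1]
r_test = np.corrcoef(y_te, pls.predict(X_te).ravel())[0, 1]
print(f"R (training) = {r_train:.4f}, R (testing) = {r_test:.4f}")
```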

Keywords: palm oil, fatty acid, NIRS, PLSR

Procedia PDF Downloads 180
9644 Proposing an Architecture for Drug Response Prediction by Integrating Multiomics Data and Utilizing Graph Transformers

Authors: Nishank Raisinghani

Abstract:

Efficiently predicting drug response remains a challenge in the realm of drug discovery. To address this issue, we propose four model architectures that combine graphical representation with varying positions of multiheaded self-attention mechanisms. By leveraging two types of multi-omics data, transcriptomics and genomics, we create a comprehensive representation of target cells and enable drug response prediction in precision medicine. A majority of our architectures utilize multiple transformer models, one with a graph attention mechanism and the other with a multiheaded self-attention mechanism, to generate latent representations of both drug and omics data, respectively. Our model architectures apply an attention mechanism to both drug and multiomics data, with the goal of procuring more comprehensive latent representations. The latent representations are then concatenated and input into a fully connected network to predict the IC-50 score, a measure of cell drug response. We experiment with all four of these architectures and extract results from all of them. Our study greatly contributes to the future of drug discovery and precision medicine by looking to optimize the time and accuracy of drug response prediction.

Keywords: drug discovery, transformers, graph neural networks, multiomics

Procedia PDF Downloads 111
9643 Calibration Model of %Titratable Acidity (Citric Acid) for Intact Tomato by Transmittance SW-NIR Spectroscopy

Authors: K. Petcharaporn, S. Kumchoo

Abstract:

The acidity (citric acid) is one of the chemical contents that can indicate the internal quality and maturity index of tomato. The titratable acidity (%TA) can be predicted non-destructively using transmittance short-wavelength near-infrared (SW-NIR) spectroscopy in the wavelength range of 665-955 nm. A set of 167 tomato samples, divided into a training set of 117 samples and a test set of 50 samples, was used to establish a calibration model to predict and measure %TA by the partial least squares regression (PLSR) technique. The spectra were pretreated with MSC, which gave the optimal result for the calibration model (R = 0.92, RMSEC = 0.03%), and the model achieved high accuracy for %TA prediction on the test set (R = 0.81, RMSEP = 0.05%). The prediction results on the test set show that the transmittance SW-NIR spectroscopy technique can be used as a non-destructive method for %TA prediction of tomatoes.

Keywords: tomato, quality, prediction, transmittance, titratable acidity, citric acid

Procedia PDF Downloads 241
9642 A Review on Water Models of Surface Water Environment

Authors: Shahbaz G. Hassan

Abstract:

Water quality models are very important for predicting changes in surface water quality for environmental management. The aim of this paper is to give an overview of water quality models and to provide directions for selecting models in specific situations. Water quality models include one kind of model based on a mechanistic approach, while other models simulate water quality without considering a mechanism. Mechanistic models can be widely applied and have capabilities for long-time simulation, but with high complexity; therefore, more space is provided to explain the principles and application experience of mechanistic models. Mechanistic models make certain assumptions about rivers, lakes and estuaries, which limit the application range of the models; this paper introduces the principles and applications of water quality models for these three scenarios. On the other hand, empirical models are easier to compute and are not limited by geographical conditions, but they cannot be used with confidence to simulate long-term changes. This paper divides the empirical models into two broad categories according to the difference in mathematical algorithm: models based on artificial intelligence and models based on statistical methods.

Keywords: empirical models, mathematical, statistical, water quality

Procedia PDF Downloads 231
9641 Multi-Model Super Ensemble Based Advanced Approaches for Monsoon Rainfall Prediction

Authors: Swati Bhomia, C. M. Kishtawal, Neeru Jaiswal

Abstract:

Traditionally, monsoon forecasts have encountered many difficulties that stem from numerous issues such as the lack of adequate upper-air observations, the mesoscale nature of convection, proper resolution, radiative interactions, planetary boundary layer physics, mesoscale air-sea fluxes, representation of orography, etc. Uncertainties in any of these areas lead to large systematic errors. Global circulation models (GCMs), which are developed independently at different institutes and each of which carries a somewhat different representation of the above processes, can be combined to reduce the collective local biases in space, time, and across variables from different models. This is the basic concept behind the multi-model superensemble, which comprises a training and a forecast phase. The training phase learns from the recent past performance of the models and is used to determine statistical weights from a least squares minimization via a simple multiple regression. These weights are then used in the forecast phase. The superensemble forecasts carry the highest skill compared to the simple ensemble mean, the bias-corrected ensemble mean and the best of the participating member models. This approach is a powerful post-processing method for the estimation of weather forecast parameters, reducing the direct model output errors. Although it can be applied successfully to continuous parameters such as temperature, humidity, wind speed and mean sea level pressure, in this paper the approach is applied to rainfall, a parameter quite difficult to handle with standard post-processing methods due to its high temporal and spatial variability. The present study aims at the development of advanced superensemble schemes comprising 1-5 day daily precipitation forecasts from five state-of-the-art global circulation models (GCMs), i.e., the European Centre for Medium Range Weather Forecasts (Europe), National Center for Environmental Prediction (USA), China Meteorological Administration (China), Canadian Meteorological Centre (Canada) and U.K. Meteorological Office (U.K.), obtained from the THORPEX Interactive Grand Global Ensemble (TIGGE), which is one of the most complete data sets available. The novel approaches include a dynamical model selection approach in which the superior models are selected from the participating member models at each grid point and for each forecast step in the training period. A multi-model superensemble based on training using similar conditions is also discussed, which rests on the assumption that training with similar types of conditions may provide better forecasts than the sequential training used in conventional multi-model ensemble (MME) approaches. Further, a variety of methods available in the literature that incorporate a 'neighborhood' around each grid point to allow for spatial error or uncertainty have also been combined with the above-mentioned approaches. The comparison of these schemes with the observations verifies that the newly developed approaches provide a more unified and skillful prediction of summer monsoon (June to September) rainfall compared to the conventional multi-model approach and the member models.
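
The training step of the conventional superensemble, estimating per-model weights by least squares regression of observed rainfall on the member-model forecasts over a training period, can be sketched as follows for a single grid point; the forecasts and observations are synthetic, and the real scheme applies this at every grid point and forecast step.

```python
# Sketch: least-squares superensemble weights at one grid point from a training period.
import numpy as np

rng = np.random.default_rng(10)
n_days, n_models = 120, 5                     # training days, participating GCMs
truth = rng.gamma(2.0, 4.0, n_days)           # observed daily rainfall (placeholder)
# Member forecasts = truth plus model-specific bias and noise
biases = np.array([1.5, -2.0, 0.5, 3.0, -1.0])
forecasts = truth[:, None] + biases + rng.normal(0, 2.0, (n_days, n_models))

# Superensemble: S = mean(O) + sum_i a_i * (F_i - mean(F_i)), weights a_i from least squares
F_anom = forecasts - forecasts.mean(axis=0)
O_anom = truth - truth.mean()
weights, *_ = np.linalg.lstsq(F_anom, O_anom, rcond=None)

def superensemble(new_forecasts):
    """Combine member forecasts for a new day using the trained weights."""
    return truth.mean() + (new_forecasts - forecasts.mean(axis=0)) @ weights

print("weights:", np.round(weights, 3))
print("example combined forecast:", round(float(superensemble(forecasts[-1])), 2))
```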

Keywords: multi-model superensemble, dynamical model selection, similarity criteria, neighborhood technique, rainfall prediction

Procedia PDF Downloads 111
9640 Neural Network Based Path Loss Prediction for Global System for Mobile Communication in an Urban Environment

Authors: Danladi Ali

Abstract:

In this paper, we measured GSM signal strength in the city of Dnepropetrovsk in order to predict path loss in the study area using a nonlinear autoregressive neural network, and we also used neural network clustering to determine the average GSM signal strength received in the study area. The nonlinear autoregressive neural network predicted that the GSM signal is attenuated with a mean square error (MSE) of 2.6748 dB; this attenuation value is used to modify the COST 231 Hata and Okumura-Hata models. The neural network clustering revealed that signal strengths of -75 dB to -95 dB are received most frequently, which means that the signal strength received in the study area is mostly weak.

Keywords: one-dimensional multilevel wavelets, path loss, GSM signal strength, propagation, urban environment and model

Procedia PDF Downloads 353
9639 Enhancing a Recidivism Prediction Tool with Machine Learning: Effectiveness and Algorithmic Fairness

Authors: Marzieh Karimihaghighi, Carlos Castillo

Abstract:

This work studies how Machine Learning (ML) may be used to increase the effectiveness of a criminal recidivism risk assessment tool, RisCanvi. The two key dimensions of this analysis are predictive accuracy and algorithmic fairness. The ML-based prediction models obtained in this study are more accurate at predicting criminal recidivism than the manually created formula used in RisCanvi, achieving AUCs of 0.76 and 0.73 in predicting violent and general recidivism, respectively. However, the improvements are small, and it is observed that algorithmic discrimination can easily be introduced between groups such as nationals vs foreigners, or young vs old. It is described how effectiveness and algorithmic fairness objectives can be balanced by applying a method in which a single error disparity, in terms of generalized false positive rate, is minimized while calibration is maintained across groups. The results show that this bias mitigation procedure can substantially reduce generalized false positive rate disparities across multiple groups. Based on these results, it is proposed that ML-based criminal recidivism risk prediction should not be introduced without applying algorithmic bias mitigation procedures.
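
The fairness quantity used here, the disparity in generalized false positive rate (the mean predicted risk score among individuals who did not recidivate) across groups, can be computed as in the short sketch below; the scores and group labels are synthetic, and the thresholding and calibration machinery of the paper is omitted.

```python
# Sketch: generalized false positive rate (mean score among true negatives) per group.
import numpy as np

rng = np.random.default_rng(11)
n = 1000
scores = rng.uniform(0, 1, n)                      # predicted recidivism risk scores
y = rng.integers(0, 2, n)                          # 1 = recidivated, 0 = did not
group = rng.choice(["national", "foreigner"], n)   # protected attribute (placeholder)

def generalized_fpr(scores, y, mask):
    """Mean predicted score over individuals who did not recidivate, within a group."""
    negatives = (y == 0) & mask
    return scores[negatives].mean()

gfpr = {g: generalized_fpr(scores, y, group == g) for g in ["national", "foreigner"]}
disparity = abs(gfpr["national"] - gfpr["foreigner"])
print(gfpr, "disparity:", round(disparity, 4))
```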

Keywords: algorithmic fairness, criminal risk assessment, equalized odds, recidivism

Procedia PDF Downloads 124
9638 An Adaptive Hybrid Surrogate-Assisted Particle Swarm Optimization Algorithm for Expensive Structural Optimization

Authors: Xiongxiong You, Zhanwen Niu

Abstract:

Choosing an appropriate surrogate model plays an important role in surrogate-assisted evolutionary algorithms (SAEAs), since there are many types of surrogate models with different kernel functions. In this paper, an adaptive method for selecting the most suitable surrogate model is proposed to solve different kinds of expensive optimization problems. First, according to the prediction residual error sum of squares (PRESS) and different model selection strategies, the best individual surrogate models are integrated into multiple ensemble models in each generation. Then, based on the minimum root mean square error (RMSE), the most suitable surrogate model is selected dynamically. Second, two methods with a dynamic number of models and different selection strategies are designed to show the influence of the number of individual models and of the selection strategy. Finally, comparative studies are carried out on several commonly used benchmark problems, as well as on a rotor system optimization problem. The results demonstrate the accuracy and robustness of the proposed method.

Keywords: adaptive selection, expensive optimization, rotor system, surrogates assisted evolutionary algorithms

Procedia PDF Downloads 115
9637 Importance of Solubility and Bubble Pressure Models to Predict Pressure of Nitrified Oil Based Drilling Fluid in Dual Gradient Drilling

Authors: Sajjad Negahban, Ruihe Wang, Baojiang Sun

Abstract:

Gas-lift dual gradient drilling is a solution for deepwater drilling challenges. In addition, the continuous development of drilling technology has led to the increased use of mineral oil-based and synthetic-based drilling fluids, which have desirable characteristics such as a high rate of penetration, lubricity, shale inhibition and low toxicity. This paper discusses the utilization of nitrified mineral oil-based drilling fluid for deepwater drilling; for more accurate prediction of pressure in the marine riser during dual gradient drilling (DGD), solubility and bubble pressure were considered in a steady-state hydraulic model. The Standing bubble pressure and solubility correlations, together with two models obtained from experimental determination, were applied in the hydraulic model. The effect of the black oil correlations and of the new solubility and bubble pressure models on PVT parameters such as oil formation volume factor, density, viscosity and volumetric flow rate was evaluated. Finally, the resulting simulated pressure profiles from these models are presented.

Keywords: solubility, bubble pressure, gas-lift dual gradient drilling, steady state hydraulic model

Procedia PDF Downloads 242