Search results for: prediction models
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 7915


7495 A Hybrid Model of Structural Equation Modelling-Artificial Neural Networks: Prediction of Influential Factors on Eating Behaviors

Authors: Maryam Kheirollahpour, Mahmoud Danaee, Amir Faisal Merican, Asma Ahmad Shariff

Abstract:

Background: The presence of nonlinearity among the risk factors of eating behavior introduces bias into prediction models, and accurate estimation of these risk factors is important for the primary prevention of obesity. Objective: The aim of this study was to explore the potential of a hybrid model of structural equation modeling (SEM) and artificial neural networks (ANN) to predict eating behaviors. Methods: Partial Least Squares SEM (PLS-SEM) and a hybrid SEM-ANN model were applied to evaluate the factors affecting eating behavior patterns among 340 university students. The PLS-SEM analysis examined the effect of the emotional eating scale (EES), body shape concern (BSC), and body appreciation scale (BAS) on different categories of eating behavior patterns (EBP). The hybrid model was then built as a multilayer perceptron (MLP) with a feedforward topology, trained with the Levenberg-Marquardt supervised learning algorithm; a tangent sigmoid transfer function was used for the input layer and a linear function for the output layer. The coefficient of determination (R²) and mean square error (MSE) were calculated. Results: The hybrid model proved superior to PLS-SEM: the optimal network topology was MLP 3-17-8, R² increased by 27%, and MSE decreased by 9.6%. The analysis also identified which of these factors significantly affected healthy and unhealthy eating behavior patterns, with p-values below 0.01 for most paths. Conclusion/Importance: The hybrid approach is therefore a useful methodological contribution from a statistical standpoint, and it can be implemented in software to produce predictions with higher accuracy.
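
The second stage of such a hybrid pipeline can be sketched as below: latent-variable scores exported from a PLS-SEM tool become the inputs of an MLP regressor. This is a minimal illustration rather than the authors' code; scikit-learn's MLPRegressor does not offer Levenberg-Marquardt training, so the LBFGS solver stands in for it, and the data, column meanings, and layer sizes are hypothetical.

```python
# Illustrative sketch (not the authors' code): the ANN stage of a SEM-ANN hybrid.
# Latent-variable scores from PLS-SEM (e.g. EES, BSC, BAS) are assumed to be the
# columns of X; y holds an eating-behavior-pattern score. LBFGS stands in for
# Levenberg-Marquardt, which scikit-learn does not implement.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor
from sklearn.metrics import r2_score, mean_squared_error

rng = np.random.default_rng(0)
X = rng.normal(size=(340, 3))                       # hypothetical EES, BSC, BAS scores
y = X @ np.array([0.5, -0.3, 0.2]) + 0.1 * rng.normal(size=340)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
mlp = MLPRegressor(hidden_layer_sizes=(17,), activation="tanh",
                   solver="lbfgs", max_iter=2000, random_state=0)
mlp.fit(X_tr, y_tr)
pred = mlp.predict(X_te)
print("R2:", r2_score(y_te, pred), "MSE:", mean_squared_error(y_te, pred))
```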

Keywords: hybrid model, structural equation modeling, artificial neural networks, eating behavior patterns

Procedia PDF Downloads 121
7494 An Overview of Bioinformatics Methods to Detect Novel Riboswitches Highlighting the Importance of Structure Consideration

Authors: Danny Barash

Abstract:

Riboswitches are RNA genetic control elements that were originally discovered in bacteria and provide a unique mechanism of gene regulation. They work without the participation of proteins and are believed to represent ancient regulatory systems on the evolutionary timescale. One of the biggest challenges in riboswitch research is that many are found in prokaryotes, but only a small percentage of known riboswitches have been found in certain eukaryotic organisms. The few examples of eukaryotic riboswitches were identified using sequence-based bioinformatics search methods that include some slight structural considerations. These pattern-matching methods were the first to be applied for riboswitch detection, and they can be programmed very efficiently using a data structure called affix arrays, making them suitable for genome-wide searches of riboswitch patterns. However, they are limited in their ability to detect harder-to-find riboswitches that deviate from the known patterns. Several methods have been developed since then to tackle this problem. The one most commonly used by practitioners is Infernal, which relies on Hidden Markov Models (HMMs) and Covariance Models (CMs). Profile Hidden Markov Models are also used in the pHMM Riboswitch Scanner web application, independently of Infernal. Other computational approaches that have been developed include RMDetect, which uses 3D structural modules, and RNAbor, which utilizes the Boltzmann probability of structural neighbors. We have tried to incorporate more sophisticated secondary structure considerations based on RNA folding prediction using several strategies. The first idea was to utilize window-based methods in conjunction with folding predictions by energy minimization. The moving-window approach is heavily geared towards secondary structure consideration, with the sequence treated as a constraint. However, the method cannot be used genome-wide because each folding prediction by energy minimization in the moving window is computationally expensive, so only the vicinity of genes of interest can be scanned. The second idea was to remedy the inefficiency of the previous approach by constructing a pipeline that consists of inverse RNA folding, which considers RNA secondary structure, followed by a BLAST search that is sequence-based and highly efficient. This approach, which relies on inverse RNA folding in general and our own in-house fragment-based inverse RNA folding program RNAfbinv in particular, is capable of finding attractive candidates that are missed by Infernal and other standard methods used for riboswitch detection. We demonstrate attractive candidates found both by the moving-window approach and by the inverse RNA folding approach performed together with BLAST. We conclude that structure-based methods like the two strategies outlined above hold considerable promise for detecting riboswitches and other conserved RNAs of functional importance in a variety of organisms.
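
The moving-window strategy described above can be sketched with a free-energy-minimization scan: each window near a gene of interest is folded and windows with unusually low minimum free energy are flagged as candidate structured regions. This is a hedged illustration, not the authors' pipeline; it assumes the ViennaRNA Python bindings are installed, and the sequence, window size, and threshold are placeholders.

```python
# Illustrative sketch (not the authors' pipeline): a moving-window MFE scan using
# the ViennaRNA Python bindings. Sequence, window size, and threshold are
# hypothetical placeholders.
import RNA

def mfe_scan(sequence, window=150, step=10, mfe_threshold=-40.0):
    """Yield (start, MFE, structure) for windows below an MFE threshold."""
    for start in range(0, len(sequence) - window + 1, step):
        sub = sequence[start:start + window]
        structure, mfe = RNA.fold(sub)      # secondary structure + MFE (kcal/mol)
        if mfe <= mfe_threshold:
            yield start, mfe, structure

seq = "GGGAAACGCU" * 30                      # placeholder sequence near a gene of interest
for start, mfe, structure in mfe_scan(seq):
    print(start, round(mfe, 1), structure[:40])
```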

Keywords: riboswitches, RNA folding prediction, RNA structure, structure-based methods

Procedia PDF Downloads 210
7493 Detecting Earnings Management via Statistical and Neural Networks Techniques

Authors: Mohammad Namazi, Mohammad Sadeghzadeh Maharluie

Abstract:

Predicting earnings management is vital for capital market participants, financial analysts, and managers. The aim of this research is to answer the following question: is there a significant difference between a regression model and neural network models in predicting earnings management, and which leads to a superior prediction? To approach this question, a Linear Regression (LR) model was compared with two neural networks, a Multi-Layer Perceptron (MLP) and a Generalized Regression Neural Network (GRNN). The population of this study includes 94 companies listed on the Tehran Stock Exchange (TSE) from 2003 to 2011. After the results of all models were acquired, ANOVA was applied to test the hypotheses. In general, the statistical results showed that the precision of the GRNN did not differ significantly from that of the MLP. In addition, the mean square errors of the MLP and GRNN differed significantly from that of the multivariable LR model. These findings support the notion of nonlinear behavior in earnings management. It is therefore more appropriate for capital market participants to analyze earnings management using neural network techniques rather than linear regression models.
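
A comparison of this kind can be sketched by fitting a linear regression and an MLP to the same predictors of an earnings-management proxy and comparing their cross-validated mean square errors. This is a hedged illustration only: scikit-learn stands in for the tools actually used, GRNN has no scikit-learn implementation and is omitted, and the features and data are synthetic placeholders.

```python
# Illustrative sketch (not the authors' code): cross-validated MSE of a linear
# regression vs. an MLP on the same predictors of an earnings-management proxy
# (e.g. discretionary accruals). Features and data are synthetic placeholders.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(94 * 9, 4))              # firm-year observations, 4 predictors
y = np.tanh(X[:, 0]) + 0.5 * X[:, 1] ** 2 + 0.1 * rng.normal(size=len(X))

for name, model in [("LR", LinearRegression()),
                    ("MLP", MLPRegressor(hidden_layer_sizes=(10,), max_iter=5000,
                                         random_state=0))]:
    mse = -cross_val_score(model, X, y, cv=5,
                           scoring="neg_mean_squared_error").mean()
    print(name, "CV MSE:", round(mse, 4))
```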

Keywords: earnings management, generalized linear regression, neural networks multi-layer perceptron, Tehran stock exchange

Procedia PDF Downloads 399
7492 Pre-Operative Tool for Facial-Post-Surgical Estimation and Detection

Authors: Ayat E. Ali, Christeen R. Aziz, Merna A. Helmy, Mohammed M. Malek, Sherif H. El-Gohary

Abstract:

Goal: The purpose of this project was to predict the outcome of plastic surgery from patients' pre-operative images and to display this prediction on a screen so that the current appearance can be compared with the expected appearance after surgery. Methods: To this aim, we implemented software that uses data collected from the internet on facial skin diseases, skin burns, and pre- and post-operative plastic surgery images; the post-surgical prediction is made using K-nearest neighbor (KNN). We also designed and fabricated a smart mirror divided into two parts, a screen and a reflective mirror, so that the patient's pre- and post-operative appearance can be shown at the same time. Results: We worked on skin conditions such as vitiligo, skin burns, and wrinkles. We classified the three degrees of burns using a KNN classifier with 60% accuracy and also succeeded in segmenting the vitiligo-affected area. Future work will include covering more skin diseases, classifying them, and predicting the appearance after surgery, as well as going deeper into facial deformities and plastic surgeries such as nose reshaping and face slimming. Conclusion: The project aims to provide predictions that correspond closely to the real post-surgical appearance and to reduce diagnostic disagreement among doctors. Significance: The mirror may have broad societal appeal, as it narrows the gap between patient satisfaction and medical standards.
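
The burn-degree classification step can be sketched with a standard KNN classifier: simple per-patch image features are mapped to one of three burn degrees. This is a hedged sketch, not the project's code; the feature choice, the number of neighbors, and the synthetic data are assumptions.

```python
# Illustrative sketch (not the project's code): a KNN classifier mapping simple
# per-patch features to one of three burn degrees. Features, k, and the synthetic
# data are hypothetical placeholders.
import numpy as np
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)
n = 300
# hypothetical features: mean R/G/B and a texture-contrast measure of a skin patch
X = rng.normal(size=(n, 4))
y = rng.integers(1, 4, size=n)                  # burn degree: 1, 2 or 3

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
knn = KNeighborsClassifier(n_neighbors=5)
knn.fit(X_tr, y_tr)
print("accuracy:", accuracy_score(y_te, knn.predict(X_te)))
```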

Keywords: k-nearest neighbor (knn), face detection, vitiligo, bone deformity

Procedia PDF Downloads 134
7491 Artificial Neural Networks and Geographic Information Systems for Coastal Erosion Prediction

Authors: Angeliki Peponi, Paulo Morgado, Jorge Trindade

Abstract:

Artificial Neural Networks (ANNs) and Geographic Information Systems (GIS) are applied as a robust tool for modeling and forecasting erosion changes in Costa Caparica, Lisbon, Portugal, for 2021. ANNs present noteworthy advantages compared with other methods used for prediction and decision making in urban coastal areas. A multilayer perceptron ANN was used. Sensitivity analysis was conducted on the natural and social forces and dynamic relations in the dune-beach system of the study area. The network's parameters were varied in order to select the optimum topology. The developed methodology appears well fitted to reality; however, further steps would make it better suited.

Keywords: artificial neural networks, backpropagation, coastal urban zones, erosion prediction

Procedia PDF Downloads 357
7490 Deep Learning Approach for Colorectal Cancer’s Automatic Tumor Grading on Whole Slide Images

Authors: Shenlun Chen, Leonard Wee

Abstract:

Tumor grading is an essential reference for colorectal cancer (CRC) staging and survival prognostication. The widely used World Health Organization (WHO) grading system defines the histological grade of CRC adenocarcinoma based on the density of glandular formation on whole slide images (WSI). Tumors are classified as well-, moderately-, poorly- or un-differentiated depending on the percentage of the tumor that is gland-forming: >95%, 50-95%, 5-50% and <5%, respectively. However, manually grading WSIs is a time-consuming process and can cause observer error due to subjective judgment and unnoticed regions. Furthermore, pathologists' grading is usually coarse, while a finer and continuous differentiation grade may help to stratify CRC patients better. In this study, a deep learning based automatic differentiation grading algorithm was developed and evaluated by survival analysis. First, a gland segmentation model was developed for segmenting gland structures. Gland regions of WSIs were delineated and used for differentiation annotation. Tumor regions were annotated by experienced pathologists as high-, medium-, low-differentiation or normal tissue, corresponding to tumor with clear, unclear, or no gland structure and non-tumor, respectively. A differentiation prediction model was then developed on these human annotations. Finally, all enrolled WSIs were processed by the gland segmentation model and the differentiation prediction model. The differentiation grade can be calculated from the deep learning models' predictions of tumor regions and tumor differentiation status according to the WHO definitions. If a patient had multiple WSIs, the highest differentiation grade was chosen. Additionally, the differentiation grade was normalized to a scale between 0 and 1. The Cancer Genome Atlas colon adenocarcinoma cohort (TCGA-COAD) was enrolled in this study. For the gland segmentation model, the area under the receiver operating characteristic curve (ROC) reached 0.981 and accuracy reached 0.932 in the validation set. For the differentiation prediction model, ROC reached 0.983, 0.963, 0.963, and 0.981 and accuracy reached 0.880, 0.923, 0.668, and 0.881 for the low-, medium-, high-differentiation and normal tissue groups in the validation set. Four hundred and one patients were selected after removing WSIs without gland regions and patients without follow-up data. The concordance index reached 0.609. An optimized cut-off point of 51% was found by the "maxstat" method, which was almost the same as the WHO system's cut-off point of 50%. Both the WHO system's cut-off point and the optimized cut-off point performed impressively in Kaplan-Meier curves, and both log-rank test p-values were below 0.005. In this study, the gland structure of WSIs and the differentiation status of tumor regions were shown to be predictable through deep learning methods. A finer and continuous differentiation grade can also be calculated automatically from the above models. The differentiation grade was shown to stratify CRC patients well in survival analysis, and its optimized cut-off point was almost the same as that of the WHO tumor grading system. A tool that automatically calculates the differentiation grade may show potential in therapy decision making and personalized treatment.
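
The survival evaluation described here can be sketched with standard survival-analysis tooling: split patients at a differentiation-grade cut-off, fit Kaplan-Meier curves for each group, and test the separation with a log-rank test. This is a hedged sketch using the lifelines library, not the study's code; the 0.51 cut-off, column names, and synthetic data are placeholders.

```python
# Illustrative sketch (not the study's code): Kaplan-Meier fits and a log-rank
# test for patients split at a differentiation-grade cut-off. The cut-off,
# column names, and synthetic data are hypothetical placeholders.
import numpy as np
import pandas as pd
from lifelines import KaplanMeierFitter
from lifelines.statistics import logrank_test

rng = np.random.default_rng(0)
n = 401
df = pd.DataFrame({
    "grade": rng.uniform(0, 1, n),                # normalized differentiation grade
    "time": rng.exponential(1200, n),             # follow-up time (days)
    "event": rng.integers(0, 2, n),               # 1 = death observed
})
high = df["grade"] >= 0.51

kmf = KaplanMeierFitter()
for label, mask in [("grade >= 0.51", high), ("grade < 0.51", ~high)]:
    kmf.fit(df.loc[mask, "time"], df.loc[mask, "event"], label=label)
    print(label, "median survival:", kmf.median_survival_time_)

result = logrank_test(df.loc[high, "time"], df.loc[~high, "time"],
                      event_observed_A=df.loc[high, "event"],
                      event_observed_B=df.loc[~high, "event"])
print("log-rank p-value:", result.p_value)
```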

Keywords: colorectal cancer, differentiation, survival analysis, tumor grading

Procedia PDF Downloads 112
7489 Survival Analysis Based Delivery Time Estimates for Display FAB

Authors: Paul Han, Jun-Geol Baek

Abstract:

In the flat panel display industry, the scheduler and dispatching system used to meet production target quantities and production deadlines are the major production management systems, controlling each facility's production order and the distribution of WIP (Work in Process). In the dispatching system, delivery time is a key factor determining when a lot can be supplied to a facility. In this paper, we use survival analysis methods to identify the main factors affecting delivery time and to build a forecasting model for it. Among survival analysis techniques, the Cox proportional hazards model is used to select important explanatory variables. To build the prediction model, the Accelerated Failure Time (AFT) model was used. Performance comparisons were conducted with two other models: a statistical model based on transfer history and a linear regression model using the same explanatory variables as the AFT model. In terms of the Mean Square Error (MSE) criterion, the AFT model reduced the error by 33.8% compared to the existing prediction model and by 5.3% compared to the linear regression model. This survival analysis approach is applicable to implementing a delivery time estimator in display manufacturing, and it can contribute to improving the productivity and reliability of the production management system.
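
This two-step procedure can be sketched with the lifelines library: a Cox proportional hazards fit screens the explanatory variables, and a Weibull AFT model then predicts delivery times. This is a hedged illustration, not the paper's implementation; the variable names and synthetic data are placeholders.

```python
# Illustrative sketch (not the paper's code): Cox PH for variable screening and a
# Weibull AFT model for delivery-time prediction. Column names and synthetic data
# are hypothetical placeholders.
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter, WeibullAFTFitter

rng = np.random.default_rng(0)
n = 1000
df = pd.DataFrame({
    "queue_length": rng.poisson(8, n),
    "distance": rng.uniform(1, 50, n),             # transfer distance between bays
    "priority": rng.integers(0, 2, n),
})
df["delivery_time"] = rng.weibull(1.5, n) * (5 + 0.4 * df["queue_length"]
                                             + 0.1 * df["distance"])
df["observed"] = 1                                  # all deliveries completed

cph = CoxPHFitter()
cph.fit(df, duration_col="delivery_time", event_col="observed")
print(cph.summary[["coef", "p"]])                   # screen explanatory variables

aft = WeibullAFTFitter()
aft.fit(df, duration_col="delivery_time", event_col="observed")
print(aft.predict_expectation(df.head()))           # expected delivery times
```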

Keywords: delivery time, survival analysis, Cox PH model, accelerated failure time model

Procedia PDF Downloads 506
7488 Metrology-Inspired Methods to Assess the Biases of Artificial Intelligence Systems

Authors: Belkacem Laimouche

Abstract:

With the field of artificial intelligence (AI) experiencing exponential growth, fueled by technological advancements that pave the way for increasingly innovative and promising applications, there is an escalating need to develop rigorous methods for assessing their performance in pursuit of transparency and equity. This article proposes a metrology-inspired statistical framework for evaluating bias and explainability in AI systems. Drawing from the principles of metrology, we propose a pioneering approach, using a concrete example, to evaluate the accuracy and precision of AI models, as well as to quantify the sources of measurement uncertainty that can lead to bias in their predictions. Furthermore, we explore a statistical approach for evaluating the explainability of AI systems based on their ability to provide interpretable and transparent explanations of their predictions.

Keywords: artificial intelligence, metrology, measurement uncertainty, prediction error, bias, machine learning algorithms, probabilistic models, interlaboratory comparison, data analysis, data reliability, measurement of bias impact on predictions, improvement of model accuracy and reliability

Procedia PDF Downloads 81
7487 Hybrid Inventory Model Optimization under Uncertainties: A Case Study in a Manufacturing Plant

Authors: E. Benga, T. Tengen, A. Alugongo

Abstract:

Periodic and continuous inventory models are the two classical management tools used to handle inventories, and each has advantages and disadvantages. Implementing both the continuous (r, Q) and the periodic (R, S) inventory models in most manufacturing plants comes at a high cost. Such high inventory costs are due to the fact that most manufacturing plants are not flexible enough. Since demand and lead time are two important variables of every inventory model, their effect on the flexibility of the manufacturing plant matters most. Unfortunately, these effects are not clearly understood by managers, because the decision parameters of the continuous (r, Q) and periodic (R, S) inventory models are not designed to deal effectively with uncertainties such as poor manufacturing performance, delivery performance, and supplier performance. There is, therefore, a need for a predictive, hybrid inventory model that combines, in some sense, the features of the aforementioned inventory models. A linear combination technique is used to hybridize the continuous (r, Q) and periodic (R, S) inventory models. The behavior of the hybrid inventory model is described by a differential equation and then optimized. The simulation results show that the continuous (r, Q) inventory model is more effective than the periodic (R, S) model in the short run, but this difference changes over time. Because the hybrid inventory model is more cost-effective than the continuous (r, Q) and periodic (R, S) inventory models in the long run, it should be implemented for strategic decisions.

Keywords: periodic inventory, continuous inventory, hybrid inventory, optimization, manufacturing plant

Procedia PDF Downloads 361
7486 ARIMA-GARCH, A Statistical Modeling for Epileptic Seizure Prediction

Authors: Salman Mohamadi, Seyed Mohammad Ali Tayaranian Hosseini, Hamidreza Amindavar

Abstract:

In this paper, we provide a procedure to analyze and model the EEG (electroencephalogram) signal as a time series using ARIMA-GARCH in order to predict an epileptic attack. The heteroskedasticity of the EEG signal is examined through the ARCH/GARCH (autoregressive conditional heteroskedasticity / generalized autoregressive conditional heteroskedasticity) test. The best ARIMA-GARCH model in the AIC sense is used to measure the volatility of EEG from epileptic canine subjects and to forecast future EEG values. An ARIMA-only model can perform prediction, but an ARCH or GARCH model acting on the residuals of the ARIMA attains a considerably improved forecast horizon. First, we estimate the best ARIMA model; then, different orders of ARCH and GARCH models are surveyed to determine the best heteroskedastic model of the residuals of that ARIMA. Using the simulated conditional variance of the selected ARCH or GARCH model, we propose a procedure to predict oncoming seizures. The results indicate that GARCH modeling captures the dynamic changes of variance well before the onset of a seizure. It can be inferred that the prediction capability comes from the ability of the combined ARIMA-GARCH modeling to cover the heteroskedastic nature of EEG signal changes.
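
The combined modeling step can be sketched as follows: fit an ARIMA model to a window of EEG samples, fit a GARCH(1,1) model to the ARIMA residuals, and forecast the conditional variance, whose rise can be monitored as a candidate pre-seizure indicator. This is a hedged sketch using statsmodels and the arch package, not the authors' code; the model orders and the synthetic signal are placeholders (in practice the orders are chosen by AIC, as described above).

```python
# Illustrative sketch (not the authors' code): ARIMA on an EEG segment, then
# GARCH(1,1) on the ARIMA residuals, then a conditional-variance forecast.
# Model orders and the synthetic AR(1) placeholder signal are assumptions.
import numpy as np
from statsmodels.tsa.arima.model import ARIMA
from arch import arch_model

rng = np.random.default_rng(1)
e = rng.normal(size=2000)
eeg = np.empty(2000)
eeg[0] = 0.0
for i in range(1, 2000):                         # stationary AR(1) placeholder for EEG
    eeg[i] = 0.6 * eeg[i - 1] + e[i]

arima = ARIMA(eeg, order=(2, 0, 1)).fit()        # order would be chosen by AIC
resid = arima.resid

garch = arch_model(resid, vol="GARCH", p=1, q=1).fit(disp="off")
forecast = garch.forecast(horizon=10)
print(forecast.variance.iloc[-1])                # forecast conditional variance
```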

Keywords: epileptic seizure prediction, ARIMA, ARCH and GARCH modeling, heteroskedasticity, EEG

Procedia PDF Downloads 378
7485 Physics-Based Earthquake Source Models for Seismic Engineering: Analysis and Validation for Dip-Slip Faults

Authors: Percy Galvez, Anatoly Petukhin, Paul Somerville, Ken Miyakoshi, Kojiro Irikura, Daniel Peter

Abstract:

Physics-based dynamic rupture modelling is necessary for estimating parameters such as rupture velocity and slip rate function that are important for ground motion simulation but poorly resolved by observations, e.g. by seismic source inversion. In order to generate a large number of physically self-consistent rupture models, whose rupture process is consistent with the spatio-temporal heterogeneity of past earthquakes, we use multicycle simulations under the heterogeneous rate-and-state (RS) friction law for a 45° dip-slip fault. We performed a parametrization study by fully dynamic rupture modeling, and then a set of spontaneous source models was generated over a large magnitude range (Mw > 7.0). In order to validate the rupture models, we compare the scaling relations of the modeled rupture area S, average slip Dave, and slip asperity area Sa versus seismic moment Mo with similar scaling relations from source inversions. Ground motions were also computed from our models. Their peak ground velocities (PGV) agree well with the GMPE values. We obtained good agreement of the permanent surface offset values with empirical relations. From the heterogeneous rupture models, we analyzed parameters that are critical for ground motion simulations, i.e. the distributions of slip, slip rate, rupture initiation points, rupture velocities, and source time functions. We studied cross-correlations between them and with the friction weakening distance Dc, the only initial heterogeneity parameter in our modeling. The main findings are: (1) high slip-rate areas coincide with or are located on an outer edge of the large slip areas, (2) ruptures have a tendency to initiate in small Dc areas, and (3) high slip-rate areas correlate with areas of small Dc, large rupture velocity, and short rise time.

Keywords: earthquake dynamics, strong ground motion prediction, seismic engineering, source characterization

Procedia PDF Downloads 121
7484 Effects of Global Validity of Predictive Cues upon L2 Discourse Comprehension: Evidence from Self-paced Reading

Authors: Binger Lu

Abstract:

It remains unclear whether second language (L2) speakers can use discourse context cues to predict upcoming information as native speakers do during online comprehension. Some researchers propose that L2 learners may have a reduced ability to generate predictions during discourse processing. At the same time, there is evidence that discourse-level cues are weighed more heavily in L2 processing than in L1. Previous studies showed that L1 prediction is sensitive to the global validity of predictive cues. The current study aims to explore whether, and to what extent, L2 learners can dynamically and strategically adjust their prediction according to the global validity of predictive cues in L2 discourse comprehension, as native speakers do. In a self-paced reading experiment, Chinese native speakers (N=128), C-E bilinguals (N=128), and English native speakers (N=128) read high-predictable (e.g., Jimmy felt thirsty after running. He wanted to get some water from the refrigerator.) and low-predictable (e.g., Jimmy felt sick this morning. He wanted to get some water from the refrigerator.) discourses in two-sentence frames. The global validity of predictive cues was manipulated by varying the ratio of predictable (e.g., Bill stood at the door. He opened it with the key.) and unpredictable fillers (e.g., Bill stood at the door. He opened it with the card.), such that across conditions, the predictability of the final word of the fillers ranged from 100% to 0%. The dependent variable was reading time on the critical region (the target word and the following word), analyzed with linear mixed-effects models in R. C-E bilinguals showed reliable prediction across all validity conditions (β = -35.6 ms, SE = 7.74, t = -4.601, p < .001), and Chinese native speakers showed a significant effect (β = -93.5 ms, SE = 7.82, t = -11.956, p < .001) in two of the four validity conditions (namely, the High-validity and MedLow conditions, where fillers ended with predictable words in 100% and 25% of cases, respectively), whereas English native speakers did not predict at all (β = -2.78 ms, SE = 7.60, t = -.365, p = .715). There was neither a main effect (χ²(3) = .256, p = .968) nor an interaction (Predictability × Background × Validity, χ²(3) = 1.229, p = .746; Predictability × Validity, χ²(3) = 2.520, p = .472; Background × Validity, χ²(3) = 1.281, p = .734) of Validity with speaker groups. The results suggest that prediction occurs in L2 discourse processing but to a much lesser extent in L1, with a significant effect in some conditions of L1 Chinese and a null effect in L1 English processing, consistent with the view that L2 speakers are more sensitive to discourse cues than L1 speakers. Additionally, the pattern of L1 and L2 predictive processing was not affected by the global validity of predictive cues. C-E bilinguals' predictive processing could be partly transferred from their L1, as prior research showed that discourse information plays a more significant role in L1 Chinese processing.

Keywords: bilingualism, discourse processing, global validity, prediction, self-paced reading

Procedia PDF Downloads 112
7483 The Impact of COVID-19 on Antibiotic Prescribing in Primary Care in England: Evaluation and Risk Prediction of the Appropriateness of Type and Repeat Prescribing

Authors: Xiaomin Zhong, Alexander Pate, Ya-Ting Yang, Ali Fahmi, Darren M. Ashcroft, Ben Goldacre, Brian Mackenna, Amir Mehrkar, Sebastian C. J. Bacon, Jon Massey, Louis Fisher, Peter Inglesby, Kieran Hand, Tjeerd van Staa, Victoria Palin

Abstract:

Background: This study aimed to predict risks of potentially inappropriate antibiotic type and repeat prescribing and to assess changes during COVID-19. Methods: With the approval of NHS England, we used the OpenSAFELY platform to access the TPP SystmOne electronic health record (EHR) system and selected patients prescribed antibiotics from 2019 to 2021. Multinomial logistic regression models predicted the patient's probability of receiving an inappropriate antibiotic type or repeating the antibiotic course for each common infection. Findings: The population included 9.1 million patients with 29.2 million antibiotic prescriptions. 29.1% of prescriptions were identified as repeat prescribing. Those with a same-day incident infection coded in the EHR had considerably lower rates of repeat prescribing (18.0%), and 8.6% had a potentially inappropriate type. No major changes in the rates of repeat antibiotic prescribing during COVID-19 were found. In the ten risk prediction models, good levels of calibration and moderate levels of discrimination were found. Important predictors included age, prior antibiotic prescribing, and region. Patients varied in their predicted risks. For sore throat, the range from the 2.5th to the 97.5th percentile was 2.7 to 23.5% (inappropriate type) and 6.0 to 27.2% (repeat prescription). For otitis externa, these numbers were 25.9 to 63.9% and 8.5 to 37.1%, respectively. Interpretation: Our study found no evidence of changes in the level of inappropriate or repeat antibiotic prescribing after the start of COVID-19. Repeat antibiotic prescribing was frequent and varied according to regional and patient characteristics. There is a need for treatment guidelines to be developed around antibiotic failure and for clinicians to be provided with individualised patient information.
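
The modelling step can be sketched with a multinomial logistic regression that predicts, for one infection type, whether a prescription is appropriate, of an inappropriate type, or a repeat course. This is a hedged illustration, not the OpenSAFELY study code; the feature names, label coding, and synthetic data are placeholders.

```python
# Illustrative sketch (not the study's code): multinomial logistic regression for
# per-patient risks of inappropriate-type and repeat prescribing, for a single
# infection type. Features, labels, and data are hypothetical placeholders.
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(2)
n = 5000
df = pd.DataFrame({
    "age": rng.integers(0, 95, n),
    "prior_abx_12m": rng.poisson(1.5, n),        # prior antibiotic courses
    "region": rng.integers(0, 9, n),             # coded region
})
# 0 = appropriate, 1 = inappropriate type, 2 = repeat prescription (placeholder labels)
y = rng.choice([0, 1, 2], size=n, p=[0.62, 0.09, 0.29])

X = pd.get_dummies(df, columns=["region"])
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)

model = LogisticRegression(multi_class="multinomial", max_iter=1000)
model.fit(X_tr, y_tr)
print(model.predict_proba(X_te[:3]))             # per-patient predicted risks
```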

Keywords: antibiotics, infection, COVID-19 pandemic, antibiotic stewardship, primary care

Procedia PDF Downloads 79
7482 Legal Judgment Prediction through Indictments via Data Visualization in Chinese

Authors: Kuo-Chun Chien, Chia-Hui Chang, Ren-Der Sun

Abstract:

Legal Judgment Prediction (LJP) is a subtask of legal AI. Its main purpose is to use the facts of a case to predict the judgment result. In Taiwan's criminal procedure, when prosecutors complete the investigation of a case, they decide whether to prosecute the suspect and which article of criminal law should be applied based on the facts and evidence of the case. In this study, we collected 305,240 indictments from the public inquiry system of the procuratorate of the Ministry of Justice, covering 169 charges and 317 articles from 21 laws. We take the crime facts in the indictments as the main input and jointly learn a prediction model for law source, article, and charge based on the pre-trained BERT model. For single-article cases where the frequencies of the charge and the article are greater than 50, the prediction performance for law sources, articles, and charges reaches 97.66, 92.22, and 60.52 macro-F1, respectively. To understand the large performance gap between articles and charges, we used a bipartite graph to visualize the relationship between articles and charges and found that the poor prediction performance was actually due to wording precision. Some charges use the simplest words, while others may include the perpetrator or the result to make the charge more specific. For example, Article 284 of the Criminal Law may be indicted as "negligent injury", "negligent death", "business injury", "driving business injury", or "non-driving business injury". As another example, Article 10 of the Drug Hazard Control Regulations can be charged as "Drug Control Regulations" or "Drug Hazard Control Regulations". In order to solve the above problems and predict the article and charge more accurately, we plan to include the article content or charge names in the input and use the sentence-pair classification method for question-answer problems in the BERT model to improve performance. We will also consider a sequence-to-sequence approach to charge prediction.
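
The planned sentence-pair formulation can be sketched as follows: a pre-trained Chinese BERT scores whether a candidate charge name matches the crime facts of an indictment. This is a hedged, generic sketch, not the authors' system; the model name, label meaning, and example texts are assumptions, and a real system would first fine-tune the classifier head on the indictment corpus.

```python
# Illustrative sketch (not the authors' system): BERT sentence-pair scoring of a
# (crime facts, candidate charge) pair. Model name, labels, and texts are
# hypothetical placeholders; the classifier head is untrained here.
import torch
from transformers import BertTokenizer, BertForSequenceClassification

tokenizer = BertTokenizer.from_pretrained("bert-base-chinese")
model = BertForSequenceClassification.from_pretrained("bert-base-chinese",
                                                      num_labels=2)

facts = "被告於深夜駕車因過失撞傷行人……"          # placeholder crime facts
candidate_charge = "過失傷害"                        # placeholder charge name

inputs = tokenizer(facts, candidate_charge, return_tensors="pt",
                   truncation=True, padding=True)
with torch.no_grad():
    logits = model(**inputs).logits
print(torch.softmax(logits, dim=-1))                 # P(no match), P(match)
```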

Keywords: legal judgment prediction, deep learning, natural language processing, BERT, data visualization

Procedia PDF Downloads 99
7481 Estimation of the Drought Index Based on the Climatic Projections of Precipitation of the Uruguay River Basin

Authors: José Leandro Melgar Néris, Claudinéia Brazil, Luciane Teresa Salvi, Isabel Cristina Damin

Abstract:

The impact of climate change is not recent; one of the main concerns in the hydrological cycle is the sequence and shortage of precipitation that characterizes a drought, which has a significant impact on the socioeconomic, agricultural, and environmental spheres. This study aims to characterize and quantify, based on climatic projections of precipitation, the rainy and dry events in the region of the Uruguay River Basin through the Standardized Precipitation Index (SPI). The database consists of the projections that are part of the Coupled Model Intercomparison Project, Phase 5 (CMIP5), which provides climate projection models organized according to the Representative Concentration Pathways (RCPs). Compared with the climatological normals of the Uruguay River Basin, the precipitation projections show an increase in seasonal precipitation for all proposed scenarios, with a weak climatic trend. The results of this research can be used to support further studies, and the responsible bodies can use them as a basis for mitigation measures in other hydrographic basins.
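
The SPI itself is computed by fitting a gamma distribution to accumulated precipitation and mapping its cumulative probability to standard normal quantiles, so that dry events appear as negative values and rainy events as positive ones. The sketch below is a hedged, generic illustration (not the authors' code); the accumulation scale, the zero-precipitation handling, and the synthetic series are assumptions.

```python
# Illustrative sketch (not the authors' code): Standardized Precipitation Index
# for a monthly series: gamma fit to accumulated precipitation, mixed with the
# probability of zero months, then mapped to standard normal quantiles.
# The synthetic series and the 3-month scale are placeholders.
import numpy as np
from scipy import stats

def spi(precip, scale=3):
    """SPI at a given accumulation scale (in months)."""
    p = np.convolve(precip, np.ones(scale), mode="valid")    # rolling accumulation
    nonzero = p[p > 0]
    shape, loc, scl = stats.gamma.fit(nonzero, floc=0)
    q = np.mean(p == 0)                                       # probability of zero
    cdf = q + (1 - q) * stats.gamma.cdf(p, shape, loc=loc, scale=scl)
    cdf = np.clip(cdf, 1e-6, 1 - 1e-6)
    return stats.norm.ppf(cdf)                                # SPI values

rng = np.random.default_rng(3)
monthly_precip = rng.gamma(shape=2.0, scale=60.0, size=360)   # 30 years, mm/month
print(spi(monthly_precip, scale=3)[:12].round(2))
```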

Keywords: climate change, climatic model, dry events, precipitation projections

Procedia PDF Downloads 118
7480 Prediction of Marijuana Use among Iranian Early Youth: an Application of Integrative Model of Behavioral Prediction

Authors: Mehdi Mirzaei Alavijeh, Farzad Jalilian

Abstract:

Background: Marijuana is the most widely used illicit drug worldwide, especially among adolescents and young adults, and its use can cause numerous complications. The aim of this study was to determine the pattern of use, motivations, and factors related to marijuana use among Iranian youths based on the integrative model of behavioral prediction. Methods: A cross-sectional study was conducted among 174 young marijuana users in Kermanshah County and Isfahan County during summer 2014, selected by convenience sampling for participation in this study. A self-report questionnaire was used to collect data. Data were analyzed in SPSS version 21 using bivariate correlations and linear regression. Results: Respondents used marijuana a mean of 4.60 times per week [95% CI: 4.06, 5.15]. Linear regression showed that the constructs of the integrative model of behavioral prediction accounted for 36% of the variation in weekly marijuana use (R² = 36%, p < 0.001); among them, attitude, marijuana refusal self-efficacy, and subjective norms were the stronger predictors. Conclusion: Comprehensive health education and prevention programs need to emphasize the cognitive factors that predict youths' health-related behaviors. Based on our findings, designing educational and behavioral interventions that reduce positive beliefs about marijuana, promote marijuana refusal self-efficacy, and weaken subjective norms encouraging marijuana use has the potential to protect youths from marijuana use.

Keywords: marijuana, youth, integrative model of behavioral prediction, Iran

Procedia PDF Downloads 533
7479 Additive Weibull Model Using Warranty Claim and Finite Element Analysis Fatigue Analysis

Authors: Kanchan Mondal, Dasharath Koulage, Dattatray Manerikar, Asmita Ghate

Abstract:

This paper presents an additive reliability model using warranty data and Finite Element Analysis (FEA) data. Warranty data for any product give insight into its underlying issues and are often used by reliability engineers to build prediction models that forecast the failure rate of parts. But there is one major limitation in using warranty data for prediction: warranty periods constitute only a small fraction of the total lifetime of a product and most of the time cover only the infant-mortality and useful-life zones of the bathtub curve. Predicting with warranty data alone in these cases does not generally provide results with the desired accuracy. The failure rate of a mechanical part is driven by random issues initially and by wear-out or usage-related issues at later stages of the lifetime. For better predictability of the failure rate, one needs to explore the failure rate behavior in the wear-out zone of the bathtub curve. Due to cost and time constraints, it is not always possible to test samples until failure, but FEA fatigue analysis can provide the failure rate behavior of a part well beyond the warranty period, more quickly and at a lower cost. In this work, the authors propose an Additive Weibull Model, which makes use of both warranty and FEA fatigue analysis data for predicting failure rates. It involves modeling two data sets for a part, one with existing warranty claims and the other with fatigue life data. Hazard-rate-based Weibull estimation is used for modeling the warranty data, whereas S-N-curve-based Weibull parameter estimation is used for the FEA data. The parameters of the two separate Weibull models are estimated and combined to form the proposed Additive Weibull Model for prediction.
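
The combination step can be sketched as an additive Weibull model in which the total hazard is the sum of two Weibull hazards, one parameterized from warranty claims (early life) and one from FEA fatigue life (wear-out), giving a combined reliability function. This is a hedged sketch, not the authors' model; the parameter values are hypothetical placeholders.

```python
# Illustrative sketch (not the authors' model): additive Weibull reliability,
# summing a warranty-derived hazard and an FEA-fatigue-derived hazard.
# Parameter values are hypothetical placeholders.
import numpy as np

def weibull_hazard(t, shape, scale):
    return (shape / scale) * (t / scale) ** (shape - 1)

def additive_weibull_reliability(t, p_warranty, p_fatigue):
    """R(t) = exp(-[(t/s1)^b1 + (t/s2)^b2]) for the combined model."""
    b1, s1 = p_warranty
    b2, s2 = p_fatigue
    return np.exp(-((t / s1) ** b1 + (t / s2) ** b2))

t = np.linspace(1, 20000, 200)                      # operating hours
p_warranty = (0.8, 50000.0)   # shape < 1: early-life failures from claims data
p_fatigue = (3.5, 15000.0)    # shape > 1: wear-out failures from S-N / FEA data

h_total = weibull_hazard(t, *p_warranty) + weibull_hazard(t, *p_fatigue)
R_total = additive_weibull_reliability(t, p_warranty, p_fatigue)
print("hazard at start:", h_total[0], " reliability at 20000 h:", R_total[-1])
```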

Keywords: bathtub curve, fatigue, FEA, reliability, warranty, Weibull

Procedia PDF Downloads 43
7478 Dividend Policy, Overconfidence and Moral Hazard

Authors: Richard Fairchild, Abdullah Al-Ghazali, Yilmaz Guney

Abstract:

This study analyses the relationship between managerial overconfidence, dividends, and firm value by developing theoretical models that examine the conditions under which the effects of managerial overconfidence on dividends and firm value may be positive or negative. Furthermore, the models incorporate moral hazard, in terms of managerial effort shirking, and the potential for the manager to choose negative-NPV projects due to private benefits. Our models demonstrate that overconfidence can lead to higher dividends (when the manager is overconfident about his current ability) or lower dividends (when the manager is overconfident about his future ability). The models also demonstrate that higher overconfidence may result in an increase or a decrease in firm value. Numerical examples are illustrated for both models, which interestingly support the models' propositions.

Keywords: behavioural corporate finance, dividend policy, overconfidence, moral hazard

Procedia PDF Downloads 309
7477 Efficient Layout-Aware Pretraining for Multimodal Form Understanding

Authors: Armineh Nourbakhsh, Sameena Shah, Carolyn Rose

Abstract:

Layout-aware language models have been used to create multimodal representations for documents that are in image form, achieving relatively high accuracy in document understanding tasks. However, the large number of parameters in the resulting models makes building and using them prohibitive without access to high-performing processing units with large memory capacity. We propose an alternative approach that can create efficient representations without the need for a neural visual backbone. This leads to an 80% reduction in the number of parameters compared to the smallest SOTA model, widely expanding applicability. In addition, our layout embeddings are pre-trained on spatial and visual cues alone and only fused with text embeddings in downstream tasks, which can facilitate applicability to low-resource or multilingual domains. Despite using 2.5% of the training data, we show competitive performance on two form understanding tasks: semantic labeling and link prediction.

Keywords: layout understanding, form understanding, multimodal document understanding, bias-augmented attention

Procedia PDF Downloads 123
7476 Review of Models of Consumer Behaviour and Influence of Emotions in the Decision Making

Authors: Mikel Alonso López

Abstract:

In order to begin studying the process of consumer decision making, the main decision models must be analyzed. The objective of this task is to see whether emotions are present in those models and to analyze how the authors who created them consider their impact on consumer choices. In this paper, the most important models of consumer behavior are analysed. This review is useful for establishing the necessary background knowledge on the literature. The order established for this study is chronological.

Keywords: consumer behaviour, emotions, decision making, consumer psychology

Procedia PDF Downloads 412
7475 An Approach to Low Velocity Impact Damage Modelling of Variable Stiffness Curved Composite Plates

Authors: Buddhi Arachchige, Hessam Ghasemnejad

Abstract:

The post-impact behavior of curved composite plates subjected to low-velocity impact was studied analytically and numerically. Approaches to damage modelling are proposed in which the stiffness of the damaged region is degraded by reducing the thickness of that region. Spring-mass models were used to model the impact response of the plate and impactor. Two damage models were designed, compared, and contrasted to determine which best fits the numerical results. The theoretical force-time responses were compared with the numerical results obtained through a detailed study carried out in LS-DYNA. The modified damage model gave a good prediction of the analytical force-time response for different layups and geometries. This study provides a basis for selecting the most effective layups for variable-stiffness curved composite panels able to withstand higher impact damage.
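
A spring-mass idealization of this kind can be sketched as a two-degree-of-freedom system: the impactor is coupled to the plate's effective mass through a contact spring, and the plate is restrained by its effective bending stiffness; the resulting contact force-time history is what gets compared against LS-DYNA. This is a hedged, generic sketch, not the authors' model; all parameter values are hypothetical.

```python
# Illustrative sketch (not the authors' model): two-DOF spring-mass model of
# low-velocity impact, integrated with an ODE solver. Masses, stiffnesses, and
# the impact velocity are hypothetical placeholders.
import numpy as np
from scipy.integrate import solve_ivp

m_imp, m_plate = 2.0, 0.5           # impactor and effective plate mass (kg)
k_contact, k_plate = 8.0e6, 1.5e6   # contact and plate bending stiffness (N/m)
v0 = 3.0                            # impact velocity (m/s)

def rhs(t, y):
    x_imp, v_imp, x_pl, v_pl = y
    delta = max(x_imp - x_pl, 0.0)              # contact force only in compression
    f_contact = k_contact * delta
    return [v_imp, -f_contact / m_imp,
            v_pl, (f_contact - k_plate * x_pl) / m_plate]

sol = solve_ivp(rhs, (0.0, 0.01), [0.0, v0, 0.0, 0.0], max_step=1e-6)
contact_force = k_contact * np.maximum(sol.y[0] - sol.y[2], 0.0)
print("peak contact force (N):", contact_force.max())
```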

Keywords: analytical modelling, composite damage, impact, variable stiffness

Procedia PDF Downloads 249
7474 Mobile Based Long Range Weather Prediction System for the Farmers of Rural Areas of Pakistan

Authors: Zeeshan Muzammal, Usama Latif, Fouzia Younas, Syed Muhammad Hassan, Samia Razaq

Abstract:

Unexpected rainfall has always been an issue during the lifetime of crops and brings losses to the farmers who grow them. Unfortunately, Pakistan is one of the countries in which untimely rain impacts crops badly, for example by washing out seeds and pesticides. Pakistan's GDP depends heavily on agriculture, and in rural areas farmers sometimes quit farming because of huge losses to their crops. Through our surveys and research, we learned that farmers in the rural areas of Pakistan need rain information to avoid rain damage to their crops. We developed a prototype using ICTs to inform farmers about rain one week in advance. Our proposed solution informs farmers in two ways: we send daily messages about the weekly prediction, and we designed a helpline they can call to ask about the possibility of rain.

Keywords: ICTD, farmers, mobile based, Pakistan, rural areas, weather prediction

Procedia PDF Downloads 543
7473 Bayesian Estimation of Hierarchical Models for Genotypic Differentiation of Arabidopsis thaliana

Authors: Gautier Viaud, Paul-Henry Cournède

Abstract:

Plant growth models have been used extensively for the prediction of the phenotypic performance of plants. However, they most often remain calibrated for a given genotype and therefore do not take into account genotype-by-environment interactions. One way of taking these interactions into account is to consider Bayesian hierarchical models. Three levels can be identified in such models: the first level describes how a given growth model describes the phenotype of the plant as a function of individual parameters, the second level describes how these individual parameters are distributed within a plant population, and the third level corresponds to the attribution of priors on the population parameters. Thanks to the Bayesian framework, choosing appropriate priors for the population parameters makes it possible to derive analytical expressions for the full conditional distributions of these population parameters. As plant growth models are nonlinear in nature, individual parameters cannot be sampled explicitly, and a Metropolis step must be performed; this leads to a hybrid Gibbs-Metropolis sampler. A generic approach was devised for the implementation of both general state space models and estimation algorithms within a programming platform. It was designed using the Julia language, which combines an elegant syntax, metaprogramming capabilities, and high efficiency. Results were obtained for Arabidopsis thaliana on both simulated and real data. An organ-scale Greenlab model for the latter is thus presented, in which the surface area of each individual leaf can be simulated. It is assumed that the error made in measuring leaf areas is proportional to the leaf area itself; multiplicative normal noise is therefore used for the observations. Real data were obtained via image analysis of zenithal images of Arabidopsis thaliana over a period of 21 days, using a two-step segmentation and tracking algorithm that notably takes advantage of the Arabidopsis thaliana phyllotaxy. Since the model formulation is rather flexible, there is no need for the data for a single individual to be available at all times, nor for the observation times to be the same across individuals. This makes it possible to discard data from image analysis when they are not considered reliable enough, thereby providing low-biased data on leaf areas in large quantity. The proposed model precisely reproduces the dynamics of Arabidopsis thaliana's growth while accounting for the variability between genotypes. In addition to the estimation of the population parameters, the level of variability is an interesting indicator of the genotypic stability of the model parameters. A promising perspective is to test whether some of the latter should be considered as fixed effects.

Keywords: bayesian, genotypic differentiation, hierarchical models, plant growth models

Procedia PDF Downloads 280
7472 Prediction of Concrete Hydration Behavior and Cracking Tendency Based on Electrical Resistivity Measurement, Cracking Test and ANSYS Simulation

Authors: Samaila Muazu Bawa

Abstract:

The hydration process, crack potential, and setting time of concrete grades C30, C40, and C50 were monitored using a non-contact electrical resistivity apparatus, a plastic ring mould, and the penetration resistance method, respectively. The results show that C30 had the highest resistivity at the beginning, until the acceleration point was reached, when C50 accelerated and overtook the others; this period corresponds to its final setting time range. From the resistivity derivative curve, the hydration process can be divided into dissolution, induction, acceleration, and deceleration periods. The restrained shrinkage crack and setting time tests demonstrated the earliest cracking and setting times for C50; this method therefore determines the concrete's crack potential conveniently and rapidly. The inflection time (ti) and the final setting time (tf) were obtained and used together with the cracking time to derive mathematical models for predicting the concrete's cracking age within the range considered. Finally, ANSYS numerical simulations support the experimental findings regarding the earliest cracking age of C50 and the crack location: the highest stress concentration is always beneath the artificially introduced expansion joint of C50.

Keywords: concrete hydration, electrical resistivity, restrained shrinkage crack, ANSYS simulation

Procedia PDF Downloads 219
7471 Reliability Estimation of Bridge Structures with Updated Finite Element Models

Authors: Ekin Ozer

Abstract:

Assessment of structural reliability is essential for the efficient use of civil infrastructure subjected to hazardous events. Dynamic analysis of finite element models is a commonly used tool to simulate structural behavior and estimate its performance accordingly. However, theoretical models based purely on preliminary assumptions and design drawings may deviate from the actual behavior of the structure. This study proposes up-to-date reliability estimation procedures that use actual bridge vibration data to update finite element models and then perform reliability estimation on the updated models. The proposed method utilizes vibration response measurements of bridge structures to identify modal parameters, and then uses these parameters to calibrate finite element models that are originally based on design drawings. The results not only show that reliability estimates based on updated models differ from those based on the original models, but also indicate that non-updated models may overestimate the structural capacity.

Keywords: earthquake engineering, engineering vibrations, reliability estimation, structural health monitoring

Procedia PDF Downloads 181
7470 A Computational Model of the Thermal Grill Illusion: Simulating the Perceived Pain Using Neuronal Activity in Pain-Sensitive Nerve Fibers

Authors: Subhankar Karmakar, Madhan Kumar Vasudevan, Manivannan Muniyandi

Abstract:

The thermal grill illusion (TGI) elicits a strong and often painful burning sensation when interlaced warm and cold stimuli, each individually non-painful, excite thermoreceptors beneath the skin. Among several theories of TGI, the "disinhibition" theory is the most widely accepted in the literature. According to this theory, TGI is the result of the disinhibition or unmasking of the pain-sensitive HPC (heat-pinch-cold) nerve fibers, caused by the inhibition of the cold-sensitive nerve fibers that normally mask the HPC fibers. Although researchers have focused on understanding TGI through experiments and models, none have investigated the prediction of TGI pain intensity through a computational model. Furthermore, the comparison of psychophysically perceived TGI intensity with neurophysiological models has not yet been studied. Predicting pain intensity through a computational model of TGI can help in optimizing thermal displays and in understanding pathological conditions related to temperature perception. The current study focuses on developing a computational model to predict the intensity of TGI pain and on experimentally observing the perceived TGI pain. The computational model is developed based on the disinhibition theory and by utilizing existing popular models of warm and cold receptors in the skin. The model aims to predict the neuronal activity of the HPC nerve fibers. With a temperature-controlled thermal grill setup, fifteen participants (ten males and five females) were presented with five temperature differences between the warm and cold grills (each repeated three times). All participants rated the perceived TGI pain sensation on a scale of one to ten. For the range of temperature differences, the experimentally observed perceived intensity of TGI is compared with the neuronal activity of pain-sensitive HPC nerve fibers. The simulation results show a monotonically increasing relationship between the temperature differences and the neuronal activity of the HPC nerve fibers. Moreover, a similar monotonically increasing relationship is experimentally observed between temperature differences and the perceived TGI intensity. This suggests that the TGI pain intensity observed in the experimental study can be compared with the neuronal activity predicted by the model. The proposed model intends to bridge the theoretical understanding of the TGI and the experimental results obtained through psychophysics. Further studies in pain perception are needed to develop a more accurate version of the current model.

Keywords: thermal grill Illusion, computational modelling, simulation, psychophysics, haptics

Procedia PDF Downloads 140
7469 DNpro: A Deep Learning Network Approach to Predicting Protein Stability Changes Induced by Single-Site Mutations

Authors: Xiao Zhou, Jianlin Cheng

Abstract:

A single amino acid mutation can have a significant impact on the stability of protein structure. Thus, the prediction of protein stability change induced by single site mutations is critical and useful for studying protein function and structure. Here, we presented a deep learning network with the dropout technique for predicting protein stability changes upon single amino acid substitution. While using only protein sequence as input, the overall prediction accuracy of the method on a standard benchmark is >85%, which is higher than existing sequence-based methods and is comparable to the methods that use not only protein sequence but also tertiary structure, pH value and temperature. The results demonstrate that deep learning is a promising technique for protein stability prediction. The good performance of this sequence-based method makes it a valuable tool for predicting the impact of mutations on most proteins whose experimental structures are not available. Both the downloadable software package and the user-friendly web server (DNpro) that implement the method for predicting protein stability changes induced by amino acid mutations are freely available for the community to use.

Keywords: bioinformatics, deep learning, protein stability prediction, biological data mining

Procedia PDF Downloads 430
7468 Enhance the Power of Sentiment Analysis

Authors: Yu Zhang, Pedro Desouza

Abstract:

Since big data has become substantially more accessible and manageable due to the development of powerful tools for dealing with unstructured data, people are eager to mine information from social media resources that could not be handled in the past. Sentiment analysis, as a novel branch of text mining, has in the last decade become increasingly important in marketing analysis, customer risk prediction and other fields. Scientists and researchers have undertaken significant work in creating and improving their sentiment models. In this paper, we present a concept of selecting appropriate classifiers based on the features and qualities of data sources by comparing the performances of five classifiers with three popular social media data sources: Twitter, Amazon Customer Reviews, and Movie Reviews. We introduced a couple of innovative models that outperform traditional sentiment classifiers for these data sources, and provide insights on how to further improve the predictive power of sentiment analysis. The modelling and testing work was done in R and Greenplum in-database analytic tools.

Keywords: sentiment analysis, social media, Twitter, Amazon, data mining, machine learning, text mining

Procedia PDF Downloads 327
7467 Predicting Trapezoidal Weir Discharge Coefficient Using Evolutionary Algorithm

Authors: K. Roushanger, A. Soleymanzadeh

Abstract:

Weirs are structures often used in irrigation techniques, sewer networks, and flood protection. However, the hydraulic behavior of this type of weir is complex and difficult to predict accurately. An accurate flow prediction over a weir mainly depends on a proper estimation of the discharge coefficient. In this study, the Genetic Expression Programming (GEP) approach was used for predicting the discharge coefficient of trapezoidal and rectangular sharp-crested side weirs. Three different performance indexes were used as comparison criteria for evaluating the models' performance. The obtained results confirmed the capability of GEP in predicting the discharge coefficient of trapezoidal and rectangular side weirs. The results also revealed the influence of the downstream Froude number for the trapezoidal weir and of the upstream Froude number for the rectangular weir in predicting the discharge coefficient of the two side weirs.

Keywords: discharge coefficient, genetic expression programming, trapezoidal weir

Procedia PDF Downloads 366
7466 Detection of Chaos in General Parametric Model of Infectious Disease

Authors: Javad Khaligh, Aghileh Heydari, Ali Akbar Heydari

Abstract:

Mathematical epidemiological models for the spread of disease through a population are used to predict the prevalence of a disease or to study the impacts of treatment or prevention measures. Initial conditions for these models are measured from statistical data collected from a population. Since these initial conditions can never be exact, the presence of chaos in mathematical models has serious implications for the accuracy of the models as well as for how epidemiologists interpret their findings. This paper confirms the chaotic behavior of a model for dengue fever and an SI model by investigating sensitive dependence on initial conditions, bifurcation, and the 0-1 test under a variety of initial conditions.
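
The 0-1 test mentioned above can be sketched as follows: the scalar time series produced by the model is mapped onto translation variables p and q, the growth of their mean square displacement is correlated with time, and the resulting statistic K is near 1 for chaotic dynamics and near 0 for regular dynamics. This is a hedged, generic sketch of the Gottwald-Melbourne test rather than the paper's code; the logistic map stands in for the epidemic model's output, and the parameter choices are assumptions.

```python
# Illustrative sketch (not the paper's code): the 0-1 test for chaos applied to a
# scalar time series. The chaotic logistic map is a placeholder for model output.
import numpy as np

def zero_one_test(phi, n_c=20, seed=0):
    """Gottwald-Melbourne 0-1 test: K near 1 suggests chaos, near 0 regularity."""
    rng = np.random.default_rng(seed)
    phi = np.asarray(phi, dtype=float)
    N = len(phi)
    ncut = N // 10
    j = np.arange(1, N + 1)
    Ks = []
    for c in rng.uniform(np.pi / 5, 4 * np.pi / 5, n_c):
        p = np.cumsum(phi * np.cos(j * c))       # translation variables
        q = np.cumsum(phi * np.sin(j * c))
        n_arr = np.arange(1, ncut + 1)
        M = np.array([np.mean((p[n:] - p[:-n]) ** 2 + (q[n:] - q[:-n]) ** 2)
                      for n in n_arr])
        # modified mean square displacement: subtract the oscillatory term
        D = M - phi.mean() ** 2 * (1 - np.cos(n_arr * c)) / (1 - np.cos(c))
        Ks.append(np.corrcoef(n_arr, D)[0, 1])
    return np.median(Ks)

# Example: chaotic logistic map (r = 4) vs. a regular sinusoidal signal.
x = np.empty(3000)
x[0] = 0.3
for i in range(1, 3000):
    x[i] = 4.0 * x[i - 1] * (1 - x[i - 1])
print("chaotic signal K:", round(zero_one_test(x), 2))
print("regular signal K:", round(zero_one_test(np.sin(0.1 * np.arange(3000))), 2))
```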

Keywords: epidemiological models, SEIR disease model, bifurcation, chaotic behavior, 0-1 test

Procedia PDF Downloads 297