Search results for: bioinformatic predictions
638 Evaluating the Suitability and Performance of Dynamic Modulus Predictive Models for North Dakota’s Asphalt Mixtures
Authors: Duncan Oteki, Andebut Yeneneh, Daba Gedafa, Nabil Suleiman
Abstract:
Most agencies lack the equipment required to measure the dynamic modulus (|E*|) of asphalt mixtures, necessitating the use of predictive models. This study compared measured |E*| values for nine North Dakota asphalt mixes against predictions from the original Witczak, modified Witczak, and Hirsch models. The influence of temperature on the |E*| models was investigated, and Pavement ME simulations were conducted using measured |E*| and predictions from the most accurate |E*| model. The results revealed that the original Witczak model yielded the lowest Se/Sy and highest R² values, indicating the lowest bias and highest accuracy, while the poorest overall performance was exhibited by the Hirsch model. Using predicted |E*| as inputs in the Pavement ME generated conservative distress predictions compared to using measured |E*|. The original Witczak model was recommended for predicting |E*| for low-reliability pavements in North Dakota.
Keywords: asphalt mixture, binder, dynamic modulus, MEPDG, pavement ME, performance, prediction
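The abstract ranks the models by the standard error ratio (Se/Sy) and R². Below is a minimal sketch of those two statistics using hypothetical measured and predicted |E*| values; this is one common formulation, and the study's exact degree-of-freedom corrections may differ.

```python
import numpy as np

def goodness_of_fit(measured, predicted):
    """Se/Sy ratio and R^2 as commonly used to rank |E*| predictive models.

    Se: standard error of the estimate; Sy: standard deviation of the data.
    A lower Se/Sy and a higher R^2 indicate lower bias and higher accuracy.
    """
    measured = np.asarray(measured, dtype=float)
    predicted = np.asarray(predicted, dtype=float)
    n = len(measured)
    se = np.sqrt(np.sum((measured - predicted) ** 2) / (n - 1))
    sy = np.std(measured, ddof=1)
    ratio = se / sy
    return ratio, 1.0 - ratio ** 2

# Hypothetical measured vs. model-predicted |E*| values (MPa)
measured = [12000, 8500, 4300, 1500, 620]
predicted = [11500, 8900, 4100, 1650, 580]
print("Se/Sy = %.3f, R^2 = %.3f" % goodness_of_fit(measured, predicted))
```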
Procedia PDF Downloads 47
637 Identification of CLV for Online Shoppers Using RFM Matrix: A Case Based on Features of B2C Architecture
Authors: Riktesh Srivastava
Abstract:
Online shopping has undergone an astonishing evolution in the last few years, and it is now apparent that the B2C architecture is becoming a progressively important channel even for traditional brick-and-mortar traders. In this competition, knowing customers and predicting their behavior are extremely important. More importantly, when any customer logs onto the B2C architecture, the traces of their buying patterns can be stored and used for future predictions. Such a prediction is called Customer Lifetime Value (CLV). Earlier, Net Present Value was used for this purpose; however, it ignores two important aspects of the B2C architecture: market risks and the large amount of customer data. Here, RFM (Recency, Frequency, and Monetary value) is used to estimate the CLV, and as the term exemplifies, market risk is well covered. Big data analysis is also covered by RFM, which gives a real exploration of the data and leads to a better estimate of future cash flows from customers. In the present paper, six factors (collected from varied sources) are used to determine what attracts customers to the B2C architecture. For these six factors, RFM is computed for three years (2013, 2014, and 2015). CLV and revenue are the two parameters derived from the RFM analysis, which gives a clear picture of the future predictions.
Keywords: CLV, RFM, revenue, recency, frequency, monetary value
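A minimal sketch of how RFM scores can be computed from a transaction log; the table, snapshot date, and 1-3 quantile scoring scheme are hypothetical illustrations, not the paper's six-factor setup.

```python
import pandas as pd

# Hypothetical transaction log for a B2C store (three customers)
tx = pd.DataFrame({
    "customer": ["A", "A", "B", "C", "C", "C"],
    "date": pd.to_datetime(["2015-01-05", "2015-06-20", "2014-11-02",
                            "2015-03-15", "2015-07-01", "2015-08-09"]),
    "amount": [120.0, 80.0, 45.0, 200.0, 60.0, 150.0],
})

snapshot = pd.Timestamp("2015-12-31")
rfm = tx.groupby("customer").agg(
    recency=("date", lambda d: (snapshot - d.max()).days),  # days since last buy
    frequency=("date", "count"),                            # number of purchases
    monetary=("amount", "sum"),                             # total spend
)
# Score each dimension 1-3 (3 = best); recency is negated so recent = high
rfm["R"] = pd.qcut(-rfm["recency"], 3, labels=[1, 2, 3]).astype(int)
rfm["F"] = pd.qcut(rfm["frequency"], 3, labels=[1, 2, 3]).astype(int)
rfm["M"] = pd.qcut(rfm["monetary"], 3, labels=[1, 2, 3]).astype(int)
rfm["RFM_score"] = rfm[["R", "F", "M"]].sum(axis=1)  # crude CLV-ranking proxy
print(rfm)
```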
Procedia PDF Downloads 220
636 Oil Reservoir Asphaltene Precipitation Estimation during CO2 Injection
Authors: I. Alhajri, G. Zahedi, R. Alazmi, A. Akbari
Abstract:
In this paper, an Artificial Neural Network (ANN) was developed to predict Asphaltene Precipitation (AP) during the injection of carbon dioxide into crude oil reservoirs. In this study, the experimental data from six different oil fields were collected. Seventy percent of the data was used to develop the ANN model, and different ANN architectures were examined. A network with the Trainlm training algorithm was found to be the best network to estimate the AP. To check the validity of the proposed model, the model was used to predict the AP for the remaining thirty percent of the data, which had been held out. The Mean Square Error (MSE) of the prediction was 0.0018, which confirms the excellent prediction capability of the proposed model. In the second part of this study, the ANN model predictions were compared with modified Hirschberg model predictions. The ANN was found to provide more accurate estimates compared to the modified Hirschberg model. Finally, the proposed model was employed to examine the effect of different operating parameters during gas injection on the AP. It was found that the AP is mostly sensitive to the reservoir temperature. Furthermore, increasing the carbon dioxide concentration in the liquid phase increases the AP.
Keywords: artificial neural network, asphaltene, CO2 injection, Hirschberg model, oil reservoirs
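A minimal sketch of the 70/30 train-test workflow with a small feed-forward network. The features and response below are synthetic placeholders, and scikit-learn's MLPRegressor trains with Adam or L-BFGS rather than the Levenberg-Marquardt (Trainlm) algorithm used in the study.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor
from sklearn.metrics import mean_squared_error

rng = np.random.default_rng(0)
# Hypothetical inputs: reservoir temperature (C), pressure (MPa), CO2 fraction
X = rng.uniform([40.0, 10.0, 0.0], [120.0, 40.0, 0.8], size=(200, 3))
# Synthetic asphaltene-precipitation response, for illustration only
y = 0.02 * X[:, 0] + 0.5 * X[:, 2] + rng.normal(0, 0.05, 200)

# 70/30 split as in the study
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
model = MLPRegressor(hidden_layer_sizes=(10,), max_iter=5000,
                     random_state=0).fit(X_tr, y_tr)
print("hold-out MSE:", mean_squared_error(y_te, model.predict(X_te)))
```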
Procedia PDF Downloads 364
635 Transfer Learning for Protein Structure Classification at Low Resolution
Authors: Alexander Hudson, Shaogang Gong
Abstract:
Structure determination is key to understanding protein function at a molecular level. Whilst significant advances have been made in predicting structure and function from amino acid sequence, researchers must still rely on expensive, time-consuming analytical methods to visualise detailed protein conformation. In this study, we demonstrate that it is possible to make accurate (≥80%) predictions of protein class and architecture from structures determined at low (>3Å) resolution, using a deep convolutional neural network trained on high-resolution (≤3Å) structures represented as 2D matrices. Thus, we provide proof of concept for high-speed, low-cost protein structure classification at low resolution, and a basis for extension to prediction of function. We investigate the impact of the input representation on classification performance, showing that side-chain information may not be necessary for fine-grained structure predictions. Finally, we confirm that high-resolution, low-resolution and NMR-determined structures inhabit a common feature space, and thus provide a theoretical foundation for boosting with single-image super-resolution.
Keywords: transfer learning, protein distance maps, protein structure classification, neural networks
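The keywords point to protein distance maps as the 2D input representation. A minimal sketch of building such a map from Cα coordinates; the coordinates below are hypothetical, and the paper's exact representation may differ.

```python
import numpy as np

def distance_map(ca_coords):
    """Pairwise C-alpha distance matrix: a 2D representation of a protein
    structure suitable as CNN input for class/architecture prediction."""
    diff = ca_coords[:, None, :] - ca_coords[None, :, :]
    return np.sqrt((diff ** 2).sum(axis=-1))

# Hypothetical C-alpha coordinates of a 5-residue fragment (angstroms)
coords = np.array([[0.0, 0.0, 0.0], [3.8, 0.0, 0.0], [7.4, 1.2, 0.0],
                   [10.9, 2.8, 0.5], [14.2, 4.9, 1.1]])
print(distance_map(coords).round(1))
```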
Procedia PDF Downloads 136
634 Verification and Application of Finite Element Model Developed for Flood Routing in Rivers
Authors: A. L. Qureshi, A. A. Mahessar, A. Baloch
Abstract:
Flood wave propagation in river channel flow can be described by the nonlinear equations of motion for unsteady flow. However, it is difficult to find an analytical solution of these complex nonlinear equations. Hence, numerical models should be verified against field data and other numerical predictions. This paper presents the verification of a finite element model developed for unsteady flow in open channels. The results of the proposed model match well with both the Preissmann scheme and the HEC-RAS model for discharge hydrographs over a 29 km river reach at both sites (15 km from the upstream end and at the downstream end). The model also compares well with the Preissmann scheme for the flow depth (stage) hydrographs. The proposed model has also been applied to forecast daily discharges 400 km downstream of the Sukkur barrage, where it demonstrates accurate predictions against observed daily discharges. Hence, this model may be utilized for predicting floods and issuing advance warnings about flood hazards.
Keywords: finite element method, Preissmann scheme, HEC-RAS, flood forecasting, Indus river
Procedia PDF Downloads 503
633 Predicting Match Outcomes in Team Sport via Machine Learning: Evidence from National Basketball Association
Authors: Jacky Liu
Abstract:
This paper develops a team sports outcome prediction system with potential for wide-ranging applications across various disciplines. Despite significant advancements in predictive analytics, existing studies in sports outcome prediction possess considerable limitations, including insufficient feature engineering and underutilization of advanced machine learning techniques, among others. To address these issues, we extend the Sports Cross Industry Standard Process for Data Mining (SRP-CRISP-DM) framework and propose a unique, comprehensive predictive system, using National Basketball Association (NBA) data as an example to test this extended framework. Our approach follows a holistic methodology in feature engineering, employing both time series and non-time-series data, as well as conducting exploratory data analysis and feature selection. Furthermore, we contribute to the discourse on target variable choice in team sports outcome prediction, asserting that point spread prediction yields higher profits as opposed to game-winner predictions. Using machine learning algorithms, particularly XGBoost, results in a significant improvement in predictive accuracy of team sports outcomes. Applied to point spread betting strategies, it offers an astounding annual return of approximately 900% on an initial investment of $100. Our findings not only contribute to academic literature but have critical practical implications for sports betting. Our study advances the understanding of team sports outcome prediction, a burgeoning area in complex system prediction, and paves the way for potential profitability and more informed decision making in sports betting markets.
Keywords: machine learning, team sports, game outcome prediction, sports betting, profits simulation
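A minimal sketch of the point-spread idea: fit an XGBoost regressor to engineered features and simulate bets against a bookmaker line. All data, thresholds, and odds below are synthetic illustrations, not the paper's strategy or its reported 900% return.

```python
import numpy as np
from xgboost import XGBRegressor

rng = np.random.default_rng(7)
# Hypothetical engineered features (rolling team stats, rest days, etc.)
X = rng.normal(size=(500, 8))
spread = X @ rng.normal(size=8) + rng.normal(0, 5, 500)  # synthetic point spread

X_tr, X_te, y_tr, y_te = X[:400], X[400:], spread[:400], spread[400:]
model = XGBRegressor(n_estimators=300, max_depth=4,
                     learning_rate=0.05).fit(X_tr, y_tr)
pred = model.predict(X_te)

# Toy betting simulation: bet only when the model disagrees with the line
line = y_te + rng.normal(0, 4, len(y_te))   # hypothetical bookmaker line
bets = np.abs(pred - line) > 3.0            # demand a sizeable edge
wins = np.sign(pred - line) == np.sign(y_te - line)
profit = np.where(wins, 0.91, -1.0)[bets].sum()   # -110-style odds, 1-unit stakes
print(f"bets placed: {bets.sum()}, profit: {profit:.1f} units")
```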
Procedia PDF Downloads 102
632 A Trend Based Forecasting Framework of the ATA Method and Its Performance on the M3-Competition Data
Authors: H. Taylan Selamlar, I. Yavuz, G. Yapar
Abstract:
It is difficult to make predictions, especially about the future, and making accurate predictions is not always easy. However, better predictions remain the foundation of all science, and therefore the development of accurate, robust, and reliable forecasting methods is very important. Numerous forecasting methods have been proposed and studied in the literature. There are still two dominant major forecasting methods: Box-Jenkins ARIMA and Exponential Smoothing (ES), and new methods are still derived from or inspired by them. After more than 50 years of widespread use, exponential smoothing is still one of the most practically relevant forecasting methods available due to its simplicity, robustness, and accuracy as an automatic forecasting procedure, especially in the famous M-Competitions. Despite its success and widespread use in many areas, ES models have some shortcomings that negatively affect the accuracy of forecasts. Therefore, a new forecasting method, called the ATA method, is proposed in this study to cope with these shortcomings. This new method is obtained from traditional ES models by modifying the smoothing parameters; therefore, both methods have similar structural forms, and ATA can be easily adapted to all of the individual ES models. However, ATA has many advantages due to its innovative new weighting scheme. In this paper, the focus is on modeling the trend component and handling seasonality patterns by utilizing classical decomposition. Therefore, the ATA method is expanded to higher-order ES methods for additive, multiplicative, additive damped, and multiplicative damped trend components. The proposed models are called ATA trended models, and their predictive performances are compared to their counterpart ES models on the M3 competition data set, since it is still the most recent and comprehensive time-series data collection available. It is shown that the models outperform their counterparts in almost all settings, and when a model selection is carried out amongst these trended models, ATA outperforms all of the competitors in the M3 competition for both short-term and long-term forecasting horizons when the models’ forecasting accuracies are compared based on popular error metrics.
Keywords: accuracy, exponential smoothing, forecasting, initial value
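A minimal sketch of the contrast between a fixed ES weight and ATA's time-varying weight, assuming the commonly cited ATA level update S_t = (p/t)·X_t + (1 − p/t)·S_{t−1}; the series and the value of p are arbitrary, and the full ATA trended models are more elaborate than this.

```python
import numpy as np

def es_level(x, alpha=0.3):
    """Simple exponential smoothing: constant weight alpha on each new point."""
    s = np.empty(len(x)); s[0] = x[0]
    for t in range(1, len(x)):
        s[t] = alpha * x[t] + (1 - alpha) * s[t - 1]
    return s

def ata_level(x, p=2):
    """ATA-style level update (assumed form): the weight p/t decays with time,
    so early observations are weighted heavily and the level stabilizes."""
    s = np.empty(len(x)); s[0] = x[0]
    for t in range(1, len(x)):
        w = min(p / (t + 1), 1.0)   # t + 1 = 1-based time index
        s[t] = w * x[t] + (1 - w) * s[t - 1]
    return s

series = np.array([10.0, 12.0, 11.5, 13.0, 12.2, 14.1, 13.8])
print("ES :", es_level(series).round(2))
print("ATA:", ata_level(series).round(2))
```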
Procedia PDF Downloads 177
631 Evaluation Methods for Question Decomposition Formalism
Authors: Aviv Yaniv, Ron Ben Arosh, Nadav Gasner, Michael Konviser, Arbel Yaniv
Abstract:
This paper introduces two methods for the evaluation of Question Decomposition Meaning Representation (QDMR) as predicted by a sequence-to-sequence model and the COPYNET parser for natural language question processing, motivated by the fact that previous evaluation metrics used for this task do not take into account some characteristics of the representation, such as its partial ordering structure. To this end, several heuristics to extract such partial dependencies are formulated, followed by the hereby proposed evaluation methods, denoted Proportional Graph Matcher (PGM) and Conversion to Normal String Representation (Nor-Str), designed to better capture the accuracy level of QDMR predictions. Experiments are conducted to demonstrate the efficacy of the proposed evaluation methods and show the added value suggested by one of them, Nor-Str, for better distinguishing between high- and low-quality QDMR when predicted by models such as COPYNET. This work represents an important step forward in the development of better evaluation methods for QDMR predictions, which will be critical for improving the accuracy and reliability of natural language question-answering systems.
Keywords: NLP, question answering, question decomposition meaning representation, QDMR evaluation metrics
Procedia PDF Downloads 78
630 Seed Priming, Treatments and Germination
Authors: Atakan Efe Akpınar, Zeynep Demir
Abstract:
Seed priming technologies are frequently used nowadays to increase the germination potential and stress tolerance of seeds. These treatments might be beneficial for native species as well as crops. Different priming treatments can be used depending on the type of plant and the morphology and physiology of the seed. Moreover, these may be various physical, chemical, and/or biological treatments. To improve seed priming studies, new ideas need to be brought into this technological sector of the agri-seed industry. In this study, seed priming was carried out using plant extracts. Firstly, extracts prepared from plant leaves, roots, or fruit parts were obtained for use in the priming treatments. Then, seeds were kept in solutions containing the plant extracts at 20°C for 48 hours. Seeds without any treatment were evaluated as the control group. At the end of the priming applications, the seeds were dried superficially at 25°C. Seeds were analyzed for vigor (normal germination rate, germination time, germination index, etc.). In the future, seed priming applications can expand to multidisciplinary research combining digital, bioinformatic, and molecular tools.
Keywords: seed priming, plant extracts, germination, biology
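A minimal sketch of the vigor indices the abstract lists, using standard formulas (germination percentage, mean germination time, germination index) on hypothetical daily germination counts.

```python
# Common seed-vigor indices on hypothetical daily counts:
# germination percentage, mean germination time (MGT), germination index (GI).
counts_per_day = [0, 2, 10, 15, 8, 3]   # seeds germinating on days 1..6
total_seeds = 50

germinated = sum(counts_per_day)
germination_pct = 100.0 * germinated / total_seeds
# MGT = sum(n_i * t_i) / sum(n_i); GI = sum(n_i / t_i)
mgt = sum(n * (d + 1) for d, n in enumerate(counts_per_day)) / germinated
gi = sum(n / (d + 1) for d, n in enumerate(counts_per_day))

print(f"G% = {germination_pct:.1f}, MGT = {mgt:.2f} d, GI = {gi:.2f}")
```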
Procedia PDF Downloads 76
629 Assessing the Influence of Station Density on Geostatistical Prediction of Groundwater Levels in a Semi-arid Watershed of Karnataka
Authors: Sakshi Dhumale, Madhushree C., Amba Shetty
Abstract:
The effect of station density on the geostatistical prediction of groundwater levels is of critical importance to ensure accurate and reliable predictions. Monitoring station density directly impacts the accuracy and reliability of geostatistical predictions by influencing the model's ability to capture localized variations and small-scale features in groundwater levels. This is particularly crucial in regions with complex hydrogeological conditions and significant spatial heterogeneity. Insufficient station density can result in larger prediction uncertainties, as the model may struggle to adequately represent the spatial variability and correlation patterns of the data. On the other hand, an optimal distribution of monitoring stations enables effective coverage of the study area and captures the spatial variability of groundwater levels more comprehensively. In this study, we investigate the effect of station density on the predictive performance of groundwater levels using the geostatistical technique of Ordinary Kriging. The research utilizes groundwater level data collected from 121 observation wells within the semi-arid Berambadi watershed, gathered over a six-year period (2010-2015) from the Indian Institute of Science (IISc), Bengaluru. The dataset is partitioned into seven subsets representing varying sampling densities, ranging from 15% (12 wells) to 100% (121 wells) of the total well network. The results obtained from different monitoring networks are compared against the existing groundwater monitoring network established by the Central Ground Water Board (CGWB). The findings of this study demonstrate that higher station densities significantly enhance the accuracy of geostatistical predictions for groundwater levels. The increased number of monitoring stations enables improved interpolation accuracy and captures finer-scale variations in groundwater levels. These results shed light on the relationship between station density and the geostatistical prediction of groundwater levels, emphasizing the importance of appropriate station densities to ensure accurate and reliable predictions. The insights gained from this study have practical implications for designing and optimizing monitoring networks, facilitating effective groundwater level assessments, and enabling sustainable management of groundwater resources.
Keywords: station density, geostatistical prediction, groundwater levels, monitoring networks, interpolation accuracy, spatial variability
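A minimal sketch of the density experiment using Ordinary Kriging from the pykrige package: random subsets of a synthetic well network are kriged, and the mean kriging variance is tracked as a proxy for prediction uncertainty. The locations, levels, and variogram model are all hypothetical, not the Berambadi data.

```python
import numpy as np
from pykrige.ok import OrdinaryKriging

rng = np.random.default_rng(1)
# Hypothetical well locations (km) and groundwater levels (m below ground)
x, y = rng.uniform(0, 30, 121), rng.uniform(0, 30, 121)
z = 5 + 0.1 * x + 0.05 * y + rng.normal(0, 0.3, 121)

gx, gy = np.linspace(0, 30, 50), np.linspace(0, 30, 50)  # interpolation grid

for frac in (0.15, 0.50, 1.00):           # varying station densities
    n = int(frac * 121)
    idx = rng.choice(121, n, replace=False)
    ok = OrdinaryKriging(x[idx], y[idx], z[idx], variogram_model="spherical")
    z_hat, var = ok.execute("grid", gx, gy)
    print(f"{n:>3} wells: mean kriging variance = {var.mean():.3f}")
```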
Procedia PDF Downloads 58
628 Influence of Tactile Symbol Size on Its Perceptibility in Consideration of Effect of Aging
Authors: T. Nishimura, K. Doi, H. Fujimoto, T. Wada
Abstract:
We conducted perception experiments on tactile symbols to elucidate the impact of symbol size on the level of perceptibility. This study was based on the accessible design perspective and aimed at expanding the availability of tactile symbols for the visually impaired who are unable to read Braille characters. In particular, this study targeted people with acquired visual impairments as users of the tactile symbols. The subjects (young and elderly individuals) in this study had normal vision. They were asked to participate in the experiments to identify tactile symbols without being able to see their hands during the experiments. This study investigated the relation between the size and perceptibility of tactile symbols based on an examination using test pieces of the symbols in different sizes. The results revealed that the error rates for both young and elderly subjects converged to almost 0% when 12 mm tactile symbols were used. The findings also showed that the error rate was low and subjects could identify the symbols within 5 s when 16 mm tactile symbols were introduced.
Keywords: accessible design, tactile sense, tactile symbols, bioinformatic
Procedia PDF Downloads 351
627 Bioinformatics Analysis of DGAT1 Gene in Domestic Ruminants
Authors: Sirous Eydivandi
Abstract:
The diacylglycerol O-acyltransferase (DGAT1) gene encodes the diacylglycerol acyltransferase enzyme, which plays an important role in glycerolipid metabolism. DGAT1 is considered to be the key enzyme in controlling the synthesis of triglycerides in adipocytes. This enzyme catalyzes the final step of triglyceride synthesis, transforming diacylglycerol (DAG) into triacylglycerol (TAG). A total of 20 DGAT1 gene sequences and corresponding amino acids belonging to four species (cattle, goats, sheep, and yaks) were analyzed, and the differentiation within and among the species was also studied. The length of the DGAT1 gene varies greatly, from 1527 to 1785 bp, due to deletion, insertion, and stop codon mutation resulting in elongation. Observed genetic diversity was higher among species than within species, and goats had more polymorphisms than any other species. Novel amino acid variation sites were detected within several species, which might be used to illustrate the functional variation. Differentiation of the DGAT1 gene was obvious among species, and the clustering result was consistent with the taxonomy in the National Center for Biotechnology Information.
Keywords: DGAT1 gene, bioinformatics, ruminants, biotechnology information
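A minimal sketch of the kind of clustering described: pairwise identity distances over an alignment followed by neighbor-joining, via Biopython. The four short sequences are invented stand-ins for the real DGAT1 data.

```python
from io import StringIO
from Bio import AlignIO, Phylo
from Bio.Phylo.TreeConstruction import DistanceCalculator, DistanceTreeConstructor

# Hypothetical aligned DGAT1 fragments for four ruminant species
aln = AlignIO.read(StringIO(
    ">cattle\nATGGCGGTGCTGGACA\n"
    ">goat\nATGGCAGTGTTGGACA\n"
    ">sheep\nATGGCAGTGCTGGACA\n"
    ">yak\nATGGCGGTGCTGGATA\n"), "fasta")

dm = DistanceCalculator("identity").get_distance(aln)  # pairwise distance matrix
tree = DistanceTreeConstructor().nj(dm)                # neighbor-joining tree
Phylo.draw_ascii(tree)
```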
Procedia PDF Downloads 491
626 Creep Analysis and Rupture Evaluation of High Temperature Materials
Authors: Yuexi Xiong, Jingwu He
Abstract:
The structural components in an energy facility, such as steam turbine machines, are operated under high stress and elevated temperature over extended periods, and thus creep deformation and creep rupture failure are important issues that need to be addressed in the design of such components. There are numerous creep models used for creep analysis that have both advantages and disadvantages in terms of accuracy and efficiency. The Isochronous Creep Analysis is one of the simplified approaches, in which a full time-dependent creep analysis is avoided and instead an elastic-plastic analysis is conducted at each time point. This approach has been established based on the rupture-dependent creep equations using the well-known Larson-Miller parameter. In this paper, some fundamental aspects of creep deformation and the rupture-dependent creep models are reviewed, and the analysis procedures using isochronous creep curves are discussed. Four rupture failure criteria are examined from creep fundamental perspectives: Stress Damage, Strain Damage, Strain Rate Damage, and Strain Capability. The accuracy of these criteria in predicting creep life is discussed, and applications of the creep analysis procedures and failure predictions for simple models are presented. In addition, a new failure criterion is proposed to improve the accuracy and effectiveness of the existing criteria. Comparisons are made between the existing criteria and the new one using several example materials. Both strain increase and stress relaxation form a full picture of the creep behaviour of a material under high temperature over an extended period, and it is important to bear this in mind when dealing with creep problems. Accordingly, there are two sets of rupture-dependent creep equations. While the rupture strength vs. LMP equation shows how the rupture time depends on the stress level under load-controlled conditions, the strain rate vs. rupture time equation reflects how the rupture time behaves under strain-controlled conditions. Among the four existing failure criteria for rupture life prediction, the Stress Damage and Strain Damage Criteria provide the most conservative and non-conservative predictions, respectively. The Strain Rate and Strain Capability Criteria provide predictions in between, which are believed to be more accurate because strain rate and strain capability are quantities better suited than stress to reflect the creep rupture behaviour. A modified Strain Capability Criterion is proposed making use of the two sets of creep equations and is therefore considered to be more accurate than the original Strain Capability Criterion.
Keywords: creep analysis, high temperature materials, rupture evaluation, steam turbine machines
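A minimal sketch of the Larson-Miller parameter the abstract relies on, LMP = T(C + log₁₀ t_r), and its inversion to extrapolate rupture life across temperatures; the constant C = 20 and the test numbers are illustrative assumptions.

```python
import math

def larson_miller(T_kelvin, t_rupture_h, C=20.0):
    """Larson-Miller parameter: LMP = T * (C + log10(t_r)).
    C = 20 is a typical assumed material constant."""
    return T_kelvin * (C + math.log10(t_rupture_h))

def rupture_time(T_kelvin, lmp, C=20.0):
    """Invert the LMP relation for rupture time at a given temperature."""
    return 10.0 ** (lmp / T_kelvin - C)

# A specimen failing after 1e4 h at 873 K ...
lmp = larson_miller(873.0, 1.0e4)
# ... implies this rupture life if the service temperature rises by 20 K:
print(f"LMP = {lmp:.0f}; t_r(893 K) = {rupture_time(893.0, lmp):.0f} h")
```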
Procedia PDF Downloads 290
625 Estimating the Power Output of Photovoltaics in Kuwait Using a Monte Carlo Approach
Authors: Mohammad Alshawaf, Rahmat Poudineh, Nawaf Alhajeri
Abstract:
The power generated from photovoltaic (PV) modules is non-dispatchable on demand due to the stochastic nature of solar radiation. The random variations in the measured intensity of solar irradiance are due to clouds and, in the case of arid regions, dust storms, which decrease the intensity of solar irradiance. Therefore, modeling PV power output using average, maximum, or minimum solar irradiance values is insufficient to predict power generation reliably. The overall objective of this paper is to predict the power output of PV modules using a Monte Carlo approach based on the weather and solar conditions measured in Kuwait. Given the 250 Wp PV module used in the study, the average daily power output is 1021 Wh/day. The maximum output was generated in April (1187 Wh/day) and the minimum in January (823 Wh/day). The certainty of the daily predictions varies seasonally and according to the weather conditions. The output predictions were far more certain in the summer months; for example, the 80% certainty range for August is 89 Wh/day, whereas the 80% certainty range for April is 250 Wh/day.
Keywords: Monte Carlo, solar energy, variable renewable energy, Kuwait
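A minimal sketch of the Monte Carlo idea: sample daily insolation from an assumed distribution, convert to module energy, and report percentile bands. The lognormal parameters and the 0.80 derate factor are invented placeholders, not the measured Kuwaiti conditions.

```python
import numpy as np

rng = np.random.default_rng(42)
# Assumed distribution of daily plane-of-array insolation (kWh/m^2/day);
# a lognormal crudely mimics attenuation by clouds and dust storms.
insolation = rng.lognormal(mean=np.log(5.5), sigma=0.25, size=10_000)

P_STC = 250.0   # module rating (Wp), as in the study
derate = 0.80   # assumed combined losses (soiling, temperature, wiring)
energy_wh = insolation * P_STC * derate   # Wh/day (1 kW/m^2 STC reference)

lo, med, hi = np.percentile(energy_wh, [10, 50, 90])
print(f"median {med:.0f} Wh/day; 80% range {hi - lo:.0f} Wh/day [{lo:.0f}, {hi:.0f}]")
```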
Procedia PDF Downloads 131
624 Long- and Short-Term Impacts of COVID-19 and Gold Price on Price Volatility: A Comparative Study of MIDAS and GARCH-MIDAS Models for USA Crude Oil
Authors: Samir K. Safi
Abstract:
The purpose of this study was to compare the performance of two types of models, namely MIDAS and GARCH-MIDAS, in predicting the volatility of crude oil returns based on gold price returns and the COVID-19 pandemic. The study aimed to identify which model would provide more accurate short-term and long-term predictions and which model would perform better in handling the increased volatility caused by the pandemic. The findings of the study revealed that the MIDAS model performed better in predicting short-term and long-term volatility before the pandemic, while the GARCH-MIDAS model performed significantly better in handling the increased volatility caused by the pandemic. The study highlights the importance of selecting appropriate models to handle the complexities of real-world data and shows that the choice of model can significantly impact the accuracy of predictions. The practical implications of model selection and potential methodological adjustments for future research are highlighted and discussed.
Keywords: GARCH-MIDAS, MIDAS, crude oil, gold, COVID-19, volatility
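At the core of any MIDAS regression is a parsimonious weighting of high-frequency regressors (here, daily gold returns) into a low-frequency equation. A minimal sketch of the standard beta-lag polynomial; the (a, b) shape parameters and the data are illustrative, not estimates from the study.

```python
import numpy as np

def beta_weights(K, a=1.0, b=5.0):
    """Beta-lag polynomial commonly used in MIDAS regressions:
    w_k proportional to (k/K)^(a-1) * (1 - k/K)^(b-1), normalized to sum to 1.
    With a = 1 and b > 1 the weights decay with the lag."""
    k = np.arange(1, K + 1) / (K + 1)   # interior points avoid 0 and 1
    w = k ** (a - 1) * (1 - k) ** (b - 1)
    return w / w.sum()

# Aggregate 22 hypothetical daily gold returns into one monthly regressor
w = beta_weights(22)
daily_gold = np.random.default_rng(3).normal(0, 0.01, 22)
print(f"weights: {w[0]:.3f} ... {w[-1]:.5f}; aggregated x = {w @ daily_gold:.5f}")
```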
Procedia PDF Downloads 65
623 An Analytical Wall Function for 2-D Shock Wave/Turbulent Boundary Layer Interactions
Authors: X. Wang, T. J. Craft, H. Iacovides
Abstract:
When handling the near-wall regions of turbulent flows, it is necessary to account for the viscous effects which are important over the thin near-wall layers. Low-Reynolds-number turbulence models do this by including explicit viscous and damping terms which become active in the near-wall regions, and by using very fine near-wall grids to properly resolve the steep gradients present. In order to overcome the cost associated with low-Re turbulence models, a more advanced wall function approach has been implemented within OpenFOAM and tested, together with a standard log-law based wall function, in the prediction of flows which involve 2-D shock wave/turbulent boundary layer interactions (SWTBLIs). On the whole, from the calculation of the impinging shock interaction, the three turbulence modelling strategies, the Launder-Sharma k-ε model with Yap correction (LS), the high-Re k-ε model with standard wall function (SWF) and with analytical wall function (AWF), display good predictions of wall pressure. However, the SWF approach tends to underestimate the tendency of the flow to separate as a result of the SWTBLI. The analytical wall function, on the other hand, is able to reproduce the shock-induced flow separation and returns predictions similar to those of the low-Re model, using a much coarser mesh.
Keywords: SWTBLIs, skin-friction, turbulence modeling, wall function
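For reference, a minimal sketch of the standard log-law relation that underlies the SWF approach, u⁺ = (1/κ) ln(E y⁺), with commonly used constants assumed (κ = 0.41, E = 9.8); the analytical wall function of the paper is a more elaborate formulation not reproduced here.

```python
import math

def u_plus_log_law(y_plus, kappa=0.41, E=9.8):
    """Standard log-law wall function, u+ = (1/kappa) * ln(E * y+),
    with assumed constants; valid roughly for 30 < y+ < 300."""
    return math.log(E * y_plus) / kappa

for yp in (30, 100, 300):
    print(f"y+ = {yp:>3}: u+ = {u_plus_log_law(yp):.2f}")
```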
Procedia PDF Downloads 346
622 Uncertainty in Building Energy Performance Analysis at Different Stages of the Building’s Lifecycle
Authors: Elham Delzendeh, Song Wu, Mustafa Al-Adhami, Rima Alaaeddine
Abstract:
Over the last 15 years, prediction of energy consumption has become a common practice and necessity at different stages of the building’s lifecycle, particularly at the design and post-occupancy stages for planning and maintenance purposes. This is due to the ever-growing response of governments to address sustainability and the reduction of CO₂ emissions in the building sector. However, there is a level of uncertainty in the estimation of energy consumption in buildings. The accuracy of energy consumption predictions is directly related to the precision of the initial inputs used in the energy assessment process. In this study, multiple cases of large non-residential buildings at the design, construction, and post-occupancy stages are investigated. The energy consumption process and inputs, and the actual and predicted energy consumption of the cases, are analysed. The findings of this study have pointed out and evidenced various parameters that cause uncertainty in the prediction of energy consumption in buildings, such as modelling, location data, and occupant behaviour. In addition, the unavailability and insufficiency of energy-consumption-related inputs at different stages of the building’s lifecycle are classified and categorized. Understanding the roots of uncertainty in building energy analysis will help energy modellers and energy simulation software developers reach more accurate energy consumption predictions in buildings.
Keywords: building lifecycle, efficiency, energy analysis, energy performance, uncertainty
Procedia PDF Downloads 137
621 Data Refinement Enhances The Accuracy of Short-Term Traffic Latency Prediction
Authors: Man Fung Ho, Lap So, Jiaqi Zhang, Yuheng Zhao, Huiyang Lu, Tat Shing Choi, K. Y. Michael Wong
Abstract:
Nowadays, a tremendous amount of data is available in the transportation system, enabling the development of various machine learning approaches to make short-term latency predictions. A natural question is then the choice of relevant information to enable accurate predictions. Using traffic data collected from the Taiwan Freeway System, we consider the prediction of short-term latency of a freeway segment with a length of 17 km covering 5 measurement points, each collecting vehicle-by-vehicle data through the electronic toll collection system. The processed data include the past latencies of the freeway segment with different time lags, the traffic conditions of the individual segments (the accumulations, the traffic fluxes, the entrance and exit rates), the total accumulations, and the weekday latency profiles obtained by Gaussian process regression of past data. We arrive at several important conclusions about how data should be refined to obtain accurate predictions, which have implications for future system-wide latency predictions. (1) We find that the prediction of median latency is much more accurate and meaningful than the prediction of average latency, as the latter is plagued by outliers. This is verified by machine-learning prediction using XGBoost that yields a 35% improvement in the mean square error of the 5-minute averaged latencies. (2) We find that the median latency of the segment 15 minutes ago is a very good baseline for performance comparison, and we have evidence that further improvement is achieved by machine learning approaches such as XGBoost and Long Short-Term Memory (LSTM). (3) By analyzing the feature importance score in XGBoost and calculating the mutual information between the inputs and the latencies to be predicted, we identify a sequence of inputs ranked in importance. It confirms that the past latencies are most informative of the predicted latencies, followed by the total accumulation, whereas inputs such as the entrance and exit rates are uninformative. It also confirms that the inputs are much less informative of the average latencies than the median latencies. (4) For predicting the latencies of segments composed of two or three sub-segments, summing up the predicted latencies of each sub-segment is more accurate than the one-step prediction of the whole segment, especially with the latency prediction of the downstream sub-segments trained to anticipate latencies several minutes ahead. The duration of the anticipation time is an increasing function of the traveling time of the upstream segment. The above findings have important implications for predicting the full set of latencies among the various locations in the freeway system.
Keywords: data refinement, machine learning, mutual information, short-term latency prediction
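A minimal sketch of two of the listed steps: ranking inputs by mutual information with the target and benchmarking against the 15-minutes-ago median baseline. The synthetic features below are stand-ins for the Taiwan Freeway variables.

```python
import numpy as np
from sklearn.feature_selection import mutual_info_regression

rng = np.random.default_rng(5)
n = 2000
past_median = rng.gamma(9, 1.0, n)                    # median latency 15 min ago
total_acc = 50 * past_median + rng.normal(0, 40, n)   # total accumulation
exit_rate = rng.normal(30, 5, n)                      # built to be uninformative
future_median = past_median + 0.002 * total_acc + rng.normal(0, 0.5, n)

X = np.column_stack([past_median, total_acc, exit_rate])
mi = mutual_info_regression(X, future_median, random_state=0)
for name, v in zip(["past_median", "total_acc", "exit_rate"], mi):
    print(f"MI({name}) = {v:.3f}")

# Baseline: carry forward the median latency observed 15 minutes earlier
print(f"baseline MSE = {np.mean((future_median - past_median) ** 2):.3f}")
```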
Procedia PDF Downloads 169
620 Sorghum Grains Grading for Food, Feed, and Fuel Using NIR Spectroscopy
Authors: Irsa Ejaz, Siyang He, Wei Li, Naiyue Hu, Chaochen Tang, Songbo Li, Meng Li, Boubacar Diallo, Guanghui Xie, Kang Yu
Abstract:
Background: Near-infrared spectroscopy (NIR) is a non-destructive, fast, and low-cost method to measure the grain quality of different cereals. Previously reported NIR model calibrations using whole-grain spectra had moderate accuracy. Improved predictions are achievable by using spectra collected from flour samples rather than from whole grains. However, the feasibility of determining the critical biochemicals related to the classifications for food, feed, and fuel products has not been adequately investigated. Objectives: To evaluate the feasibility of using NIRS and the influence of four sample types (whole grains, flours, hulled grain flours, and hull-less grain flours) on the prediction of chemical components to improve the grain sorting efficiency for human food, animal feed, and biofuel. Methods: NIR was applied in this study to determine the eight biochemicals in four types of sorghum samples: hulled grain flours, hull-less grain flours, whole grains, and grain flours. A total of 20 hybrids of sorghum grains were selected from two locations in China. Using the NIR spectra together with wet-chemically measured biochemical data, partial least squares regression (PLSR) was used to construct the prediction models. Results: The results showed that sorghum grain morphology and sample format affected the prediction of biochemicals. Using NIR data of grain flours generally improved the prediction compared with the use of NIR data of whole grains. In addition, using the spectra of whole grains enabled comparable predictions, which are recommended when a non-destructive and rapid analysis is required. Compared with the hulled grain flours, hull-less grain flours allowed for improved predictions for tannin, cellulose, and hemicellulose using NIR data. Conclusion: The established PLSR models could enable food, feed, and fuel producers to efficiently evaluate a large number of samples by predicting the required biochemical components in sorghum grains without destruction.
Keywords: FT-NIR, sorghum grains, biochemical composition, food, feed, fuel, PLSR
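A minimal sketch of the PLSR step: regress a constituent (say, tannin content) on spectra and cross-validate. The spectra and response below are synthetic, and n_components = 8 is an arbitrary choice, not the study's calibration.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(11)
# Synthetic NIR spectra: 100 samples x 200 wavelengths (random-walk smoothness)
X = rng.normal(size=(100, 200)).cumsum(axis=1)
beta = np.zeros(200)
beta[40:60] = 0.05                      # one absorbing band drives the response
y = X @ beta + rng.normal(0, 0.2, 100)  # e.g., tannin content

pls = PLSRegression(n_components=8)     # component count chosen arbitrarily
r2 = cross_val_score(pls, X, y, cv=5, scoring="r2")
print(f"cross-validated R^2: {r2.mean():.2f} +/- {r2.std():.2f}")
```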
Procedia PDF Downloads 69
619 Using Soil Texture Field Observations as Ordinal Qualitative Variables for Digital Soil Mapping
Authors: Anne C. Richer-De-Forges, Dominique Arrouays, Songchao Chen, Mercedes Roman Dobarco
Abstract:
Most of the digital soil mapping (DSM) products rely on machine learning (ML) prediction models and/or the use of pedotransfer functions (PTF), in which calibration data come from soil analyses performed in labs. However, many other observations (often qualitative, nominal, or ordinal) could be used as proxies of lab measurements or as input data for ML or PTF predictions. DSM and ML are briefly described with some examples taken from the literature. Then, we explore the potential of an ordinal qualitative variable, i.e., the hand-feel soil texture (HFST), estimating the mineral particle-size distribution (PSD): % of clay (0-2µm), silt (2-50µm) and sand (50-2000µm) in 15 classes. The PSD can also be measured in the lab (LAST) to determine the exact proportion of these particle sizes. However, due to cost constraints, HFST observations are much more numerous and spatially dense than LAST. Soil texture (ST) is a very important soil parameter to map, as it controls many of the soil properties and functions. Therefore comes an essential question: is it possible to use HFST as a proxy of LAST for calibration and/or validation of DSM predictions of ST? To answer this question, the first step is to compare HFST with LAST on a representative set where both are available. This comparison was made on ca 17,400 samples representative of a French region (34,000 km²). The accuracy of HFST was assessed, and each HFST class was characterized by a probability distribution function (PDF) of its LAST values. This enables HFST observations to be randomly replaced by LAST values while respecting the previously calculated PDF, and results in a very large increase of observations available for the calibration or validation of PTF and ML predictions. Some preliminary results are shown. First, the comparison between HFST classes and LAST analyses showed that accuracies could be considered very good when compared to other studies. The causes of some inconsistencies were explored, and most of them were well explained by other soil characteristics. Then we show some examples applying these relationships and the increased data to several issues related to DSM. The first issue is: do the established PDF functions enable the use of HFST class observations to improve the LAST soil texture prediction? For this objective, we replaced all topsoil HFST observations by values drawn from the PDF (100 replicates). Results were promising for the PTF we tested (a PTF predicting soil water holding capacity). For the question related to the ML prediction of LAST soil texture in the region, we did the same kind of replacement, but implemented a 10-fold cross-validation using points where we had LAST values. We obtained only preliminary results, but they were rather promising. We then show another example illustrating the potential of using HFST as validation data. As in numerous countries, HFST observations are very numerous; these promising results pave the way to an important improvement of DSM products in all the countries of the world.
Keywords: digital soil mapping, improvement of digital soil mapping predictions, potential of using hand-feel soil texture, soil texture prediction
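A minimal sketch of the replacement step: each hand-feel class is mapped to a fitted distribution of lab values and sampled stochastically. The three classes and their (mean, sd) parameters are invented, and a clipped normal stands in for whatever PDF form the study actually fitted.

```python
import numpy as np

rng = np.random.default_rng(8)
# Invented per-class (mean, sd) of lab-measured clay content (%), one pair
# per hand-feel texture class, standing in for the study's fitted PDFs.
class_pdf = {"sandy_loam": (12.0, 3.0), "loam": (20.0, 4.0), "clay_loam": (32.0, 5.0)}

def hfst_to_last(hfst_labels):
    """Replace each ordinal HFST class by a value drawn at random from that
    class's distribution (clipped normal here, as a sketch)."""
    out = np.empty(len(hfst_labels))
    for i, lab in enumerate(hfst_labels):
        mu, sd = class_pdf[lab]
        out[i] = np.clip(rng.normal(mu, sd), 0.0, 100.0)
    return out

obs = ["loam", "loam", "clay_loam", "sandy_loam"]
print(hfst_to_last(obs).round(1))   # one of many stochastic replicates
```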
Procedia PDF Downloads 224
618 Genetic Diversity and Discovery of Unique SNPs in Five Country Cultivars of Sesamum indicum by Next-Generation Sequencing
Authors: Nam-Kuk Kim, Jin Kim, Soomin Park, Changhee Lee, Mijin Chu, Seong-Hun Lee
Abstract:
In this study, we conducted whole-genome re-sequencing of 10 cultivars originating from five countries, including Korea, China, India, Pakistan, and Ethiopia, with the Sesamum indicum (Zhongzhi No. 13) genome as the reference. Almost 80% of the whole genome sequence of the reference could be covered by the sequenced reads. Numerous SNPs and InDels were detected by bioinformatic analysis. Among these variants, 266,051 SNPs were identified as unique to countries. Pakistan and Ethiopia had high densities of SNPs compared to the other countries. Three main clusters (cluster 1: Korea, cluster 2: Pakistan and India, cluster 3: Ethiopia and China) were recovered by neighbor-joining analysis using all variants. Interestingly, some variants were detected in the DGAT1 (diacylglycerol O-acyltransferase 1) and FADS (fatty acid desaturase) genes, which are known to be related to fatty acid synthesis and metabolism. These results can provide useful information to understand regional characteristics and to develop DNA markers for origin discrimination of sesame.
Keywords: Sesamum indicum, NGS, SNP, DNA marker
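A minimal sketch of the country-unique SNP filter: from a genotype table, keep variants whose alternate allele is carried by cultivars of exactly one country. The table and sample names are invented.

```python
import pandas as pd

# Hypothetical genotype table: one row per SNP, one column per cultivar;
# 1 = alternate allele present, 0 = reference allele.
geno = pd.DataFrame(
    {"KOR_1": [1, 0, 0, 1], "KOR_2": [1, 0, 0, 0],
     "CHN_1": [0, 1, 0, 1], "CHN_2": [0, 1, 0, 0],
     "ETH_1": [0, 0, 1, 1], "ETH_2": [0, 0, 1, 1]},
    index=["snp1", "snp2", "snp3", "snp4"],
)
country = {c: c.split("_")[0] for c in geno.columns}

def unique_to_one_country(row):
    carriers = {country[c] for c in row.index if row[c] == 1}
    return len(carriers) == 1   # variant carried in exactly one country

mask = geno.apply(unique_to_one_country, axis=1)
print(geno.index[mask].tolist())   # -> ['snp1', 'snp2', 'snp3']
```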
Procedia PDF Downloads 327
617 Numerical Predictions of Trajectory Stability of a High-Speed Water-Entry and Water-Exit Projectile
Authors: Lin Lu, Qiang Li, Tao Cai, Pengjun Zhang
Abstract:
In this study, a detailed analysis of the trajectory stability and flow characteristics of a high-speed projectile during the water-entry and water-exit process has been conducted numerically. The Zwart-Gerber-Belamri (Z-G-B) cavitation model and the SST k-ω turbulence model based on the Reynolds-Averaged Navier-Stokes (RANS) method are employed. The numerical methodology is validated by comparing experimental photographs of the cavitation shape and the experimental underwater velocity with the numerical simulation results. Based on this methodology, the influences of rotational speed and of the water-entry and water-exit angles of the projectile on the trajectory stability and flow characteristics have been investigated in detail. The variations of the projectile trajectory and total resistance have been examined. In addition, the cavitation characteristics of water-entry and water-exit have been presented and analyzed. Results show that rotation of the projectile may not be a practical means of achieving stability during water-entry and water-exit. Furthermore, there ought to be a critical water-entry angle for the water-entry stability of a practical projectile. The impact of the water-exit angle on the trajectory stability and cavity phenomenon is not as remarkable as that of the water-entry angle.
Keywords: cavitation characteristics, high-speed projectile, numerical predictions, trajectory stability, water-entry, water-exit
Procedia PDF Downloads 136
616 Enhancing Sell-In and Sell-Out Forecasting Using Ensemble Machine Learning Method
Authors: Vishal Das, Tianyi Mao, Zhicheng Geng, Carmen Flores, Diego Pelloso, Fang Wang
Abstract:
Accurate sell-in and sell-out forecasting is a ubiquitous problem in the retail industry and an important element of any demand planning activity. As a global food and beverage company, Nestlé has hundreds of products in each geographical location that it operates in. Each product has its sell-in and sell-out time series data, which are forecasted on a weekly and monthly scale for demand and financial planning. To address this challenge, Nestlé Chile, in collaboration with the Amazon Machine Learning Solutions Lab, has developed an in-house solution using machine learning models for forecasting. Similar products are combined such that there is one model for each product category. In this way, the models learn from a larger set of data, and there are fewer models to maintain. The solution is scalable to all product categories and is developed to be flexible enough to include any new product or eliminate any existing product in a product category based on requirements. We show how we can use the machine learning development environment on Amazon Web Services (AWS) to explore a set of forecasting models and create business intelligence dashboards that can be used with the existing demand planning tools in Nestlé. We explored recent deep learning networks (DNN), which show promising results for a variety of time series forecasting problems. Specifically, we used a DeepAR autoregressive model that can group similar time series together and provide robust predictions. To further enhance the accuracy of the predictions and include domain-specific knowledge, we designed an ensemble approach using DeepAR and an XGBoost regression model. As part of the ensemble approach, we interlinked the sell-out and sell-in information to ensure that a future sell-out influences the current sell-in predictions. Our approach outperforms the benchmark statistical models by more than 50%. The machine learning (ML) pipeline implemented in the cloud is currently being extended to other product categories and is being adopted by other geomarkets.
Keywords: sell-in and sell-out forecasting, demand planning, DeepAR, retail, ensemble machine learning, time-series
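A minimal sketch of one way to ensemble the two models: choose a convex blend weight on a validation window by minimizing MSE. The forecast arrays are invented placeholders, and the paper's actual ensembling scheme may differ.

```python
import numpy as np

# Hypothetical out-of-sample weekly forecasts for one product category:
# deepar_pred from a probabilistic DeepAR-style model, xgb_pred from an
# XGBoost regressor whose features include the sell-out signal.
actual = np.array([120.0, 135.0, 128.0, 150.0, 142.0, 160.0])
deepar_pred = np.array([110.0, 140.0, 120.0, 155.0, 150.0, 150.0])
xgb_pred = np.array([125.0, 130.0, 135.0, 145.0, 138.0, 170.0])

# Pick the convex blend weight that minimizes validation MSE
weights = np.linspace(0, 1, 101)
mse = [np.mean((w * deepar_pred + (1 - w) * xgb_pred - actual) ** 2)
       for w in weights]
w_best = weights[int(np.argmin(mse))]
ensemble = w_best * deepar_pred + (1 - w_best) * xgb_pred
print(f"best weight on DeepAR: {w_best:.2f}, ensemble MSE: {min(mse):.1f}")
```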
Procedia PDF Downloads 273
615 Comparisons of Co-Seismic Gravity Changes between GRACE Observations and the Predictions from the Finite-Fault Models for the 2012 Mw = 8.6 Indian Ocean Earthquake Off-Sumatra
Authors: Armin Rahimi
Abstract:
The Gravity Recovery and Climate Experiment (GRACE) has been a very successful project in determining mass redistribution within the Earth system. Large deformations caused by earthquakes are in the high-frequency band. Unfortunately, GRACE is only capable of providing reliable estimates of gravitational changes in the low-to-medium frequency band. In this study, we computed the gravity changes after the 2012 Mw 8.6 Indian Ocean earthquake off-Sumatra using the GRACE Level-2 monthly spherical harmonic (SH) solutions released by the University of Texas Center for Space Research (UTCSR). Moreover, we calculated gravity changes using different fault models derived from teleseismic data. The model predictions showed non-negligible discrepancies in gravity changes. However, after removing high-frequency signals using 350 km Gaussian filtering, commensurate with the GRACE spatial resolution, the discrepancies vanished: the spatial patterns of total gravity changes predicted from all slip models became similar at the spatial resolution attainable by GRACE observations, and the predicted gravity changes were consistent with the GRACE-detected gravity changes. Nevertheless, the fault models, which give different slip amplitudes, lead to proportionally different amplitudes in the predicted gravity changes.
Keywords: undersea earthquake, GRACE observation, gravity change, dislocation model, slip distribution
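In practice, GRACE smoothing is applied to the spherical-harmonic coefficients (Jekeli-type Gaussian averaging); the sketch below only illustrates the effect in the spatial domain, attenuating a sharp synthetic co-seismic dipole with an assumed 350 km Gaussian kernel.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

rng = np.random.default_rng(2)
# Hypothetical 1-degree gravity-change grid (microGal): a sharp co-seismic
# dipole plus small-scale noise.
grid = rng.normal(0, 0.5, (60, 60))
grid[28:31, 28:31] += 8.0   # positive lobe
grid[31:34, 28:31] -= 8.0   # negative lobe

sigma_cells = 350.0 / 111.0   # ~350 km expressed in 1-degree cells
smoothed = gaussian_filter(grid, sigma=sigma_cells)
print(f"peak before: {grid.max():.1f}, after: {smoothed.max():.2f} microGal")
```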
Procedia PDF Downloads 355
614 Some Accuracy Related Aspects in Two-Fluid Hydrodynamic Sub-Grid Modeling of Gas-Solid Riser Flows
Authors: Joseph Mouallem, Seyed Reza Amini Niaki, Norman Chavez-Cussy, Christian Costa Milioli, Fernando Eduardo Milioli
Abstract:
Sub-grid closures for filtered two-fluid models (fTFM) useful in large scale simulations (LSS) of riser flows can be derived from highly resolved simulations (HRS) with microscopic two-fluid modeling (mTFM). Accurate sub-grid closures require accurate mTFM formulations as well as accurate correlation of relevant filtered parameters to suitable independent variables. This article deals with both of those issues. The accuracy of mTFM is addressed by assessing the impact of gas sub-grid turbulence on HRS filtered predictions. An effect akin to gas turbulence is artificially inserted by means of a stochastic forcing procedure implemented in physical space over the momentum conservation equation of the gas phase. The correlation issue is addressed by introducing a three-filtered-variable correlation analysis (three-marker analysis) performed under a variety of macro-scale conditions typical of risers. While the more elaborate correlation procedure clearly improved accuracy, accounting for gas sub-grid turbulence had no significant impact on predictions.
Keywords: fluidization, gas-particle flow, two-fluid model, sub-grid models, filtered closures
Procedia PDF Downloads 124
613 A Predictive Analytics Approach to Project Management: Reducing Project Failures in Web and Software Development Projects
Authors: Tazeen Fatima
Abstract:
The use of project management in web and software development projects is very significant. It has been observed that even with the application of effective project management, projects usually do not complete their lifecycle and fail. To minimize these failures, key performance indicators (KPIs) have been introduced in previous studies to counter project failures. However, there are always gaps and problems in the KPIs identified. Despite incessant efforts at technical and managerial levels, projects still fail. There is no substantial approach to identify and avoid these failures at the very beginning of the project lifecycle. In this study, we aim to answer these research problems by analyzing the concept of predictive analytics, a specialized technology that is very easy to use in this era of computation. Project organizations can use data gathering, compute power, and modern tools to render efficient predictions. The research aims to identify such a predictive analytics approach. The core objective of the study was to reduce failures and introduce effective implementation of project management principles. Existing predictive analytics methodologies, tools, and solution providers were also analyzed. Relevant data was gathered from projects and analyzed via predictive techniques to make predictions well in advance, enabling effective project management in the web and software development industry.
Keywords: project management, predictive analytics, predictive analytics methodology, project failures
Procedia PDF Downloads 347
612 An Application for Risk of Crime Prediction Using Machine Learning
Authors: Luis Fonseca, Filipe Cabral Pinto, Susana Sargento
Abstract:
The increase of the world population, especially in large urban centers, has resulted in new challenges, particularly with the control and optimization of public safety. Thus, in the present work, a solution is proposed for the prediction of criminal occurrences in a city based on historical data of incidents and demographic information. The entire research and implementation are presented, starting with the data collection from its original source, the treatment and transformations applied to the data, and the choice, evaluation, and implementation of the machine learning model, up to the application layer. Classification models are implemented to predict criminal risk for a given time interval and location. Machine learning algorithms such as Random Forest, Neural Networks, K-Nearest Neighbors, and Logistic Regression are used to predict occurrences, and their performance is compared according to the data processing and transformation used. The results show that the use of machine learning techniques helps to anticipate criminal occurrences, contributing to the reinforcement of public security. Finally, the models were implemented on a platform that provides an API to enable other entities to make requests for predictions in real time. An application is also presented where the criminal predictions can be shown visually.
Keywords: crime prediction, machine learning, public safety, smart city
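A minimal sketch of comparing two of the listed classifiers on a crime-risk task, cross-validated by AUC. The features and labels below are synthetic placeholders for the incident and demographic data described.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(4)
n = 3000
# Hypothetical features per (cell, hour): past incident count, hour of day,
# population density, weekend flag.
X = np.column_stack([rng.poisson(2, n), rng.integers(0, 24, n),
                     rng.normal(5000, 1500, n), rng.integers(0, 2, n)])
# Synthetic "incident occurred" label tied to history and late-night hours
logit = 0.6 * X[:, 0] + 0.5 * (X[:, 1] > 20) - 2.0
y = rng.random(n) < 1 / (1 + np.exp(-logit))

for name, clf in [("random forest", RandomForestClassifier(n_estimators=200)),
                  ("logistic regression", LogisticRegression(max_iter=1000))]:
    auc = cross_val_score(clf, X, y, cv=5, scoring="roc_auc")
    print(f"{name}: AUC = {auc.mean():.3f}")
```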
Procedia PDF Downloads 111
611 Solar Radiation Time Series Prediction
Authors: Cameron Hamilton, Walter Potter, Gerrit Hoogenboom, Ronald McClendon, Will Hobbs
Abstract:
A model was constructed to predict the amount of solar radiation that will make contact with the surface of the earth at a given location an hour into the future. This project was supported by the Southern Company to determine at what specific times during a given day of the year solar panels could be relied upon to produce energy in sufficient quantities. Due to its ability as a universal function approximator, an artificial neural network was used to estimate the nonlinear pattern of solar radiation, which utilized measurements of weather conditions collected at the Griffin, Georgia weather station as inputs. A number of network configurations and training strategies were utilized, though a multilayer perceptron with a variety of hidden nodes trained with the resilient propagation algorithm consistently yielded the most accurate predictions. In addition, a modeled DNI field and adjacent weather station data were used to bolster prediction accuracy. In later trials, the solar radiation field was preprocessed with a discrete wavelet transform with the aim of removing noise from the measurements. The current model provides predictions of solar radiation with a mean square error of 0.0042, though ongoing efforts are being made to further improve the model’s accuracy.
Keywords: artificial neural networks, resilient propagation, solar radiation, time series forecasting
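A minimal sketch of the wavelet preprocessing step using PyWavelets: decompose, soft-threshold the detail coefficients with the universal threshold, and reconstruct. The irradiance curve, wavelet choice (db4), and decomposition level are assumptions, not the paper's settings.

```python
import numpy as np
import pywt

rng = np.random.default_rng(6)
t = np.linspace(0, 24, 256)
# Hypothetical daily irradiance curve with cloud-like measurement noise
clean = np.clip(900 * np.sin(np.pi * (t - 6) / 12), 0, None)
noisy = clean + rng.normal(0, 60, t.size)

# Discrete wavelet transform; soft-threshold the detail coefficients
coeffs = pywt.wavedec(noisy, "db4", level=4)
sigma = np.median(np.abs(coeffs[-1])) / 0.6745        # MAD noise estimate
thr = sigma * np.sqrt(2 * np.log(noisy.size))          # universal threshold
coeffs[1:] = [pywt.threshold(c, thr, mode="soft") for c in coeffs[1:]]
denoised = pywt.waverec(coeffs, "db4")

print(f"noise RMS before: {np.std(noisy - clean):.1f}, "
      f"after: {np.std(denoised - clean):.1f} W/m^2")
```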
Procedia PDF Downloads 384
610 Biomarkers for Rectal Adenocarcinoma Identified by Lipidomic and Bioinformatic
Authors: Patricia O. Carvalho, Marcia C. F. Messias, Laura Credidio, Carlos A. R. Martinez
Abstract:
A lipidomic strategy can provide important information regarding cancer pathogenesis mechanisms and could reveal new biomarkers to enable early diagnosis of rectal adenocarcinoma (RAC). This study set out to evaluate lipid peroxidation biomarkers and the lipidomic signature by gas chromatography (GC) and electrospray ionization-qToF-mass spectrometry (ESI-qToF-MS), combined with multivariate data analysis, in plasma from 23 RAC patients (early- or advanced-stage cancer) and 18 healthy controls. The most abundant ions identified in the RAC patients were those of phosphatidylcholine (PC) and phosphatidylethanolamine (PE), while those of lysophosphatidylcholine (LPC), identified as LPC (16:1), LPC (18:1) and LPC (18:2), were down-regulated. The LPC plasmalogen containing palmitoleic acid, LPC (P-16:1), which had the highest VIP score, showed a decreasing tendency in the cancer patients. Malondialdehyde (MDA) plasma levels were higher in patients with advanced cancer (stages III/IV) than in the early-stage and healthy groups (p<0.05). No differences in F2-isoprostane levels were observed between these groups. This study shows that the reduction in plasma levels of LPC plasmalogens, associated with an increase in MDA levels, may indicate increased oxidative stress in these patients, and identifies the metabolite LPC (P-16:1) as a new biomarker for RAC.
Keywords: biomarkers, lipidomic, plasmalogen, rectal adenocarcinoma
Procedia PDF Downloads 230
609 Modeling of Ductile Fracture Using Stress-Modified Critical Strain Criterion for Typical Pressure Vessel Steel
Authors: Carlos Cuenca, Diego Sarzosa
Abstract:
Ductile fracture occurs by the mechanism of void nucleation, void growth, and coalescence. Potential sites for initiation are second-phase particles or non-metallic inclusions. Modelling ductile damage at the microscopic level is a very difficult and complex task for engineers. Therefore, conservative predictions of ductile failure using simple models are necessary during the design and optimization of critical structures like pressure vessels and pipelines. Nowadays, it is well known that the initiation phase is strongly influenced by the stress triaxiality and plastic deformation at the microscopic level. Thus, a simple model used to study ductile failure under multiaxial stress conditions is the Stress-Modified Critical Strain (SMCS) approach. Ductile rupture has been studied for a structural steel under different stress triaxiality conditions using the SMCS method. Experimental tests are carried out to characterize the relation between stress triaxiality and equivalent plastic strain using notched round bars. After calibration of the plasticity and damage properties, predictions are made for low-constraint bending specimens with and without side grooves. The evolution of the stress/strain fields is compared between the different geometries. Advantages and disadvantages of the SMCS methodology are discussed.
Keywords: damage, SMCS, SEB, steel, failure
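A minimal sketch of the SMCS idea: failure initiates where the accumulated equivalent plastic strain exceeds a critical strain that decays with stress triaxiality. The exponential (Rice-Tracey-like) form, its constants, and the loading history below are assumptions for illustration, not the paper's calibration.

```python
import numpy as np

def critical_strain(triaxiality, alpha=1.0, beta=1.5):
    """SMCS-style critical strain (assumed Rice-Tracey-like form):
    eps_cr = alpha * exp(-beta * sigma_m / sigma_e)."""
    return alpha * np.exp(-beta * triaxiality)

# Hypothetical history at a notch root: triaxiality and accumulated
# equivalent plastic strain at successive load steps.
T = np.array([0.4, 0.7, 1.0, 1.2, 1.3])
eps_p = np.array([0.02, 0.08, 0.18, 0.30, 0.45])

eps_cr = critical_strain(T)
failed = eps_p >= eps_cr
print("critical strain:", eps_cr.round(3))
print("failure initiated at step:",
      int(np.argmax(failed)) if failed.any() else None)
```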
Procedia PDF Downloads 297