Search results for: root uptake models
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 8343


7563 Probabilistic Models to Evaluate Seismic Liquefaction In Gravelly Soil Using Dynamic Penetration Test and Shear Wave Velocity

Authors: Nima Pirhadi, Shao Yong Bo, Xusheng Wan, Jianguo Lu, Jilei Hu

Abstract:

Gravels and gravelly soils are commonly assumed to be non-liquefiable because of their high conductivity and small modulus; however, the occurrence of this phenomenon in some historical earthquakes, especially recent events such as the 2008 Wenchuan (Mw = 7.9), 2014 Cephalonia, Greece (Mw = 6.1), and 2016 Kaikoura, New Zealand (Mw = 7.8) earthquakes, has made risk assessment and hazard analysis of seismic gravelly soil liquefaction an essential consideration. Due to the limitations in sampling and laboratory testing of this type of soil, in situ tests and site exploration of case histories are the most accepted procedures. Of all in situ tests, the dynamic penetration test (DPT), which is well known as the Chinese dynamic penetration test, and the shear wave velocity (Vs) test have demonstrated high performance in evaluating seismic gravelly soil liquefaction. However, the lack of a sufficient number of case histories remains an essential limitation for developing new models. This study first investigates recent earthquakes that caused liquefaction in gravelly soils to collect new data. It then adds these data to the dataset available in the literature to extend it, and finally develops new models to assess seismic gravelly soil liquefaction. To validate the presented models, their results are compared with other available models. The results show the reasonable performance of the proposed models and the critical effect of gravel content (GC, %) on the assessment.
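
A minimal sketch of the kind of probabilistic model described above, using logistic regression on synthetic case histories; the predictor names (DPT blow count, Vs, gravel content, cyclic stress ratio) and all data are assumptions for illustration, not the paper's dataset or model form.

```python
# Illustrative logistic-regression sketch of a probabilistic liquefaction model.
# Predictor names (DPT blow count N120, Vs, gravel content GC, cyclic stress ratio CSR)
# and the synthetic case histories are assumptions, not the paper's data.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(42)
n = 200
X = np.column_stack([
    rng.uniform(2, 40, n),      # DPT blow count N120
    rng.uniform(120, 400, n),   # shear wave velocity Vs (m/s)
    rng.uniform(10, 70, n),     # gravel content GC (%)
    rng.uniform(0.05, 0.5, n),  # cyclic stress ratio CSR
])
# Synthetic labels: liquefaction more likely for low resistance and high demand
logit = 2.0 - 0.08 * X[:, 0] - 0.01 * X[:, 1] + 0.02 * X[:, 2] + 6.0 * X[:, 3]
y = (rng.random(n) < 1.0 / (1.0 + np.exp(-logit))).astype(int)

model = LogisticRegression(max_iter=1000).fit(X, y)
new_site = np.array([[12.0, 180.0, 45.0, 0.30]])
print("Probability of liquefaction:", model.predict_proba(new_site)[0, 1])
```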

Keywords: liquefaction, gravel, dynamic penetration test, shear wave velocity

Procedia PDF Downloads 201
7562 Predictive Models for Compressive Strength of High Performance Fly Ash Cement Concrete for Pavements

Authors: S. M. Gupta, Vanita Aggarwal, Som Nath Sachdeva

Abstract:

The work reported in this paper is an experimental study conducted on High Performance Concrete (HPC) with superplasticizer, with the aim of developing models suitable for predicting the compressive strength of HPC mixes. In this study, the effect of varying proportions of fly ash (0% to 50%, in 10% increments) on the compressive strength of high performance concrete has been evaluated. The mix designs studied were M30, M40 and M50, to compare the effect of fly ash addition on the properties of these concrete mixes. In all, eighteen concrete mixes were designed: three as conventional concretes for the three grades under discussion and fifteen as HPC with varying percentages of fly ash. The concrete mix design was carried out in accordance with the Indian standard recommended guidelines, i.e. IS: 10262. All the concrete mixes were studied in terms of compressive strength at 7 days, 28 days, 90 days and 365 days. All the materials used were kept the same throughout the study to allow a direct comparison of results. The models for compressive strength prediction were developed using the Linear Regression (LR), Artificial Neural Network (ANN) and Leave One Out Validation (LOOV) methods.
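
A brief sketch of the strength-prediction workflow described above, combining linear regression with leave-one-out validation; the feature names (fly ash %, water-binder ratio, curing age) and the synthetic data are placeholders, not the experimental values reported in the paper.

```python
# Sketch of strength prediction with linear regression and leave-one-out validation.
# The feature set and mix data below are illustrative, not the study's measurements.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import LeaveOneOut, cross_val_predict
from sklearn.metrics import r2_score

rng = np.random.default_rng(0)
n = 18
X = np.column_stack([
    rng.choice([0, 10, 20, 30, 40, 50], n),  # fly ash replacement (%)
    rng.uniform(0.30, 0.45, n),              # water-binder ratio
    rng.choice([7, 28, 90, 365], n),         # curing age (days)
])
y = 60 - 0.2 * X[:, 0] - 40 * X[:, 1] + 0.03 * X[:, 2] + rng.normal(0, 2, n)

# Leave-one-out predictions: each mix is predicted by a model trained on the others
pred = cross_val_predict(LinearRegression(), X, y, cv=LeaveOneOut())
print("LOOV R^2:", round(r2_score(y, pred), 3))
```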

Keywords: high performance concrete, fly ash, concrete mixes, compressive strength, strength prediction models, linear regression, ANN

Procedia PDF Downloads 445
7561 Evaluating the Suitability and Performance of Dynamic Modulus Predictive Models for North Dakota’s Asphalt Mixtures

Authors: Duncan Oteki, Andebut Yeneneh, Daba Gedafa, Nabil Suleiman

Abstract:

Most agencies lack the equipment required to measure the dynamic modulus (|E*|) of asphalt mixtures, necessitating the use of predictive models. This study compared measured |E*| values for nine North Dakota asphalt mixes with predictions from the original Witczak, modified Witczak, and Hirsch models. The influence of temperature on the |E*| models was investigated, and Pavement ME simulations were conducted using measured |E*| and predictions from the most accurate |E*| model. The results revealed that the original Witczak model yielded the lowest Se/Sy and highest R² values, indicating the lowest bias and highest accuracy, while the poorest overall performance was exhibited by the Hirsch model. Using predicted |E*| as inputs in Pavement ME generated conservative distress predictions compared to using measured |E*|. The original Witczak model was recommended for predicting |E*| for low-reliability pavements in North Dakota.
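
A short sketch of the goodness-of-fit statistics (Se/Sy and R²) used to rank the |E*| models, assuming one common form of the Se/Sy-based R² used in pavement ME practice; the measured and predicted values are placeholders.

```python
# Sketch of the Se/Sy and R^2 statistics used to compare measured and predicted |E*|.
# The arrays and the number of fitted parameters p are assumptions for illustration.
import numpy as np

measured = np.array([1200., 3400., 5200., 8000., 11000., 14500.])   # |E*| in MPa
predicted = np.array([1100., 3600., 5000., 8300., 10500., 15000.])  # from a Witczak-type model

n, p = len(measured), 1
se = np.sqrt(np.sum((measured - predicted) ** 2) / (n - p))  # standard error of estimate
sy = np.std(measured, ddof=1)                                # std. deviation of measurements
r2 = 1.0 - (n - p) / (n - 1) * (se / sy) ** 2                # one common Se/Sy-based R^2 form

print(f"Se/Sy = {se / sy:.3f}, R^2 = {r2:.3f}")  # lower Se/Sy and higher R^2 => better model
```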

Keywords: asphalt mixture, binder, dynamic modulus, MEPDG, pavement ME, performance, prediction

Procedia PDF Downloads 48
7560 Identifying and Optimizing the Critical Excipients in Moisture Activated Dry Granulation Process for Two Anti TB Drugs of Different Aqueous Solubilities

Authors: K. Srujana, Vinay U. Rao, M. Sudhakar

Abstract:

Isoniazid (INH), a freely water-soluble drug, and pyrazinamide (Z), a practically water-insoluble drug, both first-line anti-tubercular (TB) drugs, were identified as candidates for optimizing the Moisture Activated Dry Granulation (MADG) process. The work focuses on identifying the effect of binder type and concentration, as well as the effect of magnesium stearate level, on the critical quality attributes of disintegration time (DT) and in vitro dissolution when the tablets are processed by the MADG process. Also, the levels of drug, binder and fluid addition during the agglomeration stage of the MADG process were evaluated and optimized. For INH, it was identified that for tablets with HPMC as binder at both the 2% w/w and 5% w/w levels, and magnesium stearate up to 1% w/w as lubricant, the DT is within 1 minute and the dissolution rate is the fastest (> 80% in 15 minutes) as compared to when PVP or pregelatinized starch is used as binder. Regarding the process, fast disintegrating and rapidly dissolving tablets are obtained when the levels of drug, binder and fluid uptake in the agglomeration stage are 25% w/w, 0% w/w binder and 0.033% w/w, respectively. At the other two levels of these three ingredients, the DT is significantly impacted and dissolution is also slower. For pyrazinamide, it was identified that for tablets with 2% w/w each of PVP as binder and croscarmellose sodium as disintegrant, the DT is within 2 minutes and the dissolution rate is the fastest (> 80% in 15 minutes) as compared to when HPMC or pregelatinized starch is used as binder. This may be attributed to the fact that PVP may act as a solubilizer for the practically insoluble pyrazinamide. Regarding the process, fast dispersing and rapidly disintegrating tablets are obtained when the levels of drug, binder and fluid uptake in the agglomeration stage are 10% w/w, 25% w/w binder and 1% w/w, respectively. At the other two levels of these three ingredients, the DT is significantly impacted and dissolution is comparatively slower and less complete.

Keywords: agglomeration stage, isoniazide, MADG, moisture distribution stage, pyrazinamide

Procedia PDF Downloads 239
7559 Plant Microbiota of Coastal Halophyte Salicornia Ramosissima

Authors: Isabel N. Sierra-Garcia, Maria J. Ferreira, Sandro Figuereido, Newton Gomes, Helena Silva, Angela Cunha

Abstract:

Plant-associated microbial communities are considered crucial in the adaptation of halophytes to coastal environments. The plant microbiota can be horizontally acquired from the environment or vertically transmitted from generation to generation via seeds. Recruitment of the microbial communities by the plant is affected by geographical location, soil source, host genotype, and cultivation practice. There is limited knowledge of the microbial communities in halophytes and of the influence of biotic and abiotic factors on them. In this work, the microbiota associated with the halophyte Salicornia ramosissima was investigated to determine whether the structure of bacterial communities is influenced by host genotype or soil source. For this purpose, two contrasting sites where S. ramosissima is established in the estuarine system of the Ria de Aveiro were investigated. One site corresponds to a natural salt marsh where S. ramosissima plants are present (wild plants), and the other site is a former salt pan that is nowadays subject to intensive crop production of S. ramosissima (crop plants). Bacterial communities from the rhizosphere, seeds and root endosphere of S. ramosissima from both sites were investigated by sequencing the bacterial 16S rRNA gene using the Illumina MiSeq platform. The analysis of the sequences showed that the three plant-associated compartments, rhizosphere, root endosphere, and seed endosphere, harbor distinct microbiomes. However, bacterial richness and diversity were higher in seeds of wild plants, followed by the rhizosphere at both sites, while seeds at the crop site had the lowest diversity. Beta diversity measures indicated that bacterial communities in the root endosphere and seeds were more similar between wild and crop plants, in contrast to the rhizospheres, which differed by site, indicating that recruitment of similar bacterial communities by the plant genotype is active irrespective of the site. Moreover, bacterial communities from the root endosphere and rhizosphere were phylogenetically more similar at both sites, but the phylogenetic composition of seeds at the wild and crop sites was distinct. These results indicate that cultivation practices affect the seed microbiome; however, minimal vertical transmission of bacteria from seeds to adult plants is expected. Seeds from the crop site showed higher abundances of the Kushneria and Zunongwangia genera. Bacterial members of the classes Alphaproteobacteria and Bacteroidia were the most ubiquitous across sites and compartments and might encompass members of the core microbiome. These findings indicate that bacterial communities associated with S. ramosissima are more influenced by host genotype than by local abiotic factors or cultivation practices. This study provides a better understanding of the composition of the plant microbiota in S. ramosissima, which is essential to predict the interactions between the plant and its associated microbial communities and their effects on plant health. This knowledge is useful for manipulating these microbial communities to enhance the health and productivity of this commercially important plant.

Keywords: halophytes, plant microbiome, Salicornia ramosissima, agriculture

Procedia PDF Downloads 169
7558 Domain-Specific Ontology-Based Knowledge Extraction Using R-GNN and Large Language Models

Authors: Andrey Khalov

Abstract:

The rapid proliferation of unstructured data in IT infrastructure management demands innovative approaches for extracting actionable knowledge. This paper presents a framework for ontology-based knowledge extraction that combines relational graph neural networks (R-GNN) with large language models (LLMs). The proposed method leverages the DOLCE framework as the foundational ontology, extending it with concepts from ITSMO for domain-specific applications in IT service management and outsourcing. A key component of this research is the use of transformer-based models, such as DeBERTa-v3-large, for automatic entity and relationship extraction from unstructured texts. Furthermore, the paper explores how transfer learning techniques can be applied to fine-tune large language models (LLaMA) to generate synthetic datasets that improve precision in BERT-based entity recognition and ontology alignment. The resulting IT Ontology (ITO) serves as a comprehensive knowledge base that integrates domain-specific insights from ITIL processes, enabling more efficient decision-making. Experimental results demonstrate significant improvements in knowledge extraction and relationship mapping, offering a cutting-edge solution for enhancing cognitive computing in IT service environments.
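
An illustrative sketch of the entity-extraction step, with a publicly available NER checkpoint standing in for the fine-tuned DeBERTa-v3-large model described above; the example sentence, the company names, and the relation name used to populate the ontology are hypothetical.

```python
# Illustrative entity-extraction sketch. A public NER checkpoint stands in for the
# fine-tuned model described in the abstract, and the ontology relation is assumed.
from transformers import pipeline

ner = pipeline("ner", model="dslim/bert-base-NER", aggregation_strategy="simple")

text = "Contoso outsourced incident management for its Exchange platform to Fabrikam."
entities = ner(text)
for ent in entities:
    print(ent["entity_group"], ent["word"], round(float(ent["score"]), 3))

# Toy ontology population: pair consecutive ORG entities with a hypothetical relation
orgs = [e["word"] for e in entities if e["entity_group"] == "ORG"]
triples = [(orgs[i], "outsourcesTo", orgs[i + 1]) for i in range(len(orgs) - 1)]
print(triples)
```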

Keywords: ontology mapping, R-GNN, knowledge extraction, large language models, NER, knowledge graph

Procedia PDF Downloads 16
7557 Circular Economy Maturity Models: A Systematic Literature Review

Authors: Dennis Kreutzer, Sarah Müller-Abdelrazeq, Ingrid Isenhardt

Abstract:

Resource scarcity, energy transition and the planned climate neutrality pose enormous challenges for manufacturing companies. In order to achieve these goals and a holistic sustainable development, the European Union has listed the circular economy as part of the Circular Economy Action Plan. In addition to a reduction in resource consumption, reduced emissions of greenhouse gases and a reduced volume of waste, the principles of the circular economy also offer enormous economic potential for companies, such as the generation of new circular business models. However, many manufacturing companies, especially small and medium-sized enterprises, do not have the necessary capacity to plan their transformation. They need support and strategies on the path to circular transformation, because this change affects not only production but also the entire company. Maturity models offer an approach, as they enable companies to determine the current status of their transformation processes. In addition, companies can use the models to identify transformation strategies and thus promote the transformation process. While maturity models are established in other areas, e.g. IT or project management, only a few circular economy maturity models can be found in the scientific literature. The aim of this paper is to analyse the identified maturity models of the circular economy through a systematic literature review (SLR) and, besides other aspects, to check their completeness as well as their quality. Since the terms "maturity model" and "readiness model" are often used to assess the transformation process, this paper considers both types of models to provide a more comprehensive result. For this purpose, circular economy maturity models at the company (micro) level were identified from the literature, compared, and analysed with regard to their theoretical and methodological structure. A specific focus was placed, on the one hand, on the analysis of the business units considered in the respective models and, on the other hand, on the underlying metrics and indicators in order to determine the individual maturity level of the entire company. The results of the literature review show, for instance, a significant difference in the holism of their assessment framework. Only a few models include the entire company with supporting areas outside the value-creating core process, e.g. strategy and vision. Additionally, there are large differences in the number and type of indicators as well as their metrics. For example, most models often use subjective indicators and very few objective indicators in their surveys. It was also found that there are rarely well-founded thresholds between the levels. Based on the generated results, concrete ideas and proposals for a research agenda in the field of circular economy maturity models are made.

Keywords: maturity model, circular economy, transformation, metric, assessment

Procedia PDF Downloads 114
7556 Development of an Automatic Calibration Framework for Hydrologic Modelling Using Approximate Bayesian Computation

Authors: A. Chowdhury, P. Egodawatta, J. M. McGree, A. Goonetilleke

Abstract:

Hydrologic models are increasingly used as tools to predict stormwater quantity and quality from urban catchments. However, due to a range of practical issues, most models produce gross errors in simulating complex hydraulic and hydrologic systems. Difficulty in finding a robust approach for model calibration is one of the main issues. Though automatic calibration techniques are available, they are rarely used in common commercial hydraulic and hydrologic modelling software, e.g. MIKE URBAN. This is partly due to the need for a large number of parameters and large datasets in the calibration process. To overcome this practical issue, a framework for automatic calibration of a hydrologic model was developed on the R platform and is presented in this paper. The model was developed based on the time-area conceptualization. Four calibration parameters, namely initial loss, reduction factor, time of concentration and time-lag, were considered as the primary set of parameters. Using these parameters, automatic calibration was performed using Approximate Bayesian Computation (ABC). ABC is a simulation-based technique for performing Bayesian inference when the likelihood is intractable or computationally expensive to compute. To test its performance and usefulness, the technique was used to simulate three small catchments in Gold Coast. For comparison, simulation outcomes for the same three catchments from the commercial modelling software MIKE URBAN were used. The graphical comparison shows that the MIKE URBAN results lie within the upper and lower 95% credible intervals of the posterior predictions obtained via ABC. Statistical validation of the posterior runoff predictions using the coefficient of determination (CD), root mean square error (RMSE) and maximum error (ME) was found to be reasonable for the three study catchments. The main benefit of using ABC over MIKE URBAN is that ABC provides a posterior distribution for the runoff flow prediction, and therefore the associated uncertainty in predictions can be obtained. In contrast, MIKE URBAN just provides a point estimate. Based on the results of the analysis, it appears that the developed ABC framework performs well for automatic calibration.
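
A minimal rejection-ABC sketch, assuming a toy runoff function and synthetic rainfall in place of the time-area model and the Gold Coast data; the paper's framework was implemented in R, whereas this illustration is in Python.

```python
# Minimal rejection-ABC sketch (illustrative only). A toy runoff model and a synthetic
# "observed" hydrograph stand in for the time-area model and the study catchments.
import numpy as np

rng = np.random.default_rng(0)

def simulate_runoff(initial_loss, reduction_factor, rain):
    """Toy runoff model: subtract an initial loss, then scale the excess rainfall."""
    excess = np.clip(rain - initial_loss, 0.0, None)
    return reduction_factor * excess

rain = rng.gamma(2.0, 3.0, size=100)                         # synthetic rainfall series
observed = simulate_runoff(1.0, 0.6, rain) + rng.normal(0, 0.2, 100)

accepted = []
tolerance = 1.0
for _ in range(20000):
    theta = (rng.uniform(0, 5), rng.uniform(0, 1))           # draws from uniform priors
    sim = simulate_runoff(*theta, rain)
    if np.sqrt(np.mean((sim - observed) ** 2)) < tolerance:  # RMSE as the distance measure
        accepted.append(theta)

posterior = np.array(accepted)
print("accepted samples:", len(posterior))
print("posterior means (initial loss, reduction factor):", posterior.mean(axis=0))
```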

Keywords: automatic calibration framework, approximate bayesian computation, hydrologic and hydraulic modelling, MIKE URBAN software, R platform

Procedia PDF Downloads 309
7555 Enhancement of Radiosensitization by Aptamer 5TR1-Functionalized AgNCs for Triple-Negative Breast Cancer

Authors: Xuechun Kan, Dongdong Li, Fan Li, Peidang Liu

Abstract:

Triple-negative breast cancer (TNBC) is the most malignant subtype of breast cancer, with a poor prognosis, and radiotherapy is one of its main treatment methods. However, due to the obvious resistance of tumor cells to radiotherapy, high doses of ionizing radiation are required during treatment, which causes serious damage to normal tissues near the tumor. Therefore, how to overcome radiotherapy resistance and enhance the specific killing of tumor cells by radiation is a pressing clinical issue. Recent studies have shown that silver-based nanoparticles have strong radiosensitizing activity, and silver nanoclusters (AgNCs) also offer broad prospects for tumor-targeted radiosensitization therapy due to their ultra-small size, low toxicity or non-toxicity, self-fluorescence and strong photostability. Aptamer 5TR1 is a 25-base oligonucleotide aptamer that can specifically bind to mucin-1, which is highly expressed on the membrane surface of TNBC 4T1 cells, and can therefore be used as a highly efficient tumor-targeting molecule. In this study, AgNCs were synthesized on a DNA template based on the 5TR1 aptamer (NC-T5-5TR1), and their role as a targeted radiosensitizer in TNBC radiotherapy was investigated. The optimal DNA template was first screened by fluorescence emission spectroscopy, and NC-T5-5TR1 was prepared. NC-T5-5TR1 was characterized by transmission electron microscopy, ultraviolet-visible spectroscopy and dynamic light scattering. The inhibitory effect of NC-T5-5TR1 on cell activity was evaluated using the MTT method. Laser confocal microscopy was employed to observe NC-T5-5TR1 targeting 4T1 cells and to verify its self-fluorescence characteristics. The uptake of NC-T5-5TR1 by 4T1 cells was observed by dark-field imaging, and the uptake peak was evaluated by inductively coupled plasma mass spectrometry. The radiosensitization effect of NC-T5-5TR1 was evaluated through cell cloning and in vivo anti-tumor experiments. Annexin V-FITC/PI double-staining flow cytometry was utilized to detect the impact of the nanomaterials combined with radiotherapy on apoptosis. The results demonstrated that the particle size of NC-T5-5TR1 is about 2 nm, the UV-visible absorption spectrum verifies its successful construction, and it shows good dispersion. NC-T5-5TR1 significantly inhibited the activity of 4T1 cells and effectively targeted and fluoresced within 4T1 cells. The uptake of NC-T5-5TR1 in the tumor area reached its peak at 3 h. Compared with AgNCs without aptamer modification, NC-T5-5TR1 exhibited superior radiosensitization, and combined radiotherapy significantly inhibited the activity of 4T1 cells and tumor growth in 4T1 tumor-bearing mice. The apoptosis level with NC-T5-5TR1 combined with radiation was significantly increased. These findings provide important theoretical and experimental support for NC-T5-5TR1 as a radiosensitizer for TNBC.

Keywords: 5TR1 aptamer, silver nanoclusters, radiosensitization, triple-negative breast cancer

Procedia PDF Downloads 60
7554 PM10 Prediction and Forecasting Using CART: A Case Study for Pleven, Bulgaria

Authors: Snezhana G. Gocheva-Ilieva, Maya P. Stoimenova

Abstract:

Ambient air pollution with fine particulate matter (PM10) is a persistent, systematic problem in many countries around the world. The accumulation of a large number of measurements of both PM10 concentrations and the accompanying atmospheric factors allows for their statistical modeling to detect dependencies and forecast future pollution. This study applies the classification and regression trees (CART) method for building and analyzing PM10 models. In the empirical study, average daily air data for the city of Pleven, Bulgaria, for a period of 5 years are used. Predictors in the models are seven meteorological variables, time variables, as well as lagged PM10 variables and some lagged meteorological variables, delayed by 1 or 2 days with respect to the initial time series. The degree of influence of the predictors in the models is determined. The selected best CART models are used to forecast PM10 concentrations two days ahead of the last date in the modeling procedure and show very accurate results.
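
A brief sketch of a CART model with lagged predictors and a two-day-ahead forecast, in the spirit of the study; the meteorological variables and the synthetic daily series are illustrative, not the Pleven data.

```python
# Sketch of a CART (regression tree) model for PM10 with lagged predictors.
# Variable names and the synthetic daily series are illustrative placeholders.
import numpy as np
import pandas as pd
from sklearn.tree import DecisionTreeRegressor

rng = np.random.default_rng(7)
days = pd.date_range("2015-01-01", periods=1000, freq="D")
df = pd.DataFrame({
    "temp": 10 + 10 * np.sin(2 * np.pi * days.dayofyear / 365) + rng.normal(0, 2, 1000),
    "wind": rng.gamma(2, 1.5, 1000),
}, index=days)
df["pm10"] = 40 - 1.5 * df["wind"] - 0.5 * df["temp"] + rng.normal(0, 5, 1000)

# Lagged variables, delayed by 1 and 2 days
df["pm10_lag1"] = df["pm10"].shift(1)
df["pm10_lag2"] = df["pm10"].shift(2)
df["temp_lag1"] = df["temp"].shift(1)
df = df.dropna()

X, y = df[["temp", "wind", "pm10_lag1", "pm10_lag2", "temp_lag1"]], df["pm10"]
tree = DecisionTreeRegressor(max_depth=5, random_state=0).fit(X[:-2], y[:-2])

# Forecast the last two days and inspect predictor influence
print("2-day-ahead forecasts:", tree.predict(X[-2:]))
print("feature importances:", dict(zip(X.columns, tree.feature_importances_.round(3))))
```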

Keywords: cross-validation, decision tree, lagged variables, short-term forecasting

Procedia PDF Downloads 194
7553 JaCoText: A Pretrained Model for Java Code-Text Generation

Authors: Jessica Lopez Espejel, Mahaman Sanoussi Yahaya Alassan, Walid Dahhane, El Hassane Ettifouri

Abstract:

Pretrained transformer-based models have shown high performance in natural language generation tasks. However, a new wave of interest has surged: automatic programming language code generation. This task consists of translating natural language instructions into source code. Despite the fact that well-known pretrained models for language generation have achieved good performance in learning programming languages, effort is still needed in automatic code generation. In this paper, we introduce JaCoText, a model based on the Transformer neural network. It aims to generate Java source code from natural language text. JaCoText leverages the advantages of both natural language and code generation models. More specifically, we study some findings from the state of the art and use them to (1) initialize our model from powerful pretrained models, (2) explore additional pretraining on our Java dataset, (3) conduct experiments combining unimodal and bimodal data in training, and (4) scale the input and output length during the fine-tuning of the model. Experiments conducted on the CONCODE dataset show that JaCoText achieves new state-of-the-art results.

Keywords: java code generation, natural language processing, sequence-to-sequence models, transformer neural networks

Procedia PDF Downloads 285
7552 Development of a Turbulent Boundary Layer Wall-pressure Fluctuations Power Spectrum Model Using a Stepwise Regression Algorithm

Authors: Zachary Huffman, Joana Rocha

Abstract:

Wall-pressure fluctuations induced by the turbulent boundary layer (TBL) developed over aircraft are a significant source of aircraft cabin noise. Since the power spectral density (PSD) of these pressure fluctuations is directly correlated with the amount of sound radiated into the cabin, the development of accurate empirical models that predict the PSD has been an important ongoing research topic. The sound emitted can be represented by the pressure fluctuation term in the Reynolds-averaged Navier-Stokes (RANS) equations. Therefore, early TBL empirical models (including those from Lowson, Robertson, Chase, and Howe) were primarily derived by simplifying and solving the RANS equations for the pressure fluctuation and adding appropriate scales. Most subsequent models (including the Goody, Efimtsov, Laganelli, Smol’yakov, and Rackl and Weston models) were derived by modifying these early models or from physical principles. Overall, these models have had varying levels of accuracy, but, in general, they are most accurate under the specific Reynolds and Mach numbers they were developed for, while being less accurate under other flow conditions. Despite this, research into the possibility of using alternative methods for deriving the models has been rather limited. More recent studies have demonstrated that an artificial neural network model was more accurate than traditional models and could be applied more generally, but the accuracy of other machine learning techniques has not been explored. In the current study, an original model is derived using a stepwise regression algorithm in the statistical programming language R and TBL wall-pressure fluctuation PSD data gathered at the Carleton University wind tunnel. The theoretical advantage of a stepwise regression approach is that it automatically filters out redundant or uncorrelated input variables (through the process of feature selection), and it is computationally faster than machine learning. The main disadvantage is the potential risk of overfitting. The accuracy of the developed model is assessed by comparing it to independently sourced datasets.
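
A compact sketch of stepwise (forward) feature selection for a wall-pressure PSD model; the study used a stepwise algorithm in R, so the scikit-learn SequentialFeatureSelector and the placeholder flow variables below are stand-ins.

```python
# Sketch of forward stepwise feature selection for a wall-pressure PSD model.
# The candidate predictors and the synthetic PSD response are assumptions; the
# original study used R's stepwise regression rather than this Python selector.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.feature_selection import SequentialFeatureSelector

rng = np.random.default_rng(3)
n = 500
# Candidate (nondimensional) predictors: some informative, some redundant
U, delta, tau_w, nu, freq = rng.uniform(0.5, 2.0, (5, n))
X = np.column_stack([U, delta, tau_w, nu, freq, U * delta, rng.normal(size=n)])
log_psd = 2.0 * np.log(tau_w) - 1.5 * np.log(freq) + 0.5 * np.log(delta) + rng.normal(0, 0.1, n)

selector = SequentialFeatureSelector(
    LinearRegression(), n_features_to_select=3, direction="forward", cv=5
)
selector.fit(np.log(np.abs(X) + 1e-9), log_psd)
print("selected predictor columns:", np.where(selector.get_support())[0])
```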

Keywords: aircraft noise, machine learning, power spectral density models, regression models, turbulent boundary layer wall-pressure fluctuations

Procedia PDF Downloads 135
7551 Modeling of Tool Flank Wear in Finish Hard Turning of AISI D2 Using Genetic Programming

Authors: V. Pourmostaghimi, M. Zadshakoyan

Abstract:

The efficiency and productivity of finish hard turning can be enhanced impressively by utilizing accurate predictive models for cutting tool wear. The ability of genetic programming to present an accurate analytical model is a notable characteristic which makes it more applicable than other predictive modeling methods. In this paper, a genetic equation for modeling tool flank wear is developed using experimentally measured flank wear values and genetic programming during finish turning of hardened AISI D2. A series of tests was conducted over a range of cutting parameters, and the values of tool flank wear were measured. On the basis of the obtained results, a genetic model relating cutting parameters to tool flank wear was extracted. The accuracy of the genetically obtained model was assessed by using two statistical measures, the root mean square error (RMSE) and the coefficient of determination (R²). Evaluation results revealed that the presented genetic model predicted flank wear over the study area accurately (R² = 0.9902 and RMSE = 0.0102). These results allow concluding that the proposed genetic equation corresponds well with the experimental data and can be implemented in real industrial applications.
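
A sketch of deriving an analytical flank-wear equation by genetic programming, assuming the third-party gplearn package and made-up cutting-parameter ranges; the evolved expression and metrics will differ from the paper's model.

```python
# Sketch of symbolic regression for a flank-wear equation with genetic programming.
# Requires the third-party gplearn package; cutting-parameter ranges and the
# synthetic wear data are assumptions, not the paper's measurements.
import numpy as np
from gplearn.genetic import SymbolicRegressor
from sklearn.metrics import mean_squared_error, r2_score

rng = np.random.default_rng(5)
n = 60
speed = rng.uniform(80, 200, n)    # cutting speed (m/min)
feed = rng.uniform(0.05, 0.2, n)   # feed rate (mm/rev)
time = rng.uniform(1, 15, n)       # cutting time (min)
X = np.column_stack([speed, feed, time])
vb = 1e-4 * speed * np.sqrt(time) + 0.3 * feed + rng.normal(0, 0.005, n)  # flank wear VB (mm)

gp = SymbolicRegressor(population_size=500, generations=20,
                       function_set=("add", "sub", "mul", "div", "sqrt"),
                       parsimony_coefficient=0.001, random_state=0)
gp.fit(X, vb)

pred = gp.predict(X)
print("evolved expression:", gp._program)
print("RMSE:", round(np.sqrt(mean_squared_error(vb, pred)), 4),
      "R^2:", round(r2_score(vb, pred), 4))
```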

Keywords: cutting parameters, flank wear, genetic programming, hard turning

Procedia PDF Downloads 179
7550 Human Resource Utilization Models for Graceful Ageing

Authors: Chuang-Chun Chiou

Abstract:

In this study, a systematic framework of graceful ageing has been used to explore possible human resource utilization models for the purpose of graceful ageing. This framework is based on Chinese culture. We call it the ‘Nine-old’ target: ageing gracefully with feeding, accomplishment, usefulness, learning, entertainment, care, protection, dignity, and termination. This study focuses on two areas: accomplishment and usefulness. We examine the current initiatives and laws for promoting labor participation, that is, how to increase the labor force participation rate of the middle-aged as well as the elderly, and how to help the elderly achieve graceful ageing. We then present possible models that support graceful ageing.

Keywords: human resource utilization model, labor participation, graceful ageing, employment

Procedia PDF Downloads 390
7549 Development of Computational Approach for Calculation of Hydrogen Solubility in Hydrocarbons for Treatment of Petroleum

Authors: Abdulrahman Sumayli, Saad M. AlShahrani

Abstract:

For the hydrogenation process, knowing the solubility of hydrogen (H2) in hydrocarbons is critical to improving the efficiency of the process. We investigated the computation of H2 solubility in four heavy crude oil feedstocks using machine learning techniques. Temperature, pressure, and feedstock type were considered as the inputs to the models, while the hydrogen solubility was the sole response. Specifically, we employed three different models: Support Vector Regression (SVR), Gaussian process regression (GPR), and Bayesian ridge regression (BRR). To achieve the best performance, the hyper-parameters of these models were optimized using the whale optimization algorithm (WOA). We evaluated the models using a dataset of solubility measurements in various feedstocks, and we compared their performance based on several metrics. Our results show that the SVR model tuned with WOA achieves the best performance overall, with an RMSE of 1.38 × 10⁻² and an R-squared of 0.991. These findings suggest that machine learning techniques can provide accurate predictions of hydrogen solubility in different feedstocks, which could be useful in the development of hydrogen-related technologies. In addition, the solubility of hydrogen in the four heavy oil fractions is estimated over temperature and pressure ranges of 150 °C–350 °C and 1.2 MPa–10.8 MPa, respectively.
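
A sketch of the SVR-based solubility model; a scikit-learn grid search is used here as a stand-in for the whale optimization algorithm (WOA) tuning step, and the temperature/pressure/feedstock data are synthetic placeholders.

```python
# Sketch of an SVR solubility model with hyper-parameter tuning. A grid search stands
# in for the WOA optimizer used in the paper; all data below are synthetic.
import numpy as np
from sklearn.svm import SVR
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import GridSearchCV

rng = np.random.default_rng(11)
n = 300
T = rng.uniform(150, 350, n)     # temperature (deg C)
P = rng.uniform(1.2, 10.8, n)    # pressure (MPa)
feed = rng.integers(0, 4, n)     # feedstock type encoded 0-3
X = np.column_stack([T, P, feed])
y = 0.002 * P * (T / 100) + 0.01 * feed + rng.normal(0, 0.02, n)  # H2 solubility (arbitrary units)

pipe = make_pipeline(StandardScaler(), SVR(kernel="rbf"))
grid = GridSearchCV(pipe,
                    {"svr__C": [1, 10, 100], "svr__epsilon": [0.001, 0.01, 0.1],
                     "svr__gamma": ["scale", 0.1, 1.0]},
                    cv=5, scoring="neg_root_mean_squared_error")
grid.fit(X, y)
print("best params:", grid.best_params_, "CV RMSE:", round(-grid.best_score_, 4))
```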

Keywords: temperature, pressure variations, machine learning, oil treatment

Procedia PDF Downloads 69
7548 Environmental Modeling of Storm Water Channels

Authors: L. Grinis

Abstract:

Turbulent flow in complex geometries receives considerable attention due to its importance in many engineering applications and has been the subject of interest for many researchers. One of these applications is the design of storm water channels. The design of these channels requires testing through physical models. The main practical limitation of physical models is the so-called “scale effect”, that is, the fact that in many cases only primary physical mechanisms can be correctly represented, while secondary mechanisms are often distorted. These observations form the basis of our study, which centered on problems associated with the design of storm water channels near the Dead Sea, in Israel. To help reach a final design decision we used different physical models. Our research showed good agreement with the results of laboratory tests and theoretical calculations, and allowed us to study different effects of fluid flow in an open channel. We determined that problems of this nature cannot be solved only by means of theoretical calculation and computer simulation. This study demonstrates the use of physical models to help resolve very complicated problems of fluid flow through baffles and similar structures. The study applies these models and observations to different constructions and multiphase water flows, among them those that include sand and stone particles, in a significant attempt to bring the testing laboratory into closer association with reality.

Keywords: open channel, physical modeling, baffles, turbulent flow

Procedia PDF Downloads 284
7547 Application of the Least Squares Method in the Adjustment of Chlorodifluoromethane (HCFC-142b) Regression Models

Authors: L. J. de Bessa Neto, V. S. Filho, J. V. Ferreira Nunes, G. C. Bergamo

Abstract:

There are many situations in which human activities have significant effects on the environment, and damage to the ozone layer is one of them. The objective of this work is to use the Least Squares Method, considering linear, exponential, logarithmic, power and second-degree polynomial models, to analyze, through the coefficient of determination (R²), which model best fits the behavior of chlorodifluoromethane (HCFC-142b), in parts per trillion, between 1992 and 2018, as well as to estimate future concentrations 5 and 10 periods ahead, i.e. the concentration of this pollutant in the years 2023 and 2028, for each of the fits. A total of 809 observations of the HCFC-142b concentration at one of the monitoring stations for gases precursors of the deterioration of the ozone layer during the period studied were selected and, using these data, the statistical software Excel was used to make the scatter plots for each of the adjustment models. With the development of the present study, it was observed that the logarithmic fit was the model that best fit the data set, since, besides having a significant R², its adjusted curve was compatible with the natural trend curve of the phenomenon.
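
A short sketch of fitting the logarithmic model by least squares and scoring it with R², with a synthetic series standing in for the 809 HCFC-142b observations; the study itself used Excel rather than Python.

```python
# Sketch of a least-squares logarithmic fit and R^2, plus extrapolation 5 and 10
# periods beyond the last observation. The series below is synthetic, not the
# monitoring-station data used in the study.
import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(2)
t = np.arange(1, 810, dtype=float)                          # observation index (1992-2018)
conc = 8.0 + 3.5 * np.log(t) + rng.normal(0, 0.4, t.size)   # concentration in ppt, synthetic

def log_model(t, a, b):
    return a + b * np.log(t)

params, _ = curve_fit(log_model, t, conc)
pred = log_model(t, *params)
ss_res = np.sum((conc - pred) ** 2)
ss_tot = np.sum((conc - conc.mean()) ** 2)
print("a, b =", np.round(params, 3), " R^2 =", round(1 - ss_res / ss_tot, 4))

# Extrapolate 5 and 10 periods ahead (cf. the 2023 and 2028 estimates)
print("forecasts:", log_model(np.array([t[-1] + 5, t[-1] + 10]), *params))
```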

Keywords: chlorodifluoromethane (HCFC-142b), ozone, least squares method, regression models

Procedia PDF Downloads 124
7546 Production Optimization under Geological Uncertainty Using Distance-Based Clustering

Authors: Byeongcheol Kang, Junyi Kim, Hyungsik Jung, Hyungjun Yang, Jaewoo An, Jonggeun Choe

Abstract:

It is important to characterize reservoir properties for better production management. Due to limited information, there are geological uncertainties in very heterogeneous or channel reservoirs. One of the solutions is to generate multiple equi-probable realizations using geostatistical methods. However, some models have wrong properties, which need to be excluded for simulation efficiency and reliability. We propose a novel model selection scheme based on distance-based clustering for reliable application of a production optimization algorithm. Distance is defined as a degree of dissimilarity between the data. We calculate the Hausdorff distance to classify the models based on their similarity. The Hausdorff distance is useful for shape matching of the reservoir models. We use multi-dimensional scaling (MDS) to describe the models in a two-dimensional space and group them by K-means clustering. Rather than simulating all models, we choose one representative model from each cluster and find the best model, which has production rates similar to the true values. Through this process, we can select good reservoir models near the best model with high confidence. We generate 100 channel reservoir models using single normal equation simulation (SNESIM). Since oil and gas prefer to flow through the sand facies, it is critical to characterize the pattern and connectivity of the channels in the reservoir. After calculating Hausdorff distances and projecting the models by MDS, we can see that the models group depending on their channel patterns. These channel distributions affect the operation controls of each production well, so the model selection scheme improves the management optimization process. We use one of the useful global search algorithms, particle swarm optimization (PSO), for our production optimization. PSO is good at finding the global optimum of an objective function, but it takes too much time due to its use of many particles and iterations. In addition, if we use multiple reservoir models, the simulation time for PSO soars. By using the proposed method, we can select good and reliable models that already match the production data. Considering the geological uncertainty of the reservoir, we can obtain well-optimized production controls for maximum net present value. The proposed method offers a novel solution for selecting good cases among the various possibilities. The model selection scheme can be applied not only to production optimization but also to history matching or other ensemble-based methods for efficient simulations.
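
An illustrative sketch of the model-selection workflow (Hausdorff distance, MDS projection, K-means clustering, one representative model per cluster); the binary facies grids are synthetic stand-ins for SNESIM realizations, and the cluster count is arbitrary.

```python
# Illustrative sketch of distance-based clustering of reservoir realizations:
# Hausdorff distances -> MDS projection -> K-means -> representative per cluster.
# The random facies grids below stand in for SNESIM channel realizations.
import numpy as np
from scipy.spatial.distance import directed_hausdorff
from sklearn.manifold import MDS
from sklearn.cluster import KMeans

rng = np.random.default_rng(1)
# 20 synthetic 50x50 facies grids (1 = channel sand, 0 = background shale)
models = [(rng.random((50, 50)) < 0.25).astype(int) for _ in range(20)]

def hausdorff(a, b):
    """Symmetric Hausdorff distance between the channel-cell coordinates of two grids."""
    pa, pb = np.argwhere(a == 1), np.argwhere(b == 1)
    return max(directed_hausdorff(pa, pb)[0], directed_hausdorff(pb, pa)[0])

n = len(models)
dist = np.zeros((n, n))
for i in range(n):
    for j in range(i + 1, n):
        dist[i, j] = dist[j, i] = hausdorff(models[i], models[j])

coords = MDS(n_components=2, dissimilarity="precomputed", random_state=0).fit_transform(dist)
labels = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(coords)

# One representative model per cluster (closest to the cluster centroid in MDS space)
for k in range(4):
    members = np.where(labels == k)[0]
    centroid = coords[members].mean(axis=0)
    rep = members[np.argmin(np.linalg.norm(coords[members] - centroid, axis=1))]
    print(f"cluster {k}: representative model index {rep}")
```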

Keywords: distance-based clustering, geological uncertainty, particle swarm optimization (PSO), production optimization

Procedia PDF Downloads 144
7545 Currency Exchange Rate Forecasts Using Quantile Regression

Authors: Yuzhi Cai

Abstract:

In this paper, we discuss a Bayesian approach to quantile autoregressive (QAR) time series model estimation and forecasting. Together with a combining forecasts technique, we then predict USD to GBP currency exchange rates. Combined forecasts contain all the information captured by the fitted QAR models at different quantile levels and are therefore better than those obtained from individual models. Our results show that an unequally weighted combining method performs better than other forecasting methodologies. We found that a median AR model can perform well in point forecasting when the predictive density functions are symmetric. However, in practice, using the median AR model alone may involve the loss of information about the data captured by other QAR models. We recommend that combined forecasts be used whenever possible.
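
A minimal sketch of a first-order quantile autoregression fitted at several quantile levels with statsmodels; the Bayesian estimation and the combining scheme discussed above are not reproduced, and the exchange-rate series is simulated.

```python
# Sketch of a QAR(1) model fitted at several quantile levels. The simulated USD/GBP
# series and the one-step-ahead quantile forecasts are illustrative only.
import numpy as np
import statsmodels.api as sm
from statsmodels.regression.quantile_regression import QuantReg

rng = np.random.default_rng(8)
n = 500
rate = np.empty(n)
rate[0] = 1.30                                   # synthetic USD/GBP level
for t in range(1, n):
    rate[t] = 0.001 + 0.995 * rate[t - 1] + rng.normal(0, 0.005)

y = rate[1:]
X = sm.add_constant(rate[:-1])                   # lag-1 regressor

forecasts = {}
for q in (0.05, 0.5, 0.95):
    res = QuantReg(y, X).fit(q=q)
    # One-step-ahead conditional quantile given the latest observed rate
    forecasts[q] = float(res.predict(np.array([[1.0, rate[-1]]]))[0])
print(forecasts)   # a crude predictive interval plus a median point forecast
```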

Keywords: combining forecasts, MCMC, predictive density functions, quantile forecasting, quantile modelling

Procedia PDF Downloads 256
7544 The Impact of Introspective Models on Software Engineering

Authors: Rajneekant Bachan, Dhanush Vijay

Abstract:

The visualization of operating systems has refined the Turing machine, and current trends suggest that the emulation of 32 bit architectures will soon emerge. After years of technical research into Web services, we demonstrate the synthesis of gigabit switches, which embodies the robust principles of theory. Loam, our new algorithm for forward-error correction, is the solution to all of these challenges.

Keywords: software engineering, architectures, introspective models, operating systems

Procedia PDF Downloads 538
7543 Design and Development of an Algorithm to Predict Fluctuations of Currency Rates

Authors: Nuwan Kuruwitaarachchi, M. K. M. Peiris, C. N. Madawala, K. M. A. R. Perera, V. U. N Perera

Abstract:

Dealing with businesses in the foreign market has always taken a special place in a country’s economy. Political and social factors come into play, making currency rates fluctuate rapidly. Currency rate prediction has become an important factor for larger international businesses, since large amounts of money are exchanged between countries. This research focuses on comparing the accuracy of mainly three models: Autoregressive Integrated Moving Average (ARIMA), Artificial Neural Networks (ANN) and Support Vector Machines (SVM). Series of import, export, and USD exchange rate data with respect to the LKR were selected for training using the above-mentioned algorithms. After training on the data set and comparing each algorithm, it was observed that the SVM predictions performed better than the other models. The results were improved further by combining the SVM and SVR models.
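
A brief sketch comparing an ARIMA forecast with an SVR forecast built on lagged values; the simulated USD/LKR series, the ARIMA order, and the lag depth are illustrative assumptions.

```python
# Sketch comparing ARIMA and SVR forecasts of an exchange-rate series by RMSE.
# The simulated USD/LKR series, model orders, and lag depth are assumptions.
import numpy as np
from statsmodels.tsa.arima.model import ARIMA
from sklearn.svm import SVR

rng = np.random.default_rng(4)
n = 400
rate = 300 + np.cumsum(rng.normal(0.05, 0.5, n))      # synthetic USD/LKR series
train, test = rate[:-10], rate[-10:]

arima_fc = ARIMA(train, order=(1, 1, 1)).fit().forecast(steps=10)

lags = 5
X = np.array([train[i - lags:i] for i in range(lags, len(train))])
y = train[lags:]
svr = SVR(kernel="rbf", C=100).fit(X, y)
history = list(train[-lags:])
svr_fc = []
for _ in range(10):                                   # recursive multi-step forecast
    nxt = svr.predict(np.array(history[-lags:]).reshape(1, -1))[0]
    svr_fc.append(nxt)
    history.append(nxt)

rmse = lambda f: np.sqrt(np.mean((np.asarray(f) - test) ** 2))
print("ARIMA RMSE:", round(rmse(arima_fc), 3), " SVR RMSE:", round(rmse(svr_fc), 3))
```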

Keywords: ARIMA, ANN, FFNN, RMSE, SVM, SVR

Procedia PDF Downloads 212
7542 Arsenic Speciation in Cicer arietinum: A Terrestrial Legume That Contains Organoarsenic Species

Authors: Anjana Sagar

Abstract:

Arsenic-contaminated groundwater is a major concern in South Asia. Arsenic enters the food chain not only through drinking water but also through the use of arsenic-polluted water for irrigation. Arsenic is highly toxic in its inorganic forms; however, organic forms of arsenic are comparatively less toxic. In terrestrial plants, the inorganic form of arsenic is predominantly found; however, we found that a significant proportion of organic arsenic was present in the roots and shoots of a staple legume, chickpea (Cicer arietinum L.). Chickpea plants were raised in pot culture on soils spiked with arsenic ranging from 0 to 70 mg arsenate per kg soil. Total arsenic concentrations of chickpea shoots and roots were determined by inductively coupled plasma mass spectrometry (ICP-MS), ranging from 0.76 to 20.26 and 2.09 to 16.43 µg g⁻¹ dry weight, respectively. Information on arsenic species was acquired by a methanol/water extraction method, with arsenic species being analyzed by high-performance liquid chromatography (HPLC) coupled with ICP-MS. Dimethylarsinic acid (DMA) was the only organic arsenic species found, in amounts from 0.02 to 3.16% of the total shoot arsenic concentration and 0 to 6.93% of the total root arsenic concentration, respectively. To investigate the source of the organic arsenic in chickpea plants, arsenic species in the rhizosphere soils of the plants were also examined. The absence of organic arsenic in the soils suggests the possibility of formation of DMA in the plants. The present investigation provides useful information for a better understanding of the distribution of arsenic species in terrestrial legume plants.

Keywords: arsenic, arsenic speciation, dimethylarsinic acid, organoarsenic

Procedia PDF Downloads 139
7541 DEMs: A Multivariate Comparison Approach

Authors: Juan Francisco Reinoso Gordo, Francisco Javier Ariza-López, José Rodríguez Avi, Domingo Barrera Rosillo

Abstract:

The evaluation of the quality of a data product is based on the comparison of the product with a reference of greater accuracy. In the case of DEM data products, quality assessment usually focuses on positional accuracy, and few studies consider other terrain characteristics, such as slope and orientation. The proposal made here consists of evaluating the similarity of two DEMs (a product and a reference) through the joint analysis of the distribution functions of the variables of interest, for example, elevations, slopes and orientations. This is a multivariable approach that focuses on distribution functions, not on single parameters such as mean values or dispersions (e.g. root mean squared error or variance), and is considered to be a more holistic approach. The use of the Kolmogorov-Smirnov test is proposed due to its non-parametric nature, since the distributions of the variables of interest cannot always be adequately modeled by parametric models (e.g. the Normal distribution model). In addition, its application to the multivariate case is carried out jointly by means of a single test on the convolution of the distribution functions of the variables considered, which avoids the use of corrections such as Bonferroni when several statistical hypothesis tests are carried out together. In this work, two DEM products have been considered: DEM02, with a resolution of 2x2 meters, and DEM05, with a resolution of 5x5 meters, both generated by the National Geographic Institute of Spain. DEM02 is considered the reference and DEM05 the product to be evaluated. In addition, the slope and aspect derived models have been calculated by GIS operations on the two DEM datasets. Through sample simulation processes, the adequate behavior of the Kolmogorov-Smirnov statistical test has been verified when the null hypothesis is true, which allows calibrating the value of the statistic for the desired significance level (e.g. 5%). Once the process has been calibrated, the same process can be applied to compare the similarity of different DEM data sets (e.g. DEM05 versus DEM02). In summary, an innovative alternative for the comparison of DEM data sets, based on a multinomial non-parametric perspective, has been proposed by means of a single Kolmogorov-Smirnov test. This new approach could be extended to other DEM features of interest (e.g. curvature) and to more than three variables.
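
A simple sketch of a Kolmogorov-Smirnov comparison of two DEM products on elevation and slope; note that, unlike the single joint test on the convolution of distributions proposed above, this illustration applies a separate two-sample test per variable, and the grids are synthetic.

```python
# Sketch of a Kolmogorov-Smirnov comparison between two DEM products. Synthetic
# elevation grids stand in for DEM02 (reference) and DEM05 (product); slopes are
# obtained with a simple gradient, and a two-sample KS test is applied per variable.
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(6)
dem02 = 500 + 50 * rng.standard_normal((400, 400))         # reference, 2 m resolution
dem05 = dem02[::2, ::2] + rng.normal(0, 1.5, (200, 200))   # product, 5 m resolution (degraded)

def slope(dem, cell):
    gy, gx = np.gradient(dem, cell)
    return np.degrees(np.arctan(np.hypot(gx, gy)))

for name, a, b in [("elevation", dem02.ravel(), dem05.ravel()),
                   ("slope", slope(dem02, 2.0).ravel(), slope(dem05, 5.0).ravel())]:
    stat, p = ks_2samp(a, b)
    print(f"{name}: KS statistic = {stat:.4f}, p-value = {p:.3g}")
```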

Keywords: data quality, DEM, kolmogorov-smirnov test, multivariate DEM comparison

Procedia PDF Downloads 115
7540 Modelling Phase Transformations in Zircaloy-4 Fuel Cladding under Transient Heating Rates

Authors: Jefri Draup, Antoine Ambard, Chi-Toan Nguyen

Abstract:

Zirconium alloys exhibit solid-state phase transformations under thermal loading. These can lead to a significant evolution of the microstructure and associated mechanical properties of materials used in nuclear fuel cladding structures. Therefore, the ability to capture the effects of phase transformation on the material constitutive behavior is of interest during conditions of severe transient thermal loading. Whilst typical Avrami, or Johnson-Mehl-Avrami-Kolmogorov (JMAK), type models for phase transformations have been shown to correlate well with the behavior of Zircaloy-4 under constant heating rates, the effects of variable and fast heating rates are not fully explored. The present study utilises the results of in-situ high energy synchrotron X-ray diffraction (SXRD) measurements in order to validate the phase transformation models for Zircaloy-4 under fast, variable heating rates. These models are used to assess the performance of fuel cladding structures under loss of coolant accident (LOCA) scenarios. The results indicate that simple Avrami type models can provide a reasonable indication of the phase distribution in experimental test specimens under variable fast thermal loading. However, the accuracy of these models deteriorates under the faster heating regimes, i.e. 100 °C s⁻¹. The studies highlight areas for improvement of simple Avrami type models, such as the inclusion of temperature rate dependence of the JMAK n-exponent.
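
A minimal sketch of the isothermal JMAK (Avrami) transformed-fraction relation, X(t) = 1 − exp(−k tⁿ), with an assumed Arrhenius temperature dependence of k; the constants are illustrative, not calibrated Zircaloy-4 parameters, and the fast variable-heating-rate treatment discussed above is not reproduced.

```python
# Sketch of an isothermal JMAK (Avrami) model with an assumed Arrhenius rate constant.
# All numerical values are illustrative; they are not calibrated Zircaloy-4 parameters.
import numpy as np

def jmak_fraction(t, k, n):
    """Isothermal JMAK transformed fraction: X(t) = 1 - exp(-k * t^n)."""
    return 1.0 - np.exp(-k * t**n)

def arrhenius_k(T_kelvin, k0=5.0e3, Q=2.0e5, R=8.314):
    """Assumed Arrhenius temperature dependence of the rate constant k."""
    return k0 * np.exp(-Q / (R * T_kelvin))

t = np.linspace(0.0, 30.0, 7)        # time in seconds
for T in (1100.0, 1200.0):           # temperatures (K) within the transformation range
    X = jmak_fraction(t, arrhenius_k(T), n=2.5)
    print(f"T = {T:.0f} K:", np.round(X, 3))
```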

Keywords: accident, fuel, modelling, zirconium

Procedia PDF Downloads 142
7539 A Pedagogical Case Study on Consumer Decision Making Models: A Selection of Smart Phone Apps

Authors: Yong Bum Shin

Abstract:

This case focuses on the Weighted Additive Difference, Conjunctive, Disjunctive, and Elimination by Aspects methodologies in consumer decision-making models, and on the Simple Additive Weighting (SAW) approach in the multi-criteria decision-making (MCDM) area. Most decision-making models illustrate that the rank reversal phenomenon is unpreventable. This paper shows that rank reversal occurs in popular managerial methods such as Weighted Additive Difference (WAD), the Conjunctive Method, the Disjunctive Method, and Elimination by Aspects (EBA), as well as in MCDM methods such as Simple Additive Weighting (SAW), and finally presents the Unified Commensurate Multiple (UCM) model, which successfully addresses these rank reversal problems found in the most popular MCDM methods in the decision-making area.
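
A small sketch of Simple Additive Weighting (SAW) that also exhibits rank reversal: removing a non-optimal alternative flips the ranking of the remaining two. The decision matrix and weights are made up for illustration.

```python
# Sketch of SAW scoring and the rank reversal phenomenon: deleting the worst
# alternative changes the relative ranking of the remaining ones. The smartphone-app
# scores and criteria weights are made up for illustration.
import numpy as np

weights = np.array([0.4, 0.2, 0.4])                       # criteria weights (sum to 1)
apps = {"App A": [18, 18, 10], "App B": [13, 13, 18], "App C": [4, 4, 20]}

def saw_ranking(alternatives):
    mat = np.array(list(alternatives.values()), dtype=float)
    norm = mat / mat.max(axis=0)                          # max normalization per criterion
    scores = norm @ weights
    return sorted(zip(alternatives, scores), key=lambda kv: -kv[1])

print(saw_ranking(apps))        # App A ranks above App B
apps.pop("App C")               # remove the worst alternative
print(saw_ranking(apps))        # the ranking of App A and App B reverses
```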

Keywords: multiple criteria decision making, rank inconsistency, unified commensurate multiple, analytic hierarchy process

Procedia PDF Downloads 81
7538 A Comparative Evaluation of the SIR and SEIZ Epidemiological Models to Describe the Diffusion Characteristics of COVID-19 Polarizing Viewpoints on Online

Authors: Maryam Maleki, Esther Mead, Mohammad Arani, Nitin Agarwal

Abstract:

This study is conducted to examine how opposing viewpoints related to COVID-19 were diffused on Twitter. To accomplish this, six datasets using two epidemiological models, SIR (Susceptible, Infected, Recovered) and SEIZ (Susceptible, Exposed, Infected, Skeptics), were analyzed. The six datasets were chosen because they represent opposing viewpoints on the COVID-19 pandemic. Three of the datasets contain anti-subject hashtags, while the other three contain pro-subject hashtags. The time frame for all datasets is three years, starting from January 2020 to December 2022. The findings revealed that while both models were effective in evaluating the propagation trends of these polarizing viewpoints, the SEIZ model was more accurate with a relatively lower error rate (6.7%) compared to the SIR model (17.3%). Additionally, the relative error for both models was lower for anti-subject hashtags compared to pro-subject hashtags. By leveraging epidemiological models, insights into the propagation trends of polarizing viewpoints on Twitter were gained. This study paves the way for the development of methods to prevent the spread of ideas that lack scientific evidence while promoting the dissemination of scientifically backed ideas.
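
A minimal sketch of the SIR compartmental model integrated numerically and read as an information-diffusion curve; the rate constants and population size are illustrative, and the SEIZ variant would add Exposed and Skeptic compartments.

```python
# Sketch of the SIR model applied to idea diffusion: S = not yet exposed, I = actively
# tweeting, R = no longer engaged. The rates and population size are illustrative.
import numpy as np
from scipy.integrate import solve_ivp

def sir(t, y, beta, gamma, N):
    S, I, R = y
    dS = -beta * S * I / N
    dI = beta * S * I / N - gamma * I
    dR = gamma * I
    return [dS, dI, dR]

N = 10_000                       # users potentially exposed to the hashtag
beta, gamma = 0.4, 0.1           # "adoption" and "loss of interest" rates (assumed)
sol = solve_ivp(sir, (0, 120), [N - 10, 10, 0], args=(beta, gamma, N),
                t_eval=np.linspace(0, 120, 13))

for t, infected in zip(sol.t, sol.y[1]):
    print(f"day {t:5.1f}: actively tweeting users ~ {infected:8.1f}")
```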

Keywords: mathematical modeling, epidemiological model, seiz model, sir model, covid-19, twitter, social network analysis, social contagion

Procedia PDF Downloads 62
7537 Comparative Sustainability Performance Analysis of Australian Companies Using Composite Measures

Authors: Ramona Zharfpeykan, Paul Rouse

Abstract:

Organizational sustainability is important to both organizations themselves and their stakeholders. Despite its increasing popularity and increasing numbers of organizations reporting sustainability, research on evaluating and comparing the sustainability performance of companies is limited. The aim of this study was to develop models to measure sustainability performance for both cross-sectional and longitudinal comparisons across companies in the same or different industries. A secondary aim was to see if sustainability reports can be used to evaluate sustainability performance. The study used both a content analysis of Australian sustainability reports in mining and metals and financial services for 2011-2014 and a survey of Australian and New Zealand organizations. Two methods ranging from a composite index using uniform weights to data envelopment analysis (DEA) were employed to analyze the data and develop the models. The results show strong statistically significant relationships between the developed models, which suggests that each model provides a consistent, systematic and reasonably robust analysis. The results of the models show that for both industries, companies that had sustainability scores above or below the industry average stayed almost the same during the study period. These indices and models can be used by companies to evaluate their sustainability performance and compare it with previous years, or with other companies in the same or different industries. These methods can also be used by various stakeholders and sustainability ranking companies such as the Global Reporting Initiative (GRI).

Keywords: data envelopment analysis, sustainability, sustainability performance measurement system, sustainability performance index, global reporting initiative

Procedia PDF Downloads 181
7536 A Study of High Viscosity Oil-Gas Slug Flow Using Gamma Densitometer

Authors: Y. Baba, A. Archibong-Eso, H. Yeung

Abstract:

Experimental studies of high viscosity oil-gas flows in horizontal pipelines published in the literature have indicated that hydrodynamic slug flow is the dominant flow pattern observed. Investigations have shown that hydrodynamic slugging brings about high instabilities in pressure that can damage production facilities, making it essential to study the high viscosity slug flow regime so as to improve the understanding of its flow dynamics. Most slug flow models used in the petroleum industry for the design of pipelines, together with their closure relationships, were formulated based on observations of low viscosity liquid-gas flows. New experimental investigations and data are therefore required to validate these models. In cases where these models underperform, improving upon them or building new predictive models and correlations will also depend on new experimental datasets and further understanding of the flow dynamics in high viscosity oil-gas flows. In this study, conducted at the Flow Laboratory, Oil and Gas Engineering Centre of Cranfield University, slug flow variables such as pressure gradient, mean liquid holdup, frequency and slug length for oil viscosities ranging from 1.0 to 5.5 Pa.s are experimentally investigated and analysed. The study was carried out in a 0.076 m ID pipe; two fast-sampling gamma densitometers and pressure transducers (differential and point) were used to obtain experimental measurements. Comparison of the measured slug flow parameters with the existing slug flow prediction models available in the literature showed disagreement with the high viscosity experimental data, thus highlighting the importance of building new predictive models and correlations.

Keywords: gamma densitometer, mean liquid holdup, pressure gradient, slug frequency and slug length

Procedia PDF Downloads 329
7535 Simulation of Optimal Runoff Hydrograph Using Ensemble of Radar Rainfall and Blending of Runoffs Model

Authors: Myungjin Lee, Daegun Han, Jongsung Kim, Soojun Kim, Hung Soo Kim

Abstract:

Recently, localized heavy rainfall and typhoons have occurred frequently due to climate change, and the resulting damage is becoming bigger. Therefore, we need more accurate predictions of rainfall and runoff. However, gauge rainfall has limited accuracy in space. Radar rainfall is better than gauge rainfall for explaining the spatial variability of rainfall, but it is mostly underestimated, with uncertainty involved. Therefore, an ensemble of radar rainfall was simulated using an error structure with respect to gauge rainfall to overcome the uncertainty. The simulated ensemble was used as the input data of the rainfall-runoff models to obtain an ensemble of runoff hydrographs. Previous studies have discussed the accuracy of rainfall-runoff models. Even if the same input data, such as rainfall, are used for the runoff analysis with different models in the same basin, the models can give different results because of the uncertainty involved in the models. Therefore, we used two models, the SSARR model, which is a lumped model, and the Vflo model, which is a distributed model, and tried to simulate the optimum runoff considering the uncertainty of each rainfall-runoff model. The study basin is located in the Han River basin, and we obtained one integrated runoff hydrograph, which is an optimum runoff hydrograph, using blending methods such as Multi-Model Super Ensemble (MMSE), Simple Model Average (SMA), and Mean Square Error (MSE). From this study, we could confirm the accuracy of the rainfall and rainfall-runoff models using ensemble scenarios and various rainfall-runoff models, and this result can be used to study flood control measures under climate change. Acknowledgements: This work is supported by the Korea Agency for Infrastructure Technology Advancement (KAIA) grant funded by the Ministry of Land, Infrastructure and Transport (Grant 18AWMP-B083066-05).
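
A short sketch of blending two simulated hydrographs into one optimum hydrograph using the Simple Model Average and an inverse-MSE weighting; the MMSE scheme and the actual SSARR/Vflo outputs are not reproduced, and all series are synthetic.

```python
# Sketch of blending two simulated runoff hydrographs into one optimum hydrograph.
# Simple Model Average (SMA) and an assumed inverse-MSE weighting are shown; the
# series below are synthetic stand-ins for the SSARR and Vflo simulations.
import numpy as np

rng = np.random.default_rng(9)
t = np.arange(48)                                           # hours
observed = 50 * np.exp(-0.5 * ((t - 20) / 6.0) ** 2)        # synthetic observed hydrograph (m^3/s)
ssarr_sim = observed * 1.10 + rng.normal(0, 2, t.size)      # lumped-model simulation (biased high)
vflo_sim = observed * 0.85 + rng.normal(0, 2, t.size)       # distributed-model simulation (biased low)

sma = 0.5 * (ssarr_sim + vflo_sim)                          # Simple Model Average

mse = np.array([np.mean((s - observed) ** 2) for s in (ssarr_sim, vflo_sim)])
w = (1.0 / mse) / np.sum(1.0 / mse)                         # inverse-MSE weights
mse_blend = w[0] * ssarr_sim + w[1] * vflo_sim

for name, series in [("SSARR", ssarr_sim), ("Vflo", vflo_sim),
                     ("SMA", sma), ("MSE blend", mse_blend)]:
    print(f"{name:10s} RMSE = {np.sqrt(np.mean((series - observed) ** 2)):.2f}")
```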

Keywords: radar rainfall ensemble, rainfall-runoff models, blending method, optimum runoff hydrograph

Procedia PDF Downloads 280
7534 Wind Fragility of Window Glass in 10-Story Apartment with Two Different Window Models

Authors: Viriyavudh Sim, WooYoung Jung

Abstract:

Damage due to high wind is not limited to load-resisting components such as beams and columns. The majority of damage is due to breaches in the building envelope, such as broken roofs, windows, and doors. In this paper, the wind fragility of window glass in a residential apartment was determined to compare the difference between two window configuration models. The Monte Carlo simulation method was used to derive damage data, and analytical fragilities were constructed. The fragility of the window system showed that windows located in the leeward wall had a higher probability of failure, especially those close to the edge of the structure. Between the two window models, Model 2 had a higher probability of failure; this was due to the number of panels in this configuration.
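
A compact sketch of deriving a wind fragility curve for window glass by Monte Carlo simulation and fitting a lognormal fragility function; the glass resistance statistics, pressure coefficient, and air density are assumptions for illustration.

```python
# Sketch of a Monte Carlo wind fragility derivation: sample random glass resistances,
# count failures at each wind speed, and fit a lognormal fragility curve. The
# resistance statistics and pressure relation below are assumptions.
import numpy as np
from scipy.stats import norm
from scipy.optimize import curve_fit

rng = np.random.default_rng(10)
n_sim = 20_000
resistance = rng.lognormal(mean=np.log(2500.0), sigma=0.25, size=n_sim)  # glass capacity (Pa)

def wind_pressure(v, cp=1.2, rho=1.225):
    """Peak wind pressure on the window (Pa) for gust speed v (m/s), assumed coefficients."""
    return 0.5 * rho * v**2 * cp

speeds = np.arange(20, 91, 5)                                             # m/s
pf = np.array([np.mean(wind_pressure(v) > resistance) for v in speeds])  # empirical failure prob.

def lognormal_cdf(v, median, beta):
    return norm.cdf(np.log(v / median) / beta)

(median, beta), _ = curve_fit(lognormal_cdf, speeds, pf, p0=(60.0, 0.2))
print(f"fragility: median wind speed = {median:.1f} m/s, dispersion beta = {beta:.3f}")
```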

Keywords: wind fragility, glass window, high rise building, wind disaster

Procedia PDF Downloads 257