Search results for: accurate data
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 26491

25651 Comparison of Existing Predictor and Development of Computational Method for S-Palmitoylation Site Identification in Arabidopsis thaliana

Authors: Ayesha Sanjana Kawser Parsha

Abstract:

S-acylation is a modification in which cysteine residues are linked via a thioester bond to the fatty acids palmitate (74%) or stearate (22%), at either the COOH or NH2 terminus. Several experimental methods can identify S-palmitoylation sites, but because they are time-consuming, computational methods are increasingly necessary. Few existing predictors, however, locate S-palmitoylation sites in Arabidopsis thaliana with sufficient accuracy, which motivates building a better prediction tool. To identify which machine learning algorithm predicts this site most accurately on the experimental dataset, several prediction tools were examined, including GPS PALM 6.0, pCysMod, GPS LIPID 1.0, CSS PALM 4.0, and NBA PALM, by constructing receiver operating characteristic (ROC) plots and computing the area under the curve (AUC) score. Building on this analysis, a deep learning-based prediction tool was developed using three types of sequence-based input features: amino acid composition, binary encoding profile, and autocorrelation features. The model comprises five layers with two activation functions and associated parameters and hyperparameters. It was built with various feature combinations and, after training and validation on the experimental dataset with 8- and 10-fold cross-validation, performed best when all features were present. When tested on unseen data, such as the GPS PALM 6.0 plant and pCysMod mouse datasets, the model again performed well, with an AUC score near 1.
Comparing the 10-fold cross-validation AUC of the new model with the established tools' AUC on their respective training sets shows that this model outperforms the prior tools in predicting S-palmitoylation sites in the experimental dataset. The objective of this study is to develop a prediction tool for Arabidopsis thaliana that is more accurate than current tools, as measured by AUC. Predicting S-palmitoylation sites with this method can support both plant food production and the identification of immunological treatment targets.
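The AUC comparison that ranks the predictors can be sketched in a few lines; the labels and scores below are invented for illustration, not the paper's cross-validation data.

```python
# Sketch: comparing predictors by area under the ROC curve (AUC),
# the metric used to rank the S-palmitoylation site predictors.

def roc_auc(labels, scores):
    """AUC via the Mann-Whitney U statistic: the probability that a
    randomly chosen positive site outscores a randomly chosen negative."""
    pos = [s for l, s in zip(labels, scores) if l == 1]
    neg = [s for l, s in zip(labels, scores) if l == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Hypothetical cross-validation scores for two predictors on the same sites
labels    = [1, 1, 1, 0, 0, 0]
new_model = [0.9, 0.8, 0.7, 0.3, 0.2, 0.1]   # near-perfect ranking
baseline  = [0.9, 0.4, 0.7, 0.6, 0.2, 0.1]   # one positive mis-ranked

print(roc_auc(labels, new_model))  # 1.0
print(roc_auc(labels, baseline))
```

An AUC near 1 means positives are almost always scored above negatives, which is the sense in which the new model "performed better" above.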

Keywords: S-palmitoylation, ROC plot, area under the curve, cross-validation score

Procedia PDF Downloads 72
25650 An Inverse Heat Transfer Algorithm for Predicting the Thermal Properties of Tumors during Cryosurgery

Authors: Mohamed Hafid, Marcel Lacroix

Abstract:

This study aimed at developing an inverse heat transfer approach for predicting the time-varying freezing front and the temperature distribution of tumors during cryosurgery. Using a temperature probe pressed against the tumor layer, the inverse approach predicts simultaneously the metabolic heat generation and the blood perfusion rate of the tumor. Once these parameters are predicted, the temperature field and time-varying freezing fronts are determined with the direct model, which rests on the one-dimensional Pennes bioheat equation; the phase-change problem is handled with the enthalpy method. The Levenberg-Marquardt method (LMM) combined with the Broyden method (BM) is used to solve the inverse model. The effects (a) of the thermal properties of the diseased tissues, (b) of the initial guesses for the unknown thermal properties, (c) of the data capture frequency, and (d) of noise on the recorded temperatures are examined. It is shown that the proposed inverse approach remains accurate for all the cases investigated.
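The two-parameter inverse step can be sketched with a minimal Levenberg-Marquardt loop. The exponential model below is a toy stand-in (an assumption), not the Pennes bioheat equation, and a fixed damping factor replaces the adaptive scheme of full LM.

```python
# Minimal Levenberg-Marquardt sketch for an inverse problem: recover two
# unknown parameters (stand-ins for metabolic heat and blood perfusion)
# from simulated probe readings of y = a * exp(-b * t).
import math

def model(a, b, t):
    return a * math.exp(-b * t)

def lm_fit(ts, ys, a, b, lam=1e-2, iters=50):
    for _ in range(iters):
        # residuals and Jacobian of the toy model
        r  = [y - model(a, b, t) for t, y in zip(ts, ys)]
        Ja = [math.exp(-b * t) for t in ts]
        Jb = [-a * t * math.exp(-b * t) for t in ts]
        # damped normal equations (J^T J + lam*I) d = J^T r, solved 2x2
        A11 = sum(j * j for j in Ja) + lam
        A22 = sum(j * j for j in Jb) + lam
        A12 = sum(x * y for x, y in zip(Ja, Jb))
        g1  = sum(j * e for j, e in zip(Ja, r))
        g2  = sum(j * e for j, e in zip(Jb, r))
        det = A11 * A22 - A12 * A12
        a  += (A22 * g1 - A12 * g2) / det
        b  += (A11 * g2 - A12 * g1) / det
    return a, b

ts = [i * 0.5 for i in range(10)]
ys = [model(2.0, 0.7, t) for t in ts]          # "measured" data, noise-free
a_hat, b_hat = lm_fit(ts, ys, a=1.0, b=0.3)    # deliberately poor initial guess
print(a_hat, b_hat)                            # converges toward (2.0, 0.7)
```

In the paper's setting the forward evaluation would be the Pennes/enthalpy direct model rather than a closed-form expression, but the damped-normal-equation update is the same idea.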

Keywords: cryosurgery, inverse heat transfer, Levenberg-Marquardt method, thermal properties, Pennes model, enthalpy method

Procedia PDF Downloads 198
25649 Big Data Analysis with RHadoop

Authors: Ji Eun Shin, Byung Ho Jung, Dong Hoon Lim

Abstract:

It is almost impossible to store or analyze big data, which grows exponentially, with traditional technologies; Hadoop is a technology that makes this possible. R is by far the most popular statistical programming language, and with RHadoop, which integrates R with the Hadoop environment for distributed processing, we implemented parallel multiple regression analysis on actual data of different sizes. Experimental results showed that our RHadoop system became much faster as the number of data nodes increased. We also compared the performance of RHadoop with the lm function and the biglm package running in main memory. The results showed that RHadoop was faster than the other approaches owing to parallel processing, as the number of map tasks increases with the size of the data.
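The map-reduce idea behind distributed regression can be illustrated (here in Python rather than R, purely for illustration): each mapper reduces its chunk of rows to the sufficient statistics of least squares, and the reducer combines them, so no node ever sees the full data set.

```python
# Map-reduce sketch of parallel simple linear regression:
# mappers emit sufficient statistics, the reducer solves the fit.

def map_stats(chunk):
    n   = len(chunk)
    sx  = sum(x for x, _ in chunk)
    sy  = sum(y for _, y in chunk)
    sxx = sum(x * x for x, _ in chunk)
    sxy = sum(x * y for x, y in chunk)
    return (n, sx, sy, sxx, sxy)

def reduce_stats(parts):
    n, sx, sy, sxx, sxy = map(sum, zip(*parts))
    slope = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    intercept = (sy - slope * sx) / n
    return slope, intercept

data = [(x, 2.0 * x + 1.0) for x in range(100)]       # y = 2x + 1
chunks = [data[i:i + 25] for i in range(0, 100, 25)]  # four "data nodes"
slope, intercept = reduce_stats([map_stats(c) for c in chunks])
print(slope, intercept)  # 2.0 1.0
```

RHadoop's mapreduce() does the same thing for full multiple regression, with the partial cross-product matrices as the mapper output.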

Keywords: big data, Hadoop, parallel regression analysis, R, RHadoop

Procedia PDF Downloads 435
25648 A Mutually Exclusive Task Generation Method Based on Data Augmentation

Authors: Haojie Wang, Xun Li, Rui Yin

Abstract:

To address memorization overfitting in the meta-learning MAML algorithm, a method of generating mutually exclusive tasks based on data augmentation is proposed. The method generates a mutex task by mapping one feature of the data to multiple labels, so that the generated mutex task is inconsistent with the data distribution of the initial dataset. Because generating mutex tasks for all data would produce a large amount of invalid data and, in the worst case, lead to exponential growth of computation, this paper also proposes a key data extraction method that extracts only part of the data to generate the mutex task. Experiments show that generating mutually exclusive tasks effectively mitigates memorization overfitting in the meta-learning MAML algorithm.
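The core trick, one feature mapped to multiple labels across tasks, can be sketched as follows; the task-construction recipe and examples are illustrative assumptions, not the paper's exact procedure.

```python
# Sketch of mutually exclusive task generation: the same input is
# deliberately given a different label in each generated task, so no
# single input->label mapping works across tasks and the meta-learner
# cannot simply memorize the training pairs.
import random

def make_mutex_tasks(key_examples, n_labels, n_tasks, seed=0):
    rng = random.Random(seed)
    tasks = []
    for _ in range(n_tasks):
        # each task permutes the label space for the extracted key data
        perm = list(range(n_labels))
        rng.shuffle(perm)
        tasks.append([(x, perm[y]) for x, y in key_examples])
    return tasks

# "key data" extracted from the initial dataset: (feature, original label)
key = [("cheap flights to rome", 0), ("score of the final match", 1)]
tasks = make_mutex_tasks(key, n_labels=3, n_tasks=4)
for t in tasks:
    print(t)
```

Generating tasks only for the extracted key data, rather than every example, is what keeps the augmentation cost from exploding.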

Keywords: data augmentation, mutex task generation, meta-learning, text classification

Procedia PDF Downloads 91
25647 Efficient Positioning of Data Aggregation Point for Wireless Sensor Network

Authors: Sifat Rahman Ahona, Rifat Tasnim, Naima Hassan

Abstract:

Data aggregation is a helpful technique for reducing data communication overhead in a wireless sensor network, and one of its important tasks is the positioning of the aggregator points. Although much work has been done on data aggregation, the efficient positioning of the aggregation points has received little attention. In this paper, the authors focus on the positioning, or placement, of aggregation points in a wireless sensor network and propose an algorithm to select aggregator positions for a scenario where aggregator nodes are more powerful than sensor nodes.
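A baseline version of the placement problem can be sketched as below; this is a generic greedy illustration under the assumption that total sensor-to-aggregator distance is the cost proxy, not the paper's own algorithm.

```python
# Illustrative sketch of aggregation-point placement: among candidate
# positions for a powerful aggregator node, pick the one minimising the
# total sensor-to-aggregator distance, a proxy for communication cost.
import math

def total_distance(candidate, sensors):
    return sum(math.dist(candidate, s) for s in sensors)

def place_aggregator(candidates, sensors):
    return min(candidates, key=lambda c: total_distance(c, sensors))

sensors = [(0, 0), (0, 2), (2, 0), (2, 2)]   # sensor field corners
candidates = [(1, 1), (5, 5), (0, 0)]        # feasible aggregator spots
best = place_aggregator(candidates, sensors)
print(best)  # (1, 1): the central spot wins
```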

Keywords: aggregation point, data communication, data aggregation, wireless sensor network

Procedia PDF Downloads 155
25646 Spatial Econometric Approaches for Count Data: An Overview and New Directions

Authors: Paula Simões, Isabel Natário

Abstract:

This paper reviews a number of theoretical aspects of implementing an explicit spatial perspective in econometrics for modelling non-continuous data in general, and count data in particular. It provides an overview of the several spatial econometric approaches available for modelling data collected with reference to location in space, from classical spatial econometrics to recent developments for modelling count data in a Bayesian hierarchical setting. Considerable attention is paid to the inferential framework necessary for structurally consistent spatial econometric count models incorporating spatial lag autocorrelation, to the corresponding estimation and testing procedures under different assumptions, and to the constraints and implications embedded in the various specifications in the literature. The review combines insights from the classical spatial econometrics literature with those from hierarchical modelling and analysis of spatial data, in order to identify possible new directions for processing count data in a spatial hierarchical Bayesian econometric context.

Keywords: spatial data analysis, spatial econometrics, Bayesian hierarchical models, count data

Procedia PDF Downloads 591
25645 On Voice in English: An Awareness Raising Attempt on Passive Voice

Authors: Meral Melek Unver

Abstract:

This paper explores ways to help English as a Foreign Language (EFL) learners notice and revise voice in English, and to raise their awareness of when and how to use active and passive voice to convey meaning in written and spoken work. Because passive voice is commonly preferred in certain genres, such as academic essays and news reports, despite current trends promoting active voice, learners must be fully aware of the meaning, use, and form of passive voice to communicate well. The participants in the study are 22 EFL learners taking a one-year intensive English course at a university, who will receive English-medium education (EMI) in their departmental studies in the following academic year. Data from the students' written and oral work were collected over a four-week period, and misuse or inaccurate use of passive voice was identified. Analysis of the data showed that the learners failed to make sensible decisions about when and how to use passive voice, partly because of differences between their mother tongue and English and partly because they were not aware that active and passive voice are not always interchangeable. To overcome this, a Test-Teach-Test lesson, as opposed to a Present-Practice-Produce lesson, was designed and implemented to raise their awareness of the decisions involved in choosing voice and to help them notice the meaning and use of passive voice through concept-checking questions. The results suggest, first, that awareness-raising activities on the meaning and use of voice in English help students produce accurate and meaningful output, and second, that helping students notice and re-notice passive voice through carefully designed activities helps them internalize its use and form.
As a result of the study, a number of activities are suggested for revising and noticing passive voice, along with a short questionnaire to help EFL teachers self-reflect on their teaching.

Keywords: voice in English, test-teach-test, passive voice, English language teaching

Procedia PDF Downloads 219
25644 A NoSQL-Based Approach for Real-Time Management of Robotics Data

Authors: Gueidi Afef, Gharsellaoui Hamza, Ben Ahmed Samir

Abstract:

This paper deals with the continual progression of data that has driven the emergence of new data management solutions: NoSQL databases. They span several areas, including personalization, profile management, real-time big data, content management, catalogs, customer views, mobile applications, the Internet of Things, digital communication, and fraud detection. These database management systems are proliferating: they store data well, but the trend of big data demands new structures and methods for managing enterprise data. Intelligent machines, such as those in the e-learning sector, thrive on data: the more data, the more and faster they can learn. Robotics is the use case on which we focus our tests. Implementing NoSQL for robotics wrestles all the data robots acquire into usable form, because with ordinary approaches there are severe limits to managing and finding the exact information in real time. Our proposed approach is demonstrated through experimental studies and a running example used as a use case.

Keywords: NoSQL databases, database management systems, robotics, big data

Procedia PDF Downloads 352
25643 Long- and Short-Term Impacts of COVID-19 and Gold Price on Price Volatility: A Comparative Study of MIDAS and GARCH-MIDAS Models for USA Crude Oil

Authors: Samir K. Safi

Abstract:

The purpose of this study was to compare the performance of two types of models, MIDAS and GARCH-MIDAS, in predicting the volatility of crude oil returns from gold price returns and the COVID-19 pandemic. The study aimed to identify which model provides more accurate short-term and long-term predictions and which better handles the increased volatility caused by the pandemic. The findings reveal that the MIDAS model performed better in predicting short-term and long-term volatility before the pandemic, while the GARCH-MIDAS model performed significantly better in handling the increased volatility caused by the pandemic. The study highlights the importance of selecting appropriate models to handle the complexities of real-world data and shows that the choice of model can significantly affect the accuracy of predictions. The practical implications of model selection and potential methodological adjustments for future research are highlighted and discussed.
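The MIDAS ingredient both models share is a beta-polynomial lag weighting that aggregates a high-frequency regressor (e.g. daily gold returns) into one low-frequency term; the parameter values below are illustrative assumptions.

```python
# Sketch of MIDAS beta-lag weights: w_k proportional to
# x^(theta1-1) * (1-x)^(theta2-1) over K high-frequency lags,
# normalised to sum to one.

def beta_weights(K, theta1=1.0, theta2=5.0):
    xs = [(k + 1) / (K + 1) for k in range(K)]
    raw = [x ** (theta1 - 1) * (1 - x) ** (theta2 - 1) for x in xs]
    s = sum(raw)
    return [r / s for r in raw]

def midas_term(high_freq_lags, weights):
    # one low-frequency regressor built from K high-frequency lags
    return sum(w * x for w, x in zip(weights, high_freq_lags))

w = beta_weights(K=22)       # roughly one month of daily lags
print(round(sum(w), 10))     # 1.0
print(w[0] > w[-1])          # True: recent days weigh more
```

The GARCH-MIDAS variant multiplies a short-run GARCH component by a long-run component driven by exactly this kind of weighted sum.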

Keywords: GARCH-MIDAS, MIDAS, crude oil, gold, COVID-19, volatility

Procedia PDF Downloads 64
25642 Modified Weibull Approach for Bridge Deterioration Modelling

Authors: Niroshan K. Walgama Wellalage, Tieling Zhang, Richard Dwight

Abstract:

State-based Markov deterioration models (SMDM) sometimes fail to find accurate transition probability matrix (TPM) values and hence lead to invalid future condition predictions or incorrect average deterioration rates, mainly due to drawbacks of existing nonlinear optimization-based algorithms and/or the subjective function types used for regression analysis. Furthermore, a set of separate condition-versus-age functions for each condition state, which is of interest to industry partners, cannot be derived directly from a Markov model for a given bridge element group. This paper presents a new approach for generating homogeneous SMDM output, the Modified Weibull approach, which consists of a set of appropriate functions describing the percentage condition prediction of bridge elements in each state. These functions are combined with a Bayesian approach and Metropolis-Hastings algorithm (MHA) based Markov chain Monte Carlo (MCMC) simulation to quantify the uncertainty in the model parameter estimates. In this study, factors contributing to rail bridge deterioration were identified, and inspection data for 1,000 Australian railway bridges over 15 years were reviewed and filtered based on real operational experience. A network-level deterioration model for a typical bridge element group was developed using the proposed Modified Weibull approach. The condition-state predictions obtained from this method were validated against a test data set using statistical hypothesis tests. Results show that the proposed model not only predicts network-level conditions accurately but also captures the model uncertainties within a given confidence interval.
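The estimation step can be sketched with a minimal Metropolis-Hastings sampler for a Weibull shape/scale pair, here fitted to synthetic "time in condition state" data with flat priors; the chain settings and data are illustrative, not the study's.

```python
# Minimal Metropolis-Hastings MCMC sketch: sample the posterior of
# Weibull shape k and scale lam from synthetic duration data.
import math, random

def weibull_loglik(k, lam, data):
    if k <= 0 or lam <= 0:
        return -math.inf
    return sum(math.log(k / lam) + (k - 1) * math.log(t / lam)
               - (t / lam) ** k for t in data)

rng = random.Random(42)
data = [rng.weibullvariate(10.0, 2.0) for _ in range(200)]  # scale 10, shape 2

k, lam, chain = 1.0, 5.0, []
ll = weibull_loglik(k, lam, data)
for i in range(5000):
    k_new, l_new = k + rng.gauss(0, 0.1), lam + rng.gauss(0, 0.3)
    ll_new = weibull_loglik(k_new, l_new, data)
    if math.log(rng.random()) < ll_new - ll:   # accept/reject step
        k, lam, ll = k_new, l_new, ll_new
    if i >= 2500:                              # discard burn-in
        chain.append((k, lam))

k_mean   = sum(c[0] for c in chain) / len(chain)
lam_mean = sum(c[1] for c in chain) / len(chain)
print(k_mean, lam_mean)  # posterior means near the true (2, 10)
```

The spread of the retained chain, not just its mean, is what supplies the confidence intervals reported for the condition predictions.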

Keywords: bridge deterioration modelling, modified Weibull approach, MCMC, Metropolis-Hastings algorithm, Bayesian approach, Markov deterioration models

Procedia PDF Downloads 726
25641 Process Mining as an Ecosystem Platform to Mitigate a Deficiency of Process Modelling

Authors: Yusra Abdulsalam Alqamati, Ahmed Alkilany

Abstract:

The teaching staff is a distinct group whose impact on the educational process plays an important role in enhancing the quality of academic education. To improve the management effectiveness of the academy, the Teaching Staff Management System (TSMS) proposes that all teacher processes be digitized. While the BPMN approach can accurately describe processes, it lacks a clear picture of the process flow map, something the process-mining approach provides by extracting information from event logs for discovery, monitoring, and model enhancement. These two methodologies were therefore combined to create the most accurate representation of system operations: data records are extracted, processes are mined and recreated in the form of a Petri net, and the result is then generated as a BPMN model for a more in-depth view of process flow. Additionally, the TSMS processes will be orchestrated to handle all requests within a guaranteed short time thanks to the integration of the Google Cloud Platform (GCP) and the BPM engine, allowing business owners to take part throughout the entire TSMS project development lifecycle.
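The discovery step that process mining contributes can be sketched as extracting the directly-follows relation from an event log, the raw material from which algorithms such as the alpha miner build a Petri net; the log below is a made-up teaching-staff request trace, not TSMS data.

```python
# Sketch: derive the directly-follows relation from an event log,
# the first step of Petri-net discovery in process mining.

def directly_follows(log):
    pairs = set()
    for trace in log:
        pairs.update(zip(trace, trace[1:]))
    return pairs

log = [
    ["submit request", "review", "approve", "archive"],
    ["submit request", "review", "reject"],
]
df = directly_follows(log)
print(sorted(df))
```

From these pairs a discovery algorithm infers places and transitions, which is how the mined model fills the flow-map gap left by a hand-drawn BPMN diagram.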

Keywords: process mining, BPM, business process model and notation, Petri net, teaching staff, Google Cloud Platform

Procedia PDF Downloads 137
25640 Fuzzy Optimization Multi-Objective Clustering Ensemble Model for Multi-Source Data Analysis

Authors: C. B. Le, V. N. Pham

Abstract:

In modern data analysis, multi-source data appears more and more often in real applications, and multi-source data clustering has emerged as an important issue in the data mining and machine learning community. Different sources provide information about different aspects of the data, so linking multi-source data is essential to improve clustering performance. In practice, however, multi-source data is often heterogeneous, uncertain, and large, which is considered a major challenge. Ensemble learning is a versatile machine learning paradigm in which learning techniques can work in parallel on big data, and clustering ensembles have been shown to outperform any standard clustering algorithm in terms of accuracy and robustness. However, most traditional clustering ensemble approaches are based on a single-objective function and single-source data. This paper proposes a new clustering ensemble method for multi-source data analysis: the fuzzy-optimized multi-objective clustering ensemble method, FOMOCE. Firstly, a clustering ensemble mathematical model is introduced, based on the structure of the multi-objective clustering function, multi-source data, and dark knowledge. Then, rules for extracting dark knowledge from the input data, clustering algorithms, and base clusterings are designed and applied. Finally, a clustering ensemble algorithm is proposed for multi-source data analysis. Experiments performed on standard sample data sets demonstrate the superior performance of the FOMOCE method compared to existing clustering ensemble methods and multi-source clustering methods.
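The consensus idea underlying clustering ensembles can be sketched in its simplest, single-objective form (a much-reduced stand-in for FOMOCE): base clusterings vote through a co-association matrix, and points that co-cluster in a majority of base partitions are merged.

```python
# Sketch of consensus clustering via a co-association matrix.

def co_association(partitions, n):
    C = [[0.0] * n for _ in range(n)]
    for p in partitions:
        for i in range(n):
            for j in range(n):
                if p[i] == p[j]:
                    C[i][j] += 1.0 / len(partitions)
    return C

def consensus(C, threshold=0.5):
    # merge points whose co-association exceeds the majority threshold
    label = list(range(len(C)))
    for i in range(len(C)):
        for j in range(i + 1, len(C)):
            if C[i][j] > threshold:
                old, new = label[j], label[i]
                label = [new if l == old else l for l in label]
    return label

# three base clusterings of five points (e.g. from different sources)
parts = [[0, 0, 0, 1, 1],
         [0, 0, 1, 1, 1],
         [0, 0, 0, 1, 1]]
labels = consensus(co_association(parts, 5))
print(labels)  # points 0-2 merge, points 3-4 merge
```

FOMOCE replaces the hard majority vote with fuzzy memberships and a multi-objective criterion, but the voting structure is the same.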

Keywords: clustering ensemble, multi-source, multi-objective, fuzzy clustering

Procedia PDF Downloads 188
25639 Modeling Activity Pattern Using XGBoost for Mining Smart Card Data

Authors: Eui-Jin Kim, Hasik Lee, Su-Jin Park, Dong-Kyu Kim

Abstract:

Smart-card data are expected to provide information on activity patterns as an alternative to conventional person-trip surveys. The focus of this study is a method that trains on person-trip survey data to supplement smart-card data, which do not contain the purpose of each trip. Only features available from smart-card data, such as spatiotemporal information on the trip and geographic information system (GIS) data near the stations, were selected for training against the survey data. XGBoost, a state-of-the-art tree-based ensemble classifier, was used to train on data from multiple sources; it uses a more regularized model formalization to control over-fitting and shows very fast execution times while performing well. The validation results showed that the proposed method efficiently estimated trip purpose, with the GIS data around the station and the duration of stay at the destination being significant features in modeling trip purpose.
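The feature side of the study can be illustrated by deriving "duration of stay at the destination" (one of the significant predictors) from raw tap records before any classifier is trained; the record layout below is a hypothetical simplification.

```python
# Sketch: derive stay-duration features from smart-card tap records
# of the form (timestamp, "board"/"alight", station).
from datetime import datetime

def stay_durations(taps):
    """Minutes between alighting at a station and next boarding there."""
    taps = sorted(taps, key=lambda t: t[0])
    features = []
    for (t1, kind1, st1), (t2, kind2, st2) in zip(taps, taps[1:]):
        if kind1 == "alight" and kind2 == "board" and st1 == st2:
            features.append((st1, (t2 - t1).total_seconds() / 60))
    return features

day = [
    (datetime(2024, 5, 1, 8, 10), "board",  "A"),
    (datetime(2024, 5, 1, 8, 40), "alight", "B"),
    (datetime(2024, 5, 1, 17, 40), "board", "B"),   # 9 h stay: likely work
    (datetime(2024, 5, 1, 18, 10), "alight", "A"),
]
print(stay_durations(day))  # [('B', 540.0)]
```

Features like this, joined with GIS attributes of station B, are what the boosted-tree classifier maps to a trip purpose such as "work".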

Keywords: activity pattern, data fusion, smart-card, XGBoost

Procedia PDF Downloads 244
25638 Electrochemical Biosensor for the Detection of Botrytis spp. in Temperate Legume Crops

Authors: Marzia Bilkiss, Muhammad J. A. Shiddiky, Mostafa K. Masud, Prabhakaran Sambasivam, Ido Bar, Jeremy Brownlie, Rebecca Ford

Abstract:

Early diagnosis and quantitation of the causal pathogen species would enable accurate and timely disease control, a key goal of Integrated Disease Management (IDM) for preventing losses, and could significantly reduce costs to growers as well as flow-on environmental impacts from excessive chemical spraying. The necrotrophic fungal disease botrytis grey mould, caused by Botrytis cinerea and Botrytis fabae, significantly reduces temperate legume yield and grain quality under favourable environmental conditions in Australia and worldwide. Several immunogenic and molecular probe-type protocols have been developed for diagnosis, but these have varying levels of species specificity, sensitivity, and consequent usefulness in the paddock. To substantially improve speed, accuracy, and sensitivity, advanced nanoparticle-based biosensor approaches have been developed. Two primer sets were designed, one each for Botrytis cinerea and Botrytis fabae, which showed species specificity with an initial sensitivity of two genomic copies/µl in a pure fungal background using multiplexed quantitative PCR. During further validation, quantitative PCR detected 100 spores on artificially infected legume leaves. In parallel, an electrocatalytic assay using functionalised magnetic nanoparticles was developed for both target fungal DNAs; it was extremely sensitive, detecting a single spore within a raw total plant nucleic acid extract background. We believe that translating this technology to the field will enable quantitative assessment of pathogen load for accurate decision support in future botrytis grey mould management.
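The quantitation step behind the qPCR validation can be sketched as converting a measured Ct value to copy number via a standard curve, Ct = intercept + slope · log10(copies); the curve parameters below are illustrative, not the assay's calibration.

```python
# Sketch: qPCR copy-number quantitation from a standard curve.

def copies_from_ct(ct, slope=-3.32, intercept=37.0):
    # slope of about -3.32 corresponds to ~100% amplification efficiency
    return 10 ** ((ct - intercept) / slope)

# a dilution series: each ~3.32-cycle increase is a 10-fold dilution
for ct in (30.36, 33.68, 37.0):
    print(round(copies_from_ct(ct)))
```

The same curve logic, read against an electrochemical signal instead of fluorescence cycles, underlies quantitation in the biosensor assay.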

Keywords: biosensor, botrytis grey mould, sensitivity, species specificity

Procedia PDF Downloads 172
25637 Reliability Analysis of Construction Schedule Plan Based on Building Information Modelling

Authors: Lu Ren, You-Liang Fang, Yan-Gang Zhao

Abstract:

In recent years, the application of BIM (Building Information Modelling) to construction schedule planning has attracted increasing research attention. To assess whether a BIM-based construction schedule plan can be completed on time, some researchers have introduced reliability theory. In such evaluations, the uncertain factors affecting the schedule are treated as random variables, and their probability distributions are assumed to be normal, determined by two parameters estimated from the mean and standard deviation of statistical data. In practical engineering, however, most of the uncertain influencing factors are not normally distributed, so evaluations made under the normality assumption can be unreasonable. To obtain a more reasonable evaluation, the distributions of the random variables must be described more comprehensively. For this purpose, this paper introduces the cubic normal distribution, determined by the first four moments (mean, standard deviation, skewness, and kurtosis), to describe the distribution of arbitrary random variables. A BIM model is first built from the design information of the structure, and a construction schedule plan is made based on it; the cubic normal distribution is then fitted to the collected statistical data on the random factors influencing the schedule, so that the reliability analysis of the BIM-based construction schedule plan can be carried out more reasonably and more accurate evaluation results can be given as a reference for implementing the actual schedule.
Finally, the efficiency and accuracy of the proposed methodology for the reliability analysis of the BIM-based construction schedule plan are demonstrated through a practical engineering case.
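The cubic-normal idea can be sketched as follows: a duration is modelled as a third-order polynomial of a standard normal variable, X = a + b·Z + c·Z² + d·Z³, whose four coefficients are fitted to the first four moments; the coefficients and deadline below are illustrative assumptions, and schedule reliability is then a Monte Carlo estimate of P(duration ≤ deadline).

```python
# Sketch: cubic-normal duration model and Monte Carlo reliability.
import random

def cubic_normal(a, b, c, d, rng):
    z = rng.gauss(0, 1)
    return a + b * z + c * z * z + d * z ** 3

rng = random.Random(1)
# right-skewed activity duration in days (c > 0 adds positive skew)
samples = [cubic_normal(20.0, 2.0, 0.8, 0.1, rng) for _ in range(20000)]

mean = sum(samples) / len(samples)
reliability = sum(s <= 25.0 for s in samples) / len(samples)
print(round(mean, 1))   # ~20.8 (a + c, since E[Z^2] = 1)
print(reliability)      # P(on-time completion by day 25)
```

With c = d = 0 this collapses back to the two-parameter normal assumption the paper argues against; the extra moments are what let the model represent skewed schedule risks.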

Keywords: BIM, construction schedule plan, cubic normal distribution, reliability analysis

Procedia PDF Downloads 146
25636 A Mutually Exclusive Task Generation Method Based on Data Augmentation

Authors: Haojie Wang, Xun Li, Rui Yin

Abstract:

To address memorization overfitting in the model-agnostic meta-learning (MAML) algorithm, a method of generating mutually exclusive tasks based on data augmentation is proposed. The method generates a mutex task by mapping one feature of the data to multiple labels, so that the generated mutex task is inconsistent with the data distribution of the initial dataset. Because generating mutex tasks for all data would produce a large amount of invalid data and, in the worst case, lead to exponential growth of computation, this paper also proposes a key data extraction method that extracts only part of the data to generate the mutex task. Experiments show that generating mutually exclusive tasks effectively mitigates memorization overfitting in the meta-learning MAML algorithm.

Keywords: mutex task generation, data augmentation, meta-learning, text classification

Procedia PDF Downloads 141
25635 Deep Learning and Accurate Performance Measure Processes for Cyber Attack Detection among Web Logs

Authors: Noureddine Mohtaram, Jeremy Patrix, Jerome Verny

Abstract:

As an enormous number of online services have been developed into web applications, security problems in web applications are becoming more serious. Most intrusion detection systems rely on each individual request, rather than on user behavior, to find a cyber-attack, and such systems can only protect web applications against known vulnerabilities rather than certain zero-day attacks. In order to detect new attacks, we analyze the HTTP traffic of web servers and divide it into two categories: normal traffic and malicious attacks. At the same time, the quality of the results obtained by deep learning (DL) in various areas of big data provides strong motivation to apply it to cybersecurity. Deep learning for attack detection has the potential to be robust, from small transformations of known attacks to new ones, owing to its capability to extract higher-level features. This research takes a deep learning approach to classifying these two categories in order to eliminate attacks and protect the web servers of the defense sector, which encounters different web traffic from other sectors (such as e-commerce and web apps). The results show that with this machine learning method, a higher accuracy rate and a lower false-alarm rate can be achieved.
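The preprocessing this approach requires can be sketched as turning raw HTTP request lines from a server log into simple numeric features that a model can score; the log lines, feature set, and threshold rule below are made up for illustration (a hand-set rule stands in for the deep model).

```python
# Sketch: extract simple features from HTTP request lines and flag
# suspicious ones; a trained deep model would replace the threshold.
import re

def features(request):
    path = request.split(" ")[1] if " " in request else request
    return {
        "length": len(path),
        "params": path.count("="),
        "specials": len(re.findall(r"[<>'\";]", path)),
    }

def looks_malicious(request):
    f = features(request)
    return f["specials"] > 0 or f["length"] > 100

normal = "GET /index.html HTTP/1.1"
attack = "GET /search?q=<script>alert(1)</script> HTTP/1.1"
print(looks_malicious(normal), looks_malicious(attack))  # False True
```

In the deep learning setting, vectors like these (or raw character sequences) are fed to the network, which learns the normal-versus-malicious boundary instead of relying on hand-set rules.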

Keywords: anomaly detection, HTTP protocol, logs, cyber attack, deep learning

Procedia PDF Downloads 208
25634 Revolutionizing Traditional Farming Using Big Data/Cloud Computing: A Review on Vertical Farming

Authors: Milind Chaudhari, Suhail Balasinor

Abstract:

Due to massive deforestation and an ever-increasing population, the organic content of the soil is depleting at a much faster rate; as a result, there is a real risk that world food production will drop by 40% in the next two decades. Vertical farming can aid food production by leveraging big data and cloud computing to ensure plants are grown naturally, providing the optimum nutrients and light determined by analyzing millions of data points. This paper outlines the most important parameters in vertical farming and how the combination of big data and AI helps calculate and analyze these millions of data points. Finally, the paper outlines how different organizations control the indoor environment by leveraging big data to enhance food quantity and quality.

Keywords: big data, IoT, vertical farming, indoor farming

Procedia PDF Downloads 173
25633 Unlocking Academic Success: A Comprehensive Exploration of Shaguf Bites’s Impact on Learning and Retention

Authors: Joud Zagzoog, Amira Aldabbagh, Radiyah Hamidaddin

Abstract:

This research aims to test whether artificial intelligence (AI) software and applications are actually effective, useful, and time-saving for those who use them. Shaguf Bites, a web application that uses AI technology, claims to help students study and memorize information more effectively in less time. The website uses smart learning, or AI-powered bite-sized repetitive learning, by transforming documents or PDFs into summarized interactive smart flashcards (Bites, n.d.). To properly test the website's effectiveness, both qualitative and quantitative methods were used. An experiment was conducted in which students first used Shaguf Bites without any prior explanation of how to use it and were then surveyed on whether the experience was helpful, efficient, time-saving, and easy to use for studying. After reviewing the collected data, we found that the majority of students considered the website straightforward and easy to use: 58% of respondents agreed that the website accurately formulated the flashcard questions, and 53% reported that they are likely to use the website again and to recommend it to others. Overall, these results indicate that Shaguf Bites proved beneficial, accurate, and time-saving for the majority of the students.

Keywords: artificial intelligence (AI), education, memorization, spaced repetition, flashcards

Procedia PDF Downloads 186
25632 Data Challenges Facing Implementation of Road Safety Management Systems in Egypt

Authors: A. Anis, W. Bekheet, A. El Hakim

Abstract:

Implementing a Road Safety Management System (SMS) in a crowded developing country such as Egypt is a necessity, and establishing a sustainable SMS requires a comprehensive, reliable data system for all information pertinent to road crashes. This paper surveys the data available in Egypt and validates it for use in an SMS, supplies some missing data, and identifies the data that remain unavailable, looking forward to the contribution of the scientific community, the authorities, and the public in solving the problem of missing or unreliable crash data. The data required for implementing an SMS in Egypt fall into three categories: the first is available data, such as fatality and injury rates, which this research shows may be inconsistent and unreliable; the second is data that are not available but can be estimated, an example being the vehicle-cost estimate given in this research; the third is data that are not available and must be measured case by case, such as the functional and geometric properties of a facility. Some inquiries are posed for the scientific community, such as how to improve the links among road safety stakeholders in order to obtain a consistent, unbiased, and reliable data system.

Keywords: road safety management system, road crash, road fatality, road injury

Procedia PDF Downloads 145
25631 Estimating CO₂ Storage Capacity under Geological Uncertainty Using 3D Geological Modeling of Unconventional Reservoir Rocks in Block nv32, Shenvsi Oilfield, China

Authors: Ayman Mutahar Alrassas, Shaoran Ren, Renyuan Ren, Hung Vo Thanh, Mohammed Hail Hakimi, Zhenliang Guan

Abstract:

The significant effect of CO₂ on global climate and the environment has gained more concern worldwide. Enhance oil recovery (EOR) associated with sequestration of CO₂ particularly into the depleted oil reservoir is considered the viable approach under financial limitations since it improves the oil recovery from the existing oil reservoir and boosts the relation between global-scale of CO₂ capture and geological sequestration. Consequently, practical measurements are required to attain large-scale CO₂ emission reduction. This paper presents an integrated modeling workflow to construct an accurate 3D reservoir geological model to estimate the storage capacity of CO₂ under geological uncertainty in an unconventional oil reservoir of the Paleogene Shahejie Formation (Es1) in the block Nv32, Shenvsi oilfield, China. In this regard, geophysical data, including well logs of twenty-two well locations and seismic data, were combined with geological and engineering data and used to construct a 3D reservoir geological modeling. The geological modeling focused on four tight reservoir units of the Shahejie Formation (Es1-x1, Es1-x2, Es1-x3, and Es1-x4). The validated 3D reservoir models were subsequently used to calculate the theoretical CO₂ storage capacity in the block Nv32, Shenvsi oilfield. Well logs were utilized to predict petrophysical properties such as porosity and permeability, and lithofacies and indicate that the Es1 reservoir units are mainly sandstone, shale, and limestone with a proportion of 38.09%, 32.42%, and 29.49, respectively. Well log-based petrophysical results also show that the Es1 reservoir units generally exhibit 2–36% porosity, 0.017 mD to 974.8 mD permeability, and moderate to good net to gross ratios. 
These estimated values of porosity, permeability, lithofacies, and net-to-gross were upscaled and distributed laterally using the Sequential Gaussian Simulation (SGS) and Sequential Indicator Simulation (SIS) methods to generate 3D reservoir geological models. The reservoir geological models show lateral heterogeneities of the reservoir properties and lithofacies, with the best reservoir rocks in the Es1-x4, Es1-x3, and Es1-x2 units, in that order. In addition, the reservoir volumetrics of the Es1 units in block Nv32 were also estimated based on the petrophysical property models and found to be between 0.554368
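The theoretical storage capacity mentioned above is conventionally computed from reservoir volumetrics as M_CO₂ = V_bulk × NTG × φ × ρ_CO₂ × E. The following is a minimal sketch of that mass-balance estimate; all input values are hypothetical illustrations, not figures from the study:

```python
def co2_storage_capacity(bulk_volume_m3, net_to_gross, porosity,
                         co2_density_kg_m3, efficiency):
    """Theoretical CO2 storage mass (kg) from reservoir volumetrics."""
    pore_volume = bulk_volume_m3 * net_to_gross * porosity
    return pore_volume * co2_density_kg_m3 * efficiency

# Hypothetical inputs for illustration only:
mass_kg = co2_storage_capacity(
    bulk_volume_m3=5.5e8,   # gross rock volume
    net_to_gross=0.6,
    porosity=0.2,
    co2_density_kg_m3=700,  # supercritical CO2 at reservoir conditions
    efficiency=0.4,         # fraction of pore space effectively accessible
)
print(f"{mass_kg / 1e9:.2f} Mt CO2")  # → 18.48 Mt CO2
```

The uncertainty analysis in the study corresponds to propagating ranges of porosity, net-to-gross, and efficiency through this relation rather than using single deterministic values.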

Keywords: CO₂ storage capacity, 3D geological model, geological uncertainty, unconventional oil reservoir, block Nv32

Procedia PDF Downloads 175
25630 Estimation of Mobility Parameters and Threshold Voltage of an Organic Thin Film Transistor Using an Asymmetric Capacitive Test Structure

Authors: Rajesh Agarwal

Abstract:

Carrier mobility at the organic/insulator interface is essential to the performance of organic thin film transistors (OTFTs). The present work describes the estimation of field-dependent mobility (FDM) parameters and the threshold voltage of an OTFT using a simple, easy-to-fabricate, two-terminal asymmetric capacitive test structure and admittance measurements. Conventionally, transfer characteristics are used to estimate the threshold voltage in an OTFT with field-independent mobility. However, this technique fails to give accurate results for devices with high contact resistance and field-dependent mobility. In this work, a new technique is presented for the characterization of a long channel organic capacitor (LCOC). The proposed technique enables accurate estimation of the mobility enhancement factor (γ), the threshold voltage (V_th), and the band mobility (µ₀) using capacitance-voltage (C-V) measurements on an OTFT. It also eliminates the need to fabricate short-channel OTFTs or metal-insulator-metal (MIM) structures for C-V measurements. To understand the device behavior and ease the analysis, a transmission-line compact model is developed. A 2-D numerical simulation was carried out to verify the correctness of the model. Results show that the proposed technique estimates device parameters accurately even in the presence of contact resistance and field-dependent mobility. Pentacene/poly(4-vinyl phenol) based top-contact, bottom-gate OTFTs were fabricated to illustrate the operation and advantages of the proposed technique. A small signal with frequency varying from 1 kHz to 5 kHz and a gate potential ranging from +40 V to -40 V were applied to the devices for the measurements.
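For orientation, the field-dependent mobility referred to above is often written in the power-law form µ = µ₀(V_GS − V_th)^γ in the OTFT literature; that form is an assumption here, not necessarily the paper's exact model, and the fitting sketch below uses synthetic data rather than the paper's C-V procedure:

```python
import numpy as np

def fit_fdm(v_gs, mobility, v_th):
    """Fit mu = mu0 * (V_GS - V_th)**gamma by linear regression in log space."""
    x = np.log(v_gs - v_th)
    gamma, log_mu0 = np.polyfit(x, np.log(mobility), 1)
    return np.exp(log_mu0), gamma

# Synthetic data generated from assumed mu0 = 0.01 cm^2/Vs, gamma = 0.3, V_th = 2 V:
v_gs = np.linspace(5.0, 40.0, 20)
mobility = 0.01 * (v_gs - 2.0) ** 0.3
mu0, gamma = fit_fdm(v_gs, mobility, v_th=2.0)
print(round(mu0, 4), round(gamma, 2))  # recovers the assumed parameters
```

In the paper's approach, V_th itself is extracted from the C-V data rather than assumed known, which is precisely what the asymmetric capacitive test structure enables.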

Keywords: capacitance, mobility, organic, thin film transistor

Procedia PDF Downloads 164
25629 A Theorem Related to Sample Moments and Two Types of Moment-Based Density Estimates

Authors: Serge B. Provost

Abstract:

Numerous statistical inference and modeling methodologies are based on sample moments rather than the actual observations. A result justifying the validity of this approach is introduced. More specifically, it will be established that given the first n moments of a sample of size n, one can recover the original n sample points. This implies that a sample of size n and its first n associated moments contain precisely the same amount of information. In practice, however, it is efficient to make use of a limited number of initial moments, as most of the relevant distributional information is contained in them. Two types of density estimation techniques that rely on such moments will be discussed. The first one expresses a density estimate as the product of a suitable base density and a polynomial adjustment whose coefficients are determined by equating the moments of the density estimate to the sample moments. The second one assumes that the derivative of the logarithm of a density function can be represented as a rational function. This gives rise to a system of linear equations involving sample moments; the density estimate is then obtained by solving a differential equation. Unlike kernel density estimation, these methodologies are ideally suited to modeling ‘big data’, as they only require a limited number of moments, irrespective of the sample size. What is more, they produce simple closed-form expressions that are amenable to algebraic manipulations. They also turn out to be more accurate, as will be shown in several illustrative examples.
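The first technique can be sketched concretely: writing the estimate as f(x) = ψ(x) Σⱼ cⱼxʲ with ψ a base density, equating moments gives the linear system Σⱼ cⱼµ_{i+j} = m_i, where µ_k are moments of ψ and m_i are sample moments. A minimal numerical sketch, assuming a normal base density and grid-based integration (the author's implementation may differ):

```python
import numpy as np

def moment_density_estimate(sample, degree, grid):
    """Polynomial-adjusted density estimate f(x) = psi(x) * sum_j c_j x^j,
    with psi a normal base density and c_j found by matching sample moments."""
    mu, sigma = sample.mean(), sample.std(ddof=0)
    psi = np.exp(-0.5 * ((grid - mu) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))
    # Base-density moments mu_k = integral of x^k * psi(x) dx, computed numerically.
    base_moments = [np.trapz(grid ** k * psi, grid) for k in range(2 * degree + 1)]
    # Linear system: sum_j c_j * mu_{i+j} = m_i (sample moments), i = 0..degree.
    A = np.array([[base_moments[i + j] for j in range(degree + 1)]
                  for i in range(degree + 1)])
    m = np.array([np.mean(sample ** i) for i in range(degree + 1)])
    c = np.linalg.solve(A, m)
    return psi * np.polyval(c[::-1], grid)  # density values on the grid

rng = np.random.default_rng(0)
sample = rng.normal(0.0, 1.0, 2000)
grid = np.linspace(-5, 5, 1001)
f = moment_density_estimate(sample, degree=4, grid=grid)
print(round(np.trapz(f, grid), 3))  # integrates to ~1
```

Note that only degree + 1 sample moments enter the system, regardless of the sample size, which is the property that makes the method attractive for large data sets.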

Keywords: density estimation, log-density, polynomial adjustments, sample moments

Procedia PDF Downloads 164
25628 Big Data-Driven Smart Policing: Big Data-Based Patrol Car Dispatching in Abu Dhabi, UAE

Authors: Oualid Walid Ben Ali

Abstract:

Big Data has become one of the buzzwords today. The recent explosion of digital data has led organizations, private and public alike, into a new era of more efficient decision making. At some point, businesses decided to use the concept to learn what makes their clients tick, with phrases like ‘sales funnel’ analysis, ‘actionable insights’, and ‘positive business impact’. So, it stands to reason that Big Data was viewed through green (read: money) colored lenses. Somewhere along the line, however, someone realized that collecting and processing data does not have to serve business purposes only; it can also assist law enforcement, improve policing, or enhance road safety. This paper briefly presents how Big Data has been used in the field of policing to improve decision making in the daily operations of the police. As an example, we present a big-data-driven system which is used to accurately dispatch patrol cars in a geographic environment. The system is also used to allocate, in real time, the nearest patrol car to the location of an incident. This system has been implemented and applied in the Emirate of Abu Dhabi in the UAE.
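The nearest-patrol-car allocation step can be sketched as a nearest-neighbour query over car positions. The coordinates and identifiers below are entirely hypothetical, and a production system would presumably use road-network travel times within a GIS rather than great-circle distance:

```python
import math

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two (lat, lon) points in kilometres."""
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlmb = math.radians(lon2 - lon1)
    a = (math.sin(dphi / 2) ** 2
         + math.cos(phi1) * math.cos(phi2) * math.sin(dlmb / 2) ** 2)
    return 2 * 6371.0 * math.asin(math.sqrt(a))

def nearest_patrol(incident, patrols):
    """Return the id of the patrol car closest to the incident location."""
    return min(patrols, key=lambda p: haversine_km(*incident, p["lat"], p["lon"]))["id"]

# Hypothetical patrol positions near central Abu Dhabi, for illustration only:
patrols = [
    {"id": "P1", "lat": 24.4539, "lon": 54.3773},
    {"id": "P2", "lat": 24.4667, "lon": 54.3667},
    {"id": "P3", "lat": 24.4200, "lon": 54.4400},
]
print(nearest_patrol((24.4550, 54.3780), patrols))  # → P1
```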

Keywords: big data, big data analytics, patrol car allocation, dispatching, GIS, intelligent, Abu Dhabi, police, UAE

Procedia PDF Downloads 490
25627 Using Bidirectional Encoder Representations from Transformers to Extract Topic-Independent Sentiment Features for Social Media Bot Detection

Authors: Maryam Heidari, James H. Jones Jr.

Abstract:

Millions of online posts about different topics and products are shared on popular social media platforms. One use of this content is to provide crowd-sourced information about a specific topic, event, or product. However, this use raises an important question: what percentage of the information available through these services is trustworthy? In particular, might some of this information be generated by a machine, i.e., a bot, instead of a human? Bots can be, and often are, purposely designed to generate enough volume to skew an apparent trend or position on a topic, yet the consumer of such content cannot easily distinguish a bot post from a human post. In this paper, we introduce a model for social media bot detection which uses Bidirectional Encoder Representations from Transformers (Google BERT) for sentiment classification of tweets to identify topic-independent features. Our use of a natural language processing approach to derive topic-independent features for our new bot detection model distinguishes this work from previous bot detection models. We achieve 94% accuracy classifying the contents of data as generated by a bot or a human, where the most accurate prior work achieved an accuracy of 92%.

Keywords: bot detection, natural language processing, neural network, social media

Procedia PDF Downloads 115
25626 Assessing Denitrification-Disintegration Model’s Efficacy in Simulating Greenhouse Gas Emissions, Crop Growth, Yield, and Soil Biochemical Processes in Moroccan Context

Authors: Mohamed Boullouz, Mohamed Louay Metougui

Abstract:

Accurate modeling of greenhouse gas (GHG) emissions, crop growth, soil productivity, and biochemical processes is crucial given escalating global concerns about climate change and the urgent need to improve agricultural sustainability. This study thoroughly investigates the application of the denitrification-disintegration (DNDC) model in the context of Morocco's unique agro-climate. Our main research hypothesis is that the DNDC model offers an effective and powerful tool for precisely simulating a wide range of significant parameters, including greenhouse gas emissions, crop growth, yield potential, and complex soil biogeochemical processes, consistent with the intricate features of the Moroccan agricultural environment. To verify this hypothesis, a large body of field data was gathered, covering Morocco's various agricultural regions and encompassing a range of soil types, climatic factors, and crop varieties. These experimental data sets serve as the foundation for careful model calibration and subsequent validation, ensuring the accuracy of the simulation results. In conclusion, the prospective research findings add to the global conversation on climate-resilient agricultural practices while promoting sustainable agricultural models in Morocco. Recognition of the DNDC model as a potent simulation tool tailored to Moroccan conditions may strengthen the ability of policy architects and agricultural actors to make informed decisions that advance not only food security but also environmental stability.

Keywords: greenhouse gas emissions, DNDC model, sustainable agriculture, Moroccan cropping systems

Procedia PDF Downloads 63
25625 Evaluating Performance of Value at Risk Models for the MENA Islamic Stock Market Portfolios

Authors: Abderrazek Ben Maatoug, Ibrahim Fatnassi, Wassim Ben Ayed

Abstract:

In this paper, we investigate the issue of market risk quantification for Middle East and North Africa (MENA) Islamic equity markets. We use Value-at-Risk (VaR) as a measure of potential risk in Islamic stock markets, for long and short positions, based on the RiskMetrics model and conditional parametric ARCH-class volatility models with normal, Student, and skewed Student distributions. The sample consists of daily data over 2006-2014 for 11 Islamic stock market indices. We conduct Kupiec and Engle and Manganelli tests to evaluate the performance of each model. Our main empirical findings are that (i) VaR models based on the Student and skewed Student distributions perform best, at the significance level of α = 1%, for all Islamic stock market indices and for both long and short trading positions, and (ii) the RiskMetrics model and the VaR model based on conditional volatility with a normal distribution provide the most accurate VaR estimations for both long and short trading positions at a significance level of α = 5%.
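As an illustration of the evaluation machinery, one-day parametric VaR under a normal return model and the Kupiec unconditional-coverage likelihood ratio test can be sketched as follows; the numbers are illustrative, not estimates from the paper:

```python
import math

def normal_var(mu, sigma, alpha=0.01):
    """One-day parametric VaR (reported as a positive loss fraction)
    under a normal return model with mean mu and volatility sigma."""
    # 1% and 5% standard-normal quantiles, hard-coded to stay dependency-free.
    z = {0.01: -2.3263, 0.05: -1.6449}[alpha]
    return -(mu + z * sigma)

def kupiec_lr(n_obs, n_violations, alpha):
    """Kupiec unconditional-coverage LR statistic; ~chi2(1) under H0
    that the observed violation rate equals the nominal level alpha."""
    p_hat = n_violations / n_obs
    def loglik(p):
        return n_violations * math.log(p) + (n_obs - n_violations) * math.log(1 - p)
    return 2 * (loglik(p_hat) - loglik(alpha))

var_1pct = normal_var(mu=0.0005, sigma=0.012, alpha=0.01)
print(round(var_1pct, 4))                    # → 0.0274 (daily loss fraction)
print(round(kupiec_lr(1000, 25, 0.01), 2))   # → 16.04, above the chi2(1) 1% critical value
```

Under a GARCH-type model, sigma would be the conditional volatility forecast for each day, and the violation count would be tallied against the resulting time-varying VaR series.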

Keywords: value-at-risk, risk management, islamic finance, GARCH models

Procedia PDF Downloads 591
25624 Comparison of Agree Method and Shortest Path Method for Determining the Flow Direction in Basin Morphometric Analysis: Case Study of Lower Tapi Basin, Western India

Authors: Jaypalsinh Parmar, Pintu Nakrani, Bhaumik Shah

Abstract:

A Digital Elevation Model (DEM) is elevation data on a virtual grid over the ground. DEMs are used in GIS applications such as hydrological modelling, flood forecasting, morphometric analysis, and surveying. For morphometric analysis, the stream flow network plays a very important role. A DEM alone often lacks the accuracy to match field data closely enough for reliable morphometric results. The present study compares the Agree method and the conventional shortest path method for determining morphometric parameters in the flat region of the Lower Tapi Basin, located in western India. For the present study, open-source SRTM data (Shuttle Radar Topography Mission, 1 arc-second resolution) and toposheets issued by the Survey of India (SOI) were used to determine the morphometric linear aspects, such as stream order, number of streams, stream length, bifurcation ratio, mean stream length, mean bifurcation ratio, stream length ratio, length of overland flow, and constant of channel maintenance; the areal aspects, such as drainage density, stream frequency, drainage texture, form factor, circularity ratio, elongation ratio, and shape factor; and the relief aspects, such as relief ratio, gradient ratio, and basin relief, for 53 catchments of the Lower Tapi Basin. The stream network was digitized from the available toposheets. The Agree DEM was created from the SRTM data and the stream network from the toposheets. The results obtained were used to compare the two methods in the flat areas.
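The shortest path method assigns each DEM cell the direction of its steepest downhill neighbour (the classic D8 scheme). A minimal sketch on a toy grid, for illustration only:

```python
import numpy as np

def d8_flow_direction(dem):
    """Steepest-descent (D8) flow direction for each interior DEM cell.
    Returns an array of (drow, dcol) steps toward the lowest of 8 neighbours."""
    rows, cols = dem.shape
    neighbours = [(-1, -1), (-1, 0), (-1, 1), (0, -1),
                  (0, 1), (1, -1), (1, 0), (1, 1)]
    direction = np.zeros((rows, cols, 2), dtype=int)
    for r in range(1, rows - 1):
        for c in range(1, cols - 1):
            # Drop per unit distance; diagonal neighbours are sqrt(2) cells away.
            slopes = [(dem[r, c] - dem[r + dr, c + dc]) / np.hypot(dr, dc)
                      for dr, dc in neighbours]
            best = int(np.argmax(slopes))
            if slopes[best] > 0:  # leave (0, 0) for pits and flat cells
                direction[r, c] = neighbours[best]
    return direction

dem = np.array([[9, 8, 7],
                [8, 5, 4],
                [7, 4, 1]])
print(tuple(d8_flow_direction(dem)[1, 1]))  # → (1, 1): flow toward the low corner
```

The Agree method, by contrast, first "burns" the digitized stream network into the DEM (lowering elevations along mapped streams) before such flow directions are computed, which is why it tends to reproduce the toposheet drainage network better in flat terrain.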

Keywords: agree method, morphometric analysis, lower Tapi basin, shortest path method

Procedia PDF Downloads 237
25623 Mining Multicity Urban Data for Sustainable Population Relocation

Authors: Xu Du, Aparna S. Varde

Abstract:

In this research, we propose to conduct diagnostic and predictive analysis of the key factors and consequences of urban population relocation. To achieve this goal, urban simulation models extract urban development trends, in the form of land use change patterns, from a variety of data sources. The results are treated as part of urban big data, along with other information such as population change and economic conditions. Multiple data mining methods are deployed on this data to analyze nonlinear relationships between parameters. The result determines the driving forces of population relocation with respect to urban sprawl, urban sustainability, and their related parameters. Experiments so far reveal that the data mining methods discover useful knowledge from the multicity urban data. This work sets the stage for developing a comprehensive urban simulation model that can answer specific questions from targeted users. It contributes towards achieving sustainability as a whole.

Keywords: data mining, environmental modeling, sustainability, urban planning

Procedia PDF Downloads 307
25622 An Experimental Investigation of the Surface Pressure on Flat Plates in Turbulent Boundary Layers

Authors: Azadeh Jafari, Farzin Ghanadi, Matthew J. Emes, Maziar Arjomandi, Benjamin S. Cazzolato

Abstract:

The turbulence within the atmospheric boundary layer induces highly unsteady aerodynamic loads on structures. These loads, if not accounted for in the design process, can lead to structural failure, and they are therefore important for the design of such structures. For an accurate prediction of wind loads, understanding the correlation between atmospheric turbulence and the aerodynamic loads is necessary. The aim of this study is to investigate the effect of turbulence within the atmospheric boundary layer on the surface pressure on a flat plate over a wide range of turbulence intensities and integral length scales. The flat plate is chosen as a fundamental geometry representative of structures such as solar panels and billboards. Experiments were conducted at the University of Adelaide large-scale wind tunnel. Two wind tunnel boundary layers with different intensities and length scales of turbulence were generated using two sets of spires with different dimensions and a fetch of roughness elements. Average longitudinal turbulence intensities of 13% and 26% were achieved in each boundary layer, and the longitudinal integral length scale within the two boundary layers was between 0.4 m and 1.22 m. The pressure distributions on a square flat plate at elevation angles between 30° and 90° were measured within the two boundary layers. It was found that the peak pressure coefficient on the flat plate increased with increasing turbulence intensity and integral length scale. For example, the peak pressure coefficient on a flat plate elevated at 90° increased from 1.2 to 3 with increasing turbulence intensity from 13% to 26%. Furthermore, both the mean and the peak pressure distributions on the flat plates varied with turbulence intensity and length scale. The results of this study can be used to provide a more accurate estimation of the unsteady wind loads on structures such as buildings and solar panels.
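The pressure coefficients reported above are tap pressures normalized by the free-stream dynamic pressure, Cp = (p − p∞) / (½ρU∞²). A minimal sketch with hypothetical tap readings (not measurements from the study):

```python
import numpy as np

def pressure_coefficient(p, p_inf, rho, u_inf):
    """Pressure coefficient: Cp = (p - p_inf) / (0.5 * rho * u_inf**2)."""
    return (p - p_inf) / (0.5 * rho * u_inf ** 2)

# Hypothetical tap pressures (Pa) in a 10 m/s flow of air (rho = 1.225 kg/m^3):
rho, u_inf, p_inf = 1.225, 10.0, 0.0
taps = np.array([40.0, 55.0, 73.5, 61.0, 48.0])
cp = pressure_coefficient(taps, p_inf, rho, u_inf)
print(round(cp.mean(), 2), round(cp.max(), 2))  # mean and peak Cp → 0.91 1.2
```

In practice, the peak Cp quoted in such studies is taken from long pressure time series at each tap, so the statistics of the extremes, not just the mean, depend on the incoming turbulence.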

Keywords: atmospheric boundary layer, flat plate, pressure coefficient, turbulence

Procedia PDF Downloads 138