Search results for: modeling accuracy
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 7103

5363 Numerical Study of Partial Penetration of PVDs in Soft Clay Soils Treatment along with Surcharge Preloading (Bangkok Airport Case Study)

Authors: Mohammad Mehdi Pardsouie, Mehdi Mokhberi, Seyed Mohammad Ali Zomorodian, Seyed Alireza Nasehi

Abstract:

One of the challenging parts of every project involving prefabricated vertical drains (PVDs) is determining the depth of installation and its configuration. In this paper, GeoStudio 2018 was used for modeling and verification of the full-scale test embankments (TS1, TS2, and TS3) that were constructed to study the effectiveness of PVDs in accelerating consolidation and dissipating the excess pore pressures resulting from fill placement at Bangkok airport. Different depths and scenarios were modeled, and the results were compared and analyzed. Since the ultimate goal is attaining a pre-determined settlement, the settlement curve under the soil embankment was used to investigate the results. It was shown that in nearly all cases, the same results and efficiency might be obtained by partial-depth installation of PVDs instead of full-depth, constant-length installation. However, because clay soil characteristics and layer properties differ from project to project, further investigation of full-scale test embankments and modeling by competent geotechnical consultants is needed before finalizing the ultimate design.

Keywords: partial penetration, surcharge preloading, excess pore water pressure, Bangkok test embankments

Procedia PDF Downloads 191
5362 Application of MALDI-MS to Differentiate SARS-CoV-2 and Non-SARS-CoV-2 Symptomatic Infections in the Early and Late Phases of the Pandemic

Authors: Dmitriy Babenko, Sergey Yegorov, Ilya Korshukov, Aidana Sultanbekova, Valentina Barkhanskaya, Tatiana Bashirova, Yerzhan Zhunusov, Yevgeniya Li, Viktoriya Parakhina, Svetlana Kolesnichenko, Yeldar Baiken, Aruzhan Pralieva, Zhibek Zhumadilova, Matthew S. Miller, Gonzalo H. Hortelano, Anar Turmuhambetova, Antonella E. Chesca, Irina Kadyrova

Abstract:

Introduction: The rapidly evolving COVID-19 pandemic, along with the re-emergence of pathogens causing acute respiratory infections (ARI), has necessitated the development of novel diagnostic tools to differentiate various causes of ARI. MALDI-MS, due to its wide usage and affordability, has been proposed as a potential instrument for diagnosing SARS-CoV-2 versus non-SARS-CoV-2 ARI. The aim of this study was to investigate the potential of MALDI-MS in conjunction with a machine learning model to accurately distinguish between symptomatic infections caused by SARS-CoV-2 and non-SARS-CoV-2 during both the early and later phases of the pandemic. Furthermore, this study aimed to analyze mass spectrometry (MS) data obtained from nasal swabs of healthy individuals. Methods: We gathered mass spectra from 252 samples, comprising 108 SARS-CoV-2-positive samples obtained in 2020 (Covid 2020), 7 SARS-CoV-2-positive samples obtained in 2023 (Covid 2023), 71 samples from symptomatic individuals without SARS-CoV-2 (Control non-Covid ARVI), and 66 samples from healthy individuals (Control healthy). All samples were subjected to RT-PCR testing. For data analysis, we employed the caret R package to train and test seven machine-learning algorithms: C5.0, KNN, NB, RF, SVM-L, SVM-R, and XGBoost. Training used a five-fold (outer) nested, repeated (five times) ten-fold (inner) cross-validation with a randomized stratified splitting approach. Results: In this study, we utilized the Covid 2020 dataset as the case group and the non-Covid ARVI dataset as the control group to train and test various machine learning (ML) models. Among these models, XGBoost and SVM-R demonstrated the highest performance, with accuracy values of 0.97 [0.93; 0.97] and 0.95 [0.95; 0.97], specificity values of 0.86 [0.71; 0.93] and 0.86 [0.79; 0.87], and sensitivity values of 0.984 [0.984; 1.000] and 1.000 [0.968; 1.000], respectively. When examining the Covid 2023 dataset, the Naive Bayes model achieved the highest classification accuracy of 43%, while XGBoost and SVM-R achieved accuracies of 14%. For the healthy control dataset, the accuracy of the models ranged from 0.27 [0.24; 0.32] for k-nearest neighbors to 0.44 [0.41; 0.45] for the support vector machine with a radial basis function kernel. Conclusion: ML models trained on MALDI-MS spectra of nasopharyngeal swabs obtained from patients with Covid during the initial phase of the pandemic, as well as from symptomatic non-Covid individuals, showed excellent classification performance, which aligns with the results of previous studies. However, when applied to swabs from healthy individuals and a limited sample of patients with Covid in the late phase of the pandemic, the ML models exhibited lower classification accuracy.
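
A minimal sketch of the nested cross-validation scheme described above (five-fold outer; five-times repeated ten-fold inner), written in Python with scikit-learn rather than the caret R package the authors used; the synthetic data, feature count, and SVM-R parameter grid are placeholder assumptions.

```python
import numpy as np
from sklearn.model_selection import (GridSearchCV, RepeatedStratifiedKFold,
                                     StratifiedKFold, cross_val_score)
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.normal(size=(179, 50))        # stand-in for 179 spectra x 50 features
y = rng.integers(0, 2, size=179)      # stand-in labels: Covid 2020 vs non-Covid ARVI

# inner loop: 5x repeated 10-fold CV tunes the hyperparameters
inner = RepeatedStratifiedKFold(n_splits=10, n_repeats=5, random_state=0)
svm_r = GridSearchCV(SVC(kernel="rbf"), {"C": [0.1, 1, 10]}, cv=inner)

# outer loop: 5-fold stratified CV estimates generalization accuracy
outer = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
print(cross_val_score(svm_r, X, y, cv=outer).mean())
```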

Keywords: SARS-CoV-2, MALDI-TOF MS, ML models, nasopharyngeal swabs, classification

Procedia PDF Downloads 96
5361 Evaluation of the Factors Affecting Violence Against Women (Case Study: Couples Referring to Family Counseling Centers in Tehran)

Authors: Hassan Manouchehri

Abstract:

The present study aimed to identify and evaluate the factors affecting violence against women. The statistical population included all couples referred to family counseling centers in Tehran due to domestic violence during the past year. A sample of 305 people was selected using simple random sampling and Cochran's formula for unlimited populations. A researcher-made questionnaire including 110 items was used for data collection. The face and content validity of the questionnaire were confirmed by 30 experts, and in a preliminary test with 30 subjects its reliability was above 0.7 for all studied variables, which is acceptable. Data were analyzed with descriptive statistics in SPSS version 22, and structural equation modeling was performed in SmartPLS version 2. A review of the theoretical framework and of domestic and foreign studies indicated that, in general, four main factors underlie violence against women: cultural and social factors, economic factors, legal factors, and medical factors. In addition, the structural equation modeling findings confirmed that cultural and social, economic, legal, and medical factors affect violence against women.

Keywords: violence against women, cultural and social factors, economic factors, legal factors, medical factors

Procedia PDF Downloads 132
5360 Simulation IDM for Schedule Generation of Slip-Form Operations

Authors: Hesham A. Khalek, Shafik S. Khoury, Remon F. Aziz, Mohamed A. Hakam

Abstract:

The linearity of slip-forming operations is a source of planning complications, and the operation is subject to bottlenecks at any point, so careful planning is required to achieve success. On the other hand, discrete-event simulation (DES) concepts can be applied to simulate and analyze construction operations and to efficiently support construction scheduling. Nevertheless, preparing input data for construction simulation is challenging, time-consuming, and error-prone. Therefore, to enhance the benefits of using DES in construction scheduling, this study proposes an integrated module that establishes a framework for automating schedule generation and decision support for slip-form construction projects, particularly through the project feasibility study phase, by exchanging data between project data stored in an intermediate database, the DES engine, and scheduling software. Using the stored information, the proposed system creates construction task attributes (e.g., activity durations, material quantities, and resource amounts), and the DES engine then uses this information to create a proposed construction schedule automatically. This research demonstrates a flexible slip-form project modeling, rapid scenario-based planning, and schedule generation approach that may be of interest to both practitioners and researchers.
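
The core DES idea can be illustrated with a minimal SimPy sketch (the study itself used EZstrobe, per the keywords); the tasks, durations, and crew resource below are invented placeholders for attributes that would come from the intermediate database.

```python
import simpy

# task names and durations (minutes) are illustrative stand-ins for the
# attributes the integrated module would read from the project database
TASKS = [("hoist concrete", 15), ("pour and vibrate", 30), ("raise form", 10)]

def lift_cycle(env, crew, log):
    for name, duration in TASKS:
        with crew.request() as req:
            yield req
            yield env.timeout(duration)
            log.append((name, env.now))

env = simpy.Environment()
crew = simpy.Resource(env, capacity=1)   # one crew is the bottleneck resource
log = []
for _ in range(3):                       # three consecutive form lifts
    env.process(lift_cycle(env, crew, log))
env.run()
print(log)                               # event times feed the generated schedule
```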

Keywords: discrete-event simulation, modeling, construction planning, data exchange, scheduling generation, EZstrobe

Procedia PDF Downloads 368
5359 Diagnostic Accuracy of Core Biopsy in Patients Presenting with Axillary Lymphadenopathy and Suspected Non-Breast Malignancy

Authors: Monisha Edirisooriya, Wilma Jack, Dominique Twelves, Jennifer Royds, Fiona Scott, Nicola Mason, Arran Turnbull, J. Michael Dixon

Abstract:

Introduction: Excision biopsy has been the investigation of choice for patients presenting with pathological axillary lymphadenopathy without a breast abnormality. Core biopsy of nodes can provide sufficient tissue for diagnosis and has advantages in terms of morbidity and speed of diagnosis. This study evaluates the diagnostic accuracy of core biopsy in patients presenting with axillary lymphadenopathy. Methods: Between 2009 and 2019, 165 patients referred to the Edinburgh Breast Unit had a total of 179 axillary lymph node core biopsies. Results: 152 (92%) of the 165 initial core biopsies were deemed to contain adequate nodal tissue. Core biopsy correctly established malignancy in 75 of the 78 patients with haematological malignancy (96%) and in all 28 patients with metastatic carcinoma (100%), and correctly diagnosed benign changes in 49 of 57 (86%) patients with benign conditions. There were no false positives and no false negatives. In 67 (85.9%) of the 78 patients with haematological malignancy, there was sufficient material in the first core biopsy to allow the pathologist to make an actionable diagnosis without requesting further tissue sampling prior to treatment. There were no complications of core biopsy. On follow-up, none of the patients with benign cores has been shown to have malignancy in the axilla, and none with lymphoma had their initial disease incorrectly classified. Conclusions: This study shows that core biopsy is now the investigation of choice for patients presenting with axillary lymphadenopathy, even in those suspected of having lymphoma.
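
The reported percentages follow directly from the raw counts above; a small Python check (illustrative only, not part of the original study's analysis):

```python
# sanity computation of the reported diagnostic figures from the raw counts
tp_haem, n_haem = 75, 78      # haematological malignancy correctly established
tp_mets, n_mets = 28, 28      # metastatic carcinoma correctly established
tn_benign, n_benign = 49, 57  # benign changes correctly diagnosed

print(f"haematological detection: {tp_haem / n_haem:.0%}")     # 96%
print(f"metastatic detection:     {tp_mets / n_mets:.0%}")     # 100%
print(f"benign agreement:         {tn_benign / n_benign:.0%}") # 86%
```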

Keywords: core biopsy, excision biopsy, axillary lymphadenopathy, non-breast malignancy

Procedia PDF Downloads 233
5358 How Evolved Mechanisms and Cultural Modeling Affect Child Gender Attribution

Authors: Stefano Federici, Alessandro Lepri, Antonella Carrera

Abstract:

Kessler and McKenna in the seventies, and more recently Federici and Lepri, investigated how an individual attributes gender to a person. By administering nude human figures, these scholars found that the penis, more than the vagina, and male sexual characteristics, more than female ones, are significantly more salient in the gender attribution process. Federici and Lepri suggested that the asymmetrical salience of sexual characteristics is attributable to evolved decision-making processes for solving gender attribution problems, so as to avoid the greatest danger of an (angry) adult male. The present study observed the behaviour of 60 children, aged between 3 and 6 years, and their parents, verifying whether child gender attribution mechanisms are permeable to cultural stereotypes. The participating children were asked to make a male or a female on a tablet by combining 12 human physical characteristics (long hair, short hair, wide hips, narrow hips, breasts, flat chest, body hair, hairless body, penis, vagina, male face, and female face) and four items of clothing (male t-shirt, female t-shirt, pants, and skirt), superimposing one or more of them on a sexually neutral manikin. An app created by the authors was installed on the tablet to replicate the earlier studies of Kessler and McKenna and of Federici and Lepri. One parent of each participating child was asked to make a male or a female using the same apparatus used by the children. In addition, the participating parents were asked to complete a test, as proposed by Federici and Lepri in their previous study, to compare adult and child processes of gender attribution. The results suggested that children are affected both by evolved mechanisms, as adults are (e.g., taking less time to make a male than a female, and using the penis more often than the vagina), and by cultural modeling of parental and environmental gender stereotypes (e.g., the genitals were often covered with pants when the task was to make a male and with a skirt when it was to make a female).

Keywords: biological sex, cognitive biases, cultural modeling, gender attribution, evolved decision-making processes

Procedia PDF Downloads 123
5357 Validation of Asymptotic Techniques to Predict Bistatic Radar Cross Section

Authors: M. Pienaar, J. W. Odendaal, J. C. Smit, J. Joubert

Abstract:

Simulations are commonly used to predict the bistatic radar cross section (RCS) of military targets since characterization measurements can be expensive and time-consuming. It is thus important to accurately predict the bistatic RCS of targets. Computational electromagnetic (CEM) methods can be used for bistatic RCS prediction. CEM methods are divided into full-wave and asymptotic methods. Full-wave methods are numerical approximations to the exact solution of Maxwell's equations. These methods are very accurate but computationally intensive and time-consuming. Asymptotic techniques make simplifying assumptions in solving Maxwell's equations and are thus less accurate, but they require less computational resources and time. Asymptotic techniques can therefore be very valuable for predicting the bistatic RCS of electrically large targets, due to the decreased computational requirements. This study extends previous work by validating the accuracy of asymptotic techniques to predict bistatic RCS through comparison with full-wave simulations as well as measurements. Validation is done with canonical structures as well as complex realistic aircraft models, instead of only looking at a complex slicy structure. The slicy structure is a combination of canonical structures, including cylinders, corner reflectors, and cubes. Validation is done over large bistatic angles and at different polarizations. Bistatic RCS measurements were conducted in a compact range at the University of Pretoria, South Africa. The measurements were performed at different polarizations from 2 GHz to 6 GHz, with fixed bistatic angles of β = 30.8°, 45°, and 90°, and were calibrated with an active calibration target. The EM simulation tool FEKO was used to generate simulated results. The full-wave multi-level fast multipole method (MLFMM) simulated results, together with the measured data, were used as the reference for validation. The accuracy of physical optics (PO) and geometrical optics (GO) was investigated. Differences relating to amplitude, lobing structure, and null positions were observed between the asymptotic, full-wave, and measured data. PO and GO were more accurate at angles close to the specular scattering directions, and the accuracy seemed to decrease as the bistatic angle increased. At large bistatic angles, PO did not perform well because the shadow regions were not treated appropriately. PO also did not perform well for canonical structures where multi-bounce was the main scattering mechanism. PO and GO do not account for diffraction, but these inaccuracies tended to decrease as the electrical size of objects increased. It was evident that both asymptotic techniques do not properly account for bistatic structural shadowing. Specular scattering was calculated accurately even when targets did not meet the electrically-large criterion. It was evident that the bistatic RCS prediction performance of PO and GO depends on incident angle, frequency, target shape, and observation angle. The improved computational efficiency of the asymptotic solvers yields a major advantage over full-wave solvers and measurements; however, there is still much room for improvement in the accuracy of these asymptotic techniques.

Keywords: asymptotic techniques, bistatic RCS, geometrical optics, physical optics

Procedia PDF Downloads 247
5356 An Alternative Approach to Machine Vision System Operation for Solving Industrial Control Issues

Authors: M. S. Nikitenko, S. A. Kizilov, D. Y. Khudonogov

Abstract:

The paper considers an approach in which a machine vision operating system is combined with a grid of light markers. This approach is used to solve several scientific and technical problems, such as measuring the capacity of an apron feeder delivering coal from a lining return port to a conveyor in the technology of mining high coal with release to a conveyor, and prototyping an obstacle detection system for an autonomous vehicle. Primary verification of a method for calculating bulk material volume using three-dimensional modeling, and validation in laboratory conditions with calculation of relative errors, were carried out. A method of calculating apron feeder capacity based on a machine vision system, together with a simplified three-dimensional model of the examined measuring area, was proposed. The proposed method allows measuring the volume of rock mass moved by an apron feeder using machine vision. This approach solves the issue of controlling the volume of coal produced by a feeder while working off high coal by longwall complexes with release to a conveyor, with accuracy sufficient for practical application. The developed mathematical apparatus for measuring feeder productivity in kg/s uses only basic mathematical functions such as addition, subtraction, multiplication, and division. This simplifies software development and expands the variety of microcontrollers and microcomputers suitable for calculating feeder capacity. A feature of the obstacle detection task is correcting distortions of the laser grid, which simplifies obstacle detection. The paper presents algorithms for video camera image processing and for controlling an autonomous vehicle model based on an obstacle detection machine vision system. A sample fragment of obstacle detection at the moment the laser grid is distorted is demonstrated.
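
A hedged sketch of the volume-to-productivity arithmetic the abstract alludes to, using only addition, subtraction, multiplication, and division; the grid spacing, surface heights, density, and timing are illustrative assumptions, not the paper's data.

```python
import numpy as np

# surface heights at the light-marker grid points, recovered from marker
# displacement in the camera image; all numbers are assumed for illustration
heights = np.array([[0.12, 0.15, 0.14],
                    [0.10, 0.13, 0.11]])   # m
cell_area = 0.05 * 0.05                    # m^2 footprint of one grid cell (assumed)

volume = heights.sum() * cell_area         # m^3 of material in the measured area
density = 1350.0                           # kg/m^3, loose coal (assumed)
interval = 0.5                             # s between two measurements (assumed)
print(volume * density / interval)         # feeder productivity estimate, kg/s
```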

Keywords: machine vision, machine vision operating system, light markers, measuring capability, obstacle detection system, autonomous transport

Procedia PDF Downloads 103
5355 A Methodology for Automatic Diversification of Document Categories

Authors: Dasom Kim, Chen Liu, Myungsu Lim, Su-Hyeon Jeon, ByeoungKug Jeon, Kee-Young Kwahk, Namgyu Kim

Abstract:

Recently, numerous documents, including unstructured data and text, have been created due to the rapid increase in the usage of social media and the Internet. Each document is usually assigned a specific category for the convenience of users. In the past, this categorization was performed manually. However, with manual categorization, not only is the accuracy of the categorization not guaranteed, but the process also requires a large amount of time and money. Many studies have been conducted on the automatic creation of categories to overcome the limitations of manual categorization. Unfortunately, most of these methods cannot be applied to categorizing complex documents with multiple topics, because the methods assume that one document can be categorized into one category only. To overcome this limitation, some studies have attempted to categorize each document into multiple categories. However, they are also limited in that their learning process requires training on a multi-categorized document set. These methods therefore cannot be applied to the multi-categorization of most documents unless multi-categorized training sets are provided. To remove the requirement of a multi-categorized training set imposed by traditional multi-categorization algorithms, we previously proposed a new methodology that can extend the category of a single-categorized document to multiple categories by analyzing relationships among categories, topics, and documents. In this paper, we design a survey-based verification scenario for estimating the accuracy of our automatic categorization methodology.
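
The label-extension idea (a single category extended via category-topic-document relationships) can be sketched roughly as below; this is a loose reconstruction with a toy corpus, an LDA topic model, and an arbitrary similarity threshold, not the authors' actual algorithm.

```python
import numpy as np
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

docs = ["stock markets fell on rate fears",
        "the team won the championship game",
        "athletes sponsored by listed companies"]
labels = ["finance", "sports", "sports"]            # one category per document

X = CountVectorizer().fit_transform(docs)
theta = LatentDirichletAllocation(n_components=2, random_state=0).fit_transform(X)

# category profile = mean topic mixture of its single-categorized documents
cats = sorted(set(labels))
profiles = {c: theta[[i for i, l in enumerate(labels) if l == c]].mean(axis=0)
            for c in cats}

def cos(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

for i, mix in enumerate(theta):                     # extend labels by similarity
    extra = [c for c in cats if c != labels[i] and cos(mix, profiles[c]) > 0.9]
    print(docs[i], "->", [labels[i]] + extra)
```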

Keywords: big data analysis, document classification, multi-category, text mining, topic analysis

Procedia PDF Downloads 262
5354 Solar Building Design Using GaAs PV Cells for Optimum Energy Consumption

Authors: Hadis Pouyafar, D. Matin Alaghmandan

Abstract:

Gallium arsenide (GaAs) solar cells are widely used in applications like spacecraft and satellites because they have a high absorption coefficient and efficiency and can withstand high-energy particles such as electrons and protons. With the energy crisis, there is a growing need for efficient and cost-effective solar cells. GaAs cells, with their 46% efficiency compared to silicon cells' 23%, can be utilized in buildings to achieve nearly zero emissions; in this way, more of the incident solar irradiation can be converted into electricity. The III-V semiconductors used in these cells offer superior performance compared to other available technologies. Despite these advantages, however, Si cells dominate the market due to their lower prices. In our study, we used software from the start to gather all relevant information, aiming to design a building that harnesses the full potential of solar energy. Our modeling results suggest a promising future for GaAs cells. We utilized the Grasshopper plugin for modeling and optimization purposes, and relied on the Ladybug and Honeybee plugins to assess radiation, weather data, solar energy levels, and other factors. We have shown that silicon solar cells may not always be the best choice for meeting electricity demands, particularly when higher power output is required. Therefore, when it comes to power consumption and the surface area available for photovoltaic (PV) installation, it may be necessary to consider more efficient solar cell options, like GaAs solar cells. By considering the building requirements and utilizing GaAs technology, we were able to optimize the PV surface area.
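
A back-of-the-envelope illustration of the trade-off described above: for a fixed electricity demand, the required PV surface area scales inversely with cell efficiency (46% for GaAs versus 23% for silicon, per the abstract). The demand and irradiance figures are assumptions for illustration only.

```python
# required PV area = demand / (solar resource x cell efficiency)
demand_kwh_day = 200.0           # building electricity demand (assumed)
sun_kwh_m2_day = 5.0             # usable solar resource at the site (assumed)

for name, eff in [("GaAs (46%)", 0.46), ("silicon (23%)", 0.23)]:
    area = demand_kwh_day / (sun_kwh_m2_day * eff)
    print(f"{name}: about {area:.0f} m2 of PV surface needed")
```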

Keywords: gallium arsenide (GaAs), optimization, sustainable building, GaAs solar cells

Procedia PDF Downloads 77
5353 Comparative Analysis of Fused Deposition Modeling and Binding-Jet 3D Printing Technologies

Authors: Mohd Javaid, Shahbaz Khan, Abid Haleem

Abstract:

Purpose: A large number of 3D printing technologies are now available for sophisticated applications in different fields. Additive manufacturing has established its dominance in the design, development, and customisation of products. In the era of developing technologies, there is a need to identify the appropriate technology for different applications. To fulfil this need, two widely used printing technologies, Fused Deposition Modeling (FDM) and Binding-Jet 3D Printing, are compared for effective utilisation in the current scenario. Methodology: A systematic literature review was conducted for both technologies, covering their applications and associated enabling factors. An appropriate multi-criteria decision-making (MCDM) tool is used to compare critical factors for both technologies. Findings: Both technologies have the potential and capabilities to provide better direction to the industry. Additionally, this paper helps in developing a decision support system for the proper selection of technologies according to their continuum of applications and associated research and development capability. The vital issue is raw materials, and research-based material development is key to the sustainability of the developed technologies. FDM is a low-cost technology that provides high-strength products compared to binding-jet technology. Researchers and companies can benefit from this study to achieve the required applications with fewer resources. Limitations: The comparison relies on expert opinion, which may not always be free from bias, and each technology has its own limitations. Originality: Comparison between these technologies will help to identify the technology best suited to customer requirements. It also charts developments in the different fields where these technologies can be successfully adopted, as per their respective capabilities. Conclusion: FDM and binding-jet technology play an active role in industrial development. They assist in the customisation and production of personalised parts cost-effectively, so there is a need to understand how these technologies can deliver such developments rapidly. These technologies enable easy changes and revised versions of a product, which is not easily possible in conventional manufacturing. High machine cost, the need for skilled human resources, low surface finish and mechanical strength of products, and limited material-changing options are the main limitations, though these limitations vary from technology to technology. In the future, these technologies are expected to become commercially viable for efficient use in the direct manufacturing of varied parts.
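
The abstract does not name its exact MCDM tool, so the following is only a generic weighted-sum sketch of how such a comparison can be scored; the criteria, weights, and 1-5 scores are illustrative assumptions, not the paper's data.

```python
# weighted-sum MCDM: higher weighted total = better fit for a given application
criteria = ["cost", "part strength", "surface finish", "material range"]
weights  = [0.35, 0.30, 0.20, 0.15]           # assumed expert weights

scores = {                                    # 1 (poor) .. 5 (good), assumed
    "FDM":         [5, 4, 2, 3],
    "binding-jet": [3, 2, 4, 4],
}
for tech, s in scores.items():
    total = sum(w * v for w, v in zip(weights, s))
    print(tech, round(total, 2))
```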

Keywords: 3D printing, comparison, fused deposition modeling, FDM, binding jet technology

Procedia PDF Downloads 99
5352 Joint Modeling of Longitudinal and Time-To-Event Data with Latent Variable

Authors: Xinyuan Y. Song, Kai Kang

Abstract:

Joint models for analyzing longitudinal and survival data are widely used to investigate the relationship between a failure time process and time-variant predictors. A common assumption in conventional joint models in the survival analysis literature is that all predictors are observable. However, this assumption may not always hold, because unobservable traits, namely latent variables, which are indirectly observable and must be measured through multiple observed variables, are commonly encountered in medical, behavioral, and financial research settings. In this study, a joint modeling approach that deals with this feature is proposed. The proposed model comprises three parts. The first part is a dynamic factor analysis model for characterizing latent variables through multiple observed indicators over time. The second part is a random coefficient trajectory model for describing the individual trajectories of latent variables. The third part is a proportional hazards model for examining the effects of time-invariant predictors and the longitudinal trajectories of time-variant latent risk factors on the hazards of interest. A Bayesian approach coupled with a Markov chain Monte Carlo algorithm is used to perform statistical inference. An application of the proposed joint model to a study from the Alzheimer's Disease Neuroimaging Initiative is presented.

Keywords: Bayesian analysis, joint model, longitudinal data, time-to-event data

Procedia PDF Downloads 132
5351 Groundwater Contamination Assessment and Mitigation Strategies for Water Resource Sustainability: A Concise Review

Authors: Khawar Naeem, Adel Elomri, Adel Zghibi

Abstract:

Contaminant leakage from municipal solid waste (MSW) landfills is a serious environmental challenge that poses a threat to interconnected ecosystems. It not only contaminates the soil of the saturated zone but also percolates down through the earth and contaminates the groundwater (GW). In this concise literature review, an effort is made to understand the environmental hazards this contamination poses to soil and groundwater, the types of contamination, and the possible solutions proposed in the literature. In the study's second phase, MSW management practices are explored, since the dumping rate and the type of MSW entering a landfill site depend directly on the MSW management strategy. Case studies from several developed and developing countries are presented, and the complex MSW management system is investigated from an operational perspective to minimize GW contamination. One of the significant tools in the literature was found to be System Dynamics Modeling (SDM), a simulation-based approach for studying stakeholder behavior. By employing the SDM approach, the risk of GW contamination can be reduced by devising effective MSW management policies, ultimately resulting in water resource sustainability and regional sustainable development.
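
To make the SDM idea concrete, here is a minimal stock-and-flow sketch in Python of the landfill-to-groundwater pathway; the structure and every rate constant are illustrative assumptions, not a calibrated model from the reviewed studies.

```python
# two stocks (waste in landfill, contaminant in groundwater) integrated with
# simple Euler steps; all rates are assumed values for illustration
dump_rate = 100.0    # t/day of MSW entering the landfill
leach_frac = 0.001   # fraction of landfilled waste leaching per day
decay = 0.0002       # natural attenuation of GW contaminant per day

waste, contaminant, dt = 0.0, 0.0, 1.0
for day in range(365 * 5):                  # five years under this policy
    leachate = leach_frac * waste
    waste += (dump_rate - leachate) * dt
    contaminant += (leachate - decay * contaminant) * dt

print(round(waste), round(contaminant, 1))  # stock levels after five years
```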

Keywords: groundwater contamination, environmental risk, municipal solid waste management, system dynamic modeling, water resource sustainability, sustainable development

Procedia PDF Downloads 62
5350 Enhanced Planar Pattern Tracking for an Outdoor Augmented Reality System

Authors: L. Yu, W. K. Li, S. K. Ong, A. Y. C. Nee

Abstract:

In this paper, a scalable augmented reality framework for handheld devices is presented. The framework is enabled by a server-client data communication structure, in which the search for tracking targets among a database of images is performed on the server side, while pixel-wise 3D tracking is performed on the client side, which in this case is a handheld mobile device. Image search on the server side adopts a residual-enhanced image descriptor representation that gives the framework its scalability. The tracking algorithm on the client side is based on a gravity-aligned feature descriptor, which takes advantage of a sensor-equipped mobile device, and an optimized intensity-based image alignment approach that ensures the accuracy of 3D tracking. Automatic content streaming is achieved using a key-frame selection algorithm, client working-phase monitoring, and standardized rules for content communication between the server and client. A recognition accuracy test performed on a standard dataset shows that the method adopted in the presented framework outperforms the Bag-of-Words (BoW) method used in some previous systems. Experimental tests conducted on a set of video sequences indicated real-time performance of the tracking system, with a frame rate of 15-30 frames per second. The framework is shown to be functional in practical situations through a demonstration application on a campus walk-around.

Keywords: augmented reality framework, server-client model, vision-based tracking, image search

Procedia PDF Downloads 270
5349 Fast Approximate Bayesian Contextual Cold Start Learning (FAB-COST)

Authors: Jack R. McKenzie, Peter A. Appleby, Thomas House, Neil Walton

Abstract:

Cold-start is a notoriously difficult problem that can occur in recommendation systems, arising when there is insufficient information to draw inferences for users or items. To address this challenge, a contextual bandit algorithm – the Fast Approximate Bayesian Contextual Cold Start Learning algorithm (FAB-COST) – is proposed, designed to provide improved accuracy compared to the traditionally used Laplace approximation in the logistic contextual bandit, while controlling both algorithmic complexity and computational cost. To this end, FAB-COST uses a combination of two moment-projection variational methods: Expectation Propagation (EP), which performs well at the cold start but becomes slow as the amount of data increases, and Assumed Density Filtering (ADF), which has slower growth of computational cost with data size but requires more data to reach an acceptable level of accuracy. By switching from EP to ADF when the dataset becomes large, FAB-COST is able to exploit their complementary strengths. The empirical justification for FAB-COST is presented and systematically compared to other approaches on simulated data. In a benchmark against the Laplace approximation on real data consisting of over 670,000 impressions from autotrader.co.uk, FAB-COST demonstrates at one point an increase of over 16% in user clicks. On the basis of these results, it is argued that FAB-COST is likely to be an attractive approach to cold-start recommendation systems in a variety of contexts.

Keywords: cold-start learning, expectation propagation, multi-armed bandits, Thompson Sampling, variational inference

Procedia PDF Downloads 101
5348 Modeling of CREB Pathway-Induced Gene Induction: From Stimulation to Repression

Authors: K. Julia Rose Mary, Victor Arokia Doss

Abstract:

Electrical and chemical stimulation up-regulates phosphorylation of CREB, a transcription factor that induces its target gene production for memory consolidation and late long-term potentiation (L-LTP) in the CA1 region of the hippocampus. L-LTP requires complex interactions among second-messenger signaling cascade molecules such as cAMP, CaMKII, CaMKIV, MAPK, RSK, and PKA, all of which converge to phosphorylate CREB, which along with CBP induces the transcription of target genes involved in memory consolidation. A differential-equation-based model of L-LTP, representing stimulus-mediated activation of downstream mediators and confirming the steep, supralinear stimulus-response effects of activation and inhibition, was used. The model was extended to accommodate the inhibitory effect of the inducible cAMP early repressor (ICER). ICER, the natural inducible CREB antagonist, represses CRE-mediated gene transcription involved in long-term plasticity for learning and memory. After verifying the sensitivity and robustness of the model, we simulated it with various empirical levels of repressor concentration to analyse their effect on gene induction. The model appears to predict the regulatory dynamics of repression on L-LTP and agrees with the experimental values. The flux data obtained in the simulations demonstrate various aspects of the equilibrium between gene induction and repression.
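
As a toy illustration of the activation/repression balance described above, the following ODE sketch couples phosphorylated CREB, target gene product, and ICER; every rate constant is invented for illustration and is not one of the paper's fitted values.

```python
from scipy.integrate import solve_ivp

def creb(t, y, stim=1.0):
    pcreb, gene, icer = y
    dpcreb = stim - 0.5 * pcreb                 # stimulus-driven phosphorylation
    induction = pcreb / (1.0 + 5.0 * icer)      # ICER represses CRE-mediated transcription
    dgene = induction - 0.1 * gene              # target gene product turnover
    dicer = 0.3 * pcreb - 0.2 * icer            # ICER is itself CREB-induced
    return [dpcreb, dgene, dicer]

sol = solve_ivp(creb, (0.0, 100.0), [0.0, 0.0, 0.0])
print(sol.y[:, -1])    # steady-state balance of induction versus repression
```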

Keywords: CREB, L-LTP, mathematical modeling, simulation

Procedia PDF Downloads 286
5347 Procedural Protocol for Dual Energy Computed Tomography (DECT) Inversion

Authors: Rezvan Ravanfar Haghighi, S. Chatterjee, Pratik Kumar, V. C. Vani, Priya Jagia, Sanjiv Sharma, Susama Rani Mandal, R. Lakshmy

Abstract:

Dual energy computed tomography (DECT) aims at recording the HU(V) values for a sample at two different voltages, V = V1, V2, and thereby obtaining the electron density (ρe) and effective atomic number (Zeff) of the substance. In the present paper, we aim to obtain a numerical algorithm by which (ρe, Zeff) can be obtained from the HU(100) and HU(140) data, where V = 100, 140 kVp. The idea is to use this inversion method to characterize and distinguish between lipid and fibrous coronary artery plaques. To develop the inversion algorithm for low-Zeff materials, as is the case with non-calcified coronary artery plaque, we prepared aqueous samples whose calculated values of (ρe, Zeff) lie in the ranges 2.65×10²³ ≤ ρe ≤ 3.64×10²³ per cc and 6.80 ≤ Zeff ≤ 8.90. We filled the phantom with these known samples and experimentally determined HU(100) and HU(140) for the same pixels. Knowing that the HU(V) values are related to the attenuation coefficient of the system, we present an algorithm by which (ρe, Zeff) is calibrated with respect to (HU(100), HU(140)). The calibration is done with a known set of 20 samples; its accuracy is checked with a different set of 23 known samples. We find that the calibration gives ρe with an accuracy of ±4%, while Zeff is found within ±1% of the actual value, with 95% confidence. In this inversion method, (ρe, Zeff) of the scanned sample can be found by eliminating the effects of the CT machine and by ensuring that the determinations of the two unknowns (ρe, Zeff) do not interfere with each other. It is found that this algorithm can be used for prediction of the chemical characteristics (ρe, Zeff) of unknown scanned materials, with a 95% confidence level, by inversion of the DECT data.
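
The paper's exact inversion algorithm is not reproduced in the abstract; the sketch below only shows the general calibration-then-inversion idea, fitting a simple linear least-squares map from (HU(100), HU(140)) to (ρe, Zeff) on synthetic placeholder data.

```python
import numpy as np

# synthetic calibration data: measured (HU(100), HU(140)) pairs and the known
# (rho_e, Z_eff) of aqueous samples; values are placeholders, not the paper's
hu = np.array([[40., 35.], [55., 48.], [70., 60.], [90., 78.]])
rho_e = np.array([2.70, 2.95, 3.20, 3.60])   # x1e23 per cc
z_eff = np.array([6.9, 7.4, 8.1, 8.8])

A = np.column_stack([hu, np.ones(len(hu))])           # [HU(100), HU(140), 1]
coef_rho, *_ = np.linalg.lstsq(A, rho_e, rcond=None)  # fit the calibration maps
coef_z, *_   = np.linalg.lstsq(A, z_eff, rcond=None)

pixel = np.array([60., 52., 1.])                      # an unknown scanned pixel
print(pixel @ coef_rho, pixel @ coef_z)               # inverted (rho_e, Z_eff)
```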

Keywords: chemical composition, dual-energy computed tomography, inversion algorithm

Procedia PDF Downloads 429
5346 Interpretation of the Russia-Ukraine 2022 War via N-Gram Analysis

Authors: Elcin Timur Cakmak, Ayse Oguzlar

Abstract:

This study presents the results of analysing tweets sent by Twitter users about the Russia-Ukraine war using bigram and trigram methods. On February 24, 2022, Russian President Vladimir Putin declared a military operation against Ukraine, and all eyes turned to this war. Many people living in Russia and Ukraine reacted to the war, protested, and expressed deep concern, feeling that the safety of their families and their futures was at stake. Most people, especially those living in Russia and Ukraine, express their views on the war in different ways, and the most popular way to do this is through social media. Many people prefer to convey their feelings using Twitter, one of the most frequently used social media tools. Since the beginning of the war, thousands of tweets about it have been posted on Twitter from many countries of the world. These tweets were extracted from data sources through the Twitter API and analysed with the Python programming language. The aim of the study is to find the word sequences in these tweets by the n-gram method, which is widely used in computational linguistics and natural language processing. The tweet language used in the study is English. The dataset consists of data obtained from Twitter between February 24, 2022, and April 24, 2022. Tweets containing the hashtags #ukraine, #russia, #war, #putin, and #zelensky together were captured as raw data, and the remaining tweets were included in the analysis stage after cleaning in the preprocessing stage. In the data analysis part, sentiments were extracted to characterize what people post about the war on Twitter; negative messages make up the majority of all tweets, at 63.6%. Furthermore, the most frequently used bigram and trigram word groups were found. The most frequently used word groups are “he, is”, “I, do”, and “I, am” for bigrams, and “I, do, not”, “I, am, not”, and “I, can, not” for trigrams. In the machine learning phase, the accuracy of classification was measured with the Classification and Regression Trees (CART) and Naïve Bayes (NB) algorithms, applied separately to bigrams and trigrams. For bigrams, the highest accuracy and F-measure values were obtained with the NB algorithm, and the highest precision and recall values with the CART algorithm. For trigrams, on the other hand, the highest accuracy, precision, and F-measure values were achieved by the CART algorithm, and the highest recall by NB.
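
Since the study's analysis language is Python, the bigram/trigram counting step can be sketched as follows; this is a minimal illustration on toy text, not the study's pipeline, and NLTK's ngrams helper is just one common choice.

```python
from collections import Counter
from nltk import ngrams   # pip install nltk

tweets = ["i do not support this war",    # toy stand-ins for cleaned tweets
          "i am not safe here",
          "he is worried about his family"]

bigrams, trigrams = Counter(), Counter()
for tweet in tweets:
    tokens = tweet.split()
    bigrams.update(ngrams(tokens, 2))
    trigrams.update(ngrams(tokens, 3))

print(bigrams.most_common(3))    # e.g. ('i', 'do'), ('i', 'am'), ...
print(trigrams.most_common(3))   # e.g. ('i', 'do', 'not'), ...
```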

Keywords: classification algorithms, machine learning, sentiment analysis, Twitter

Procedia PDF Downloads 66
5345 Domain-Specific Languages Evaluation: A Literature Review and Experience Report

Authors: Sofia Meacham

Abstract:

In this paper, Domain-Specific Language (DSL) evaluation is presented, based on the existing literature and on years of experience developing DSLs for several domains. The domains we have worked on range from AI, business applications, and finance/accounting to health. In general, DSLs have been utilised in many domains to provide tailored and efficient solutions to specific problems. Although they are a reputable method in highly technical circles and have also been used successfully by non-technical experts, to our knowledge there is no commonly accepted method for evaluating them. Some methods define criteria adapted from general software engineering quality criteria. Other literature focuses on the usability aspect of DSL evaluation and applies methods such as Human-Computer Interaction (HCI) and goal modeling. These approaches are either hard to introduce, such as goal modeling, or seem to ignore the domain-specific focus of DSLs. In our experience, DSLs have domain-specificity at their core, and consequently the methods to evaluate them should also include domain-specific criteria at their core. Such criteria require synergy between domain experts and DSL developers, in the same way that DSLs cannot be developed without domain experts' involvement. Methods from agile and other software engineering practices, such as co-creation workshops, should be further emphasised and explored to facilitate this direction. To conclude, our latest experience and plans for DSL evaluation are presented and open for discussion.

Keywords: domain-specific languages, DSL evaluation, DSL usability, DSL quality metrics

Procedia PDF Downloads 97
5344 Using Mathematical Models to Predict the Academic Performance of Students from Initial Courses in Engineering School

Authors: Martín Pratto Burgos

Abstract:

The Engineering School of the University of the Republic in Uruguay has offered an Introductory Mathematical Course since the second semester of 2019. The course is designed to help students prepare for the math courses that are essential for engineering degrees, referred to here as Math1, Math2, and Math3. This research proposes to build a model that can accurately predict students' activity and academic progress based on their performance in these three essential courses, along with a model that can forecast the effect of the Introductory Mathematical Course on approval of the three essential courses during the first academic year. The techniques used are principal component analysis and predictive modeling with the generalised linear model. The dataset includes information on 5135 engineering students and 12 different characteristics based on activity and course performance. Two models are created in the R programming language for data that follow a binomial distribution. Model 1 retains variables whose p-values are less than 0.05, and Model 2 uses the stepAIC function to remove variables and obtain the lowest AIC score. After principal component analysis, the main components represented on the y-axis relate to approval of the Introductory Mathematical Course, and those on the x-axis to approval of the Math1 and Math2 courses and to student activity three years after taking the Introductory Mathematical Course. Model 2, which considered student activity, performed best, with an AUC of 0.81 and an accuracy of 84%. According to Model 2, students' engagement in school activities will continue for three years after approval of the Introductory Mathematical Course, because they have successfully completed the Math1 and Math2 courses; passing the Math3 course has no effect on student activity. Concerning academic progress, the best fit is Model 1, with an AUC of 0.56 and an accuracy of 91%. This model indicates that if students pass the three first-year courses, they progress according to the timeline set by the curriculum. Both models show that the Introductory Mathematical Course does not directly affect student activity or academic progress. The best model for explaining the impact of the Introductory Mathematical Course on the three first-year courses was Model 1, with an AUC of 0.76 and 98% accuracy. It shows that passing the Introductory Mathematical Course helps students pass the Math1 and Math2 courses without affecting their performance in the Math3 course. Taken together, the three predictive models indicate that students who pass the Math1 and Math2 courses stay active for three years after taking the Introductory Mathematical Course and continue following the recommended engineering curriculum, and that the Introductory Mathematical Course helps students pass Math1 and Math2 when they start Engineering School. The models obtained in this research do not consider the time students took to pass the three math courses, but they can successfully assess courses in the university curriculum.
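
The study fits its binomial models in R (glm plus stepAIC); a minimal Python equivalent of the Model 1 step, on synthetic data with invented feature stand-ins, might look like the following.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
X = rng.normal(size=(200, 3))   # invented stand-ins for e.g. Math1/Math2/Math3 results
y = (X @ np.array([1.2, 0.8, 0.0]) + rng.normal(size=200) > 0).astype(int)

# binomial GLM; retain predictors with p < 0.05, as in Model 1
fit = sm.GLM(y, sm.add_constant(X), family=sm.families.Binomial()).fit()
print(fit.pvalues)    # the third predictor should come out non-significant
```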

Keywords: machine-learning, engineering, university, education, computational models

Procedia PDF Downloads 80
5343 Using Virtual Reality Exergaming to Improve Health of College Students

Authors: Juanita Wallace, Mark Jackson, Bethany Jurs

Abstract:

Introduction: Exergames, VR games used as a form of exercise, are being used to reduce sedentary lifestyles in a vast number of populations. However, there is a distinct lack of research comparing the physiological response during VR exergaming to that of traditional exercise. The purpose of this study was to provide a foundational investigation establishing changes in physiological responses resulting from VR exergaming in a college-aged population. Methods: In this IRB-approved study, college-aged students were recruited to play a virtual reality exergame (Beat Saber) on the Oculus Quest 2 (Facebook, 2021) in either a control group (CG) or a training group (TG). Both groups consisted of subjects who were not habitual users of virtual reality. The CG played VR one time per week for three weeks, and the TG played 150 min/week for three weeks. Each group played the same nine Beat Saber songs, in a randomized order, during 30-minute sessions. Song difficulty was increased during play based on song performance. Subjects completed pre- and posttests at which the following were collected: • Beat Saber game metrics: song level played, song score, number of beats completed per song, and accuracy (beats completed/total beats) • Physiological data: heart rate (max and avg.), active calories • Demographics. Results: A total of 20 subjects completed the study; nine in the CG (3 males, 6 females) and 11 (5 males, 6 females) in the TG. • Beat Saber song metrics: The TG improved from normal/hard difficulty to hard/expert; the CG stayed at normal/hard. At the pretest there was no difference in game accuracy between groups; at the posttest, however, the CG had a higher accuracy. • Physiological data (Table 1): Average heart rates were similar between the TG and CG at both the pre- and posttest; however, the TG expended more total calories. Discussion: Due to the lack of peer-reviewed literature on VR exergaming using Beat Saber, the results of this study cannot be directly compared. However, they can be compared with the established trends for traditional exercise. In traditional exercise, an increase in training volume equates to increased efficiency at the activity. The TG should naturally increase in difficulty at a faster rate than the CG because they played 150 minutes per week. Heart rate and caloric responses also increase during traditional exercise as load increases (i.e., speed or resistance); the TG reported an increase in total calories due to a higher difficulty of play. The decrease in song accuracy in the TG can be explained by the increased difficulty of play. Conclusion: VR exergaming is comparable to traditional exercise for loads within 50-70% of maximum heart rate. The ability to use VR for health could motivate individuals who do not engage in traditional exercise. In addition, individuals in health professions can and should promote VR exergaming as a viable way to increase physical activity and improve health in their clients/patients.

Keywords: virtual reality, exergaming, health, heart rate, wellness

Procedia PDF Downloads 177
5342 Increasing the Resilience of Cyber-Physical Systems in Smart Grid Environments Using Dynamic Cells

Authors: Andrea Tundis, Carlos García Cordero, Rolf Egert, Alfredo Garro, Max Mühlhäuser

Abstract:

Resilience is an important system property that relies on the ability of a system to automatically recover from a degraded state and continue providing its services. Resilient systems have the means of detecting faults and failures, with the added capability of automatically restoring normal operation. Mastering resilience in the domain of cyber-physical systems is challenging due to the interdependence of hybrid hardware and software components, along with physical limitations, laws, regulations, and standards, among others. To overcome these challenges, this paper presents a modeling approach, based on the concept of Dynamic Cells, tailored to the management of smart grids. Additionally, a heuristic algorithm that works on top of the proposed modeling approach to find resilient configurations has been defined and implemented. More specifically, the model supports a flexible representation of smart grids, and the algorithm is able to manage, at different abstraction levels, the resource consumption of individual grid elements in the presence of failures and faults. Finally, the proposal is evaluated in a test scenario that shows the effectiveness of the approach when dealing with complex scenarios where adequate solutions are difficult to find.

Keywords: cyber-physical systems, energy management, optimization, smart grids, self-healing, resilience, security

Procedia PDF Downloads 321
5341 Composite Approach to Extremism and Terrorism Web Content Classification

Authors: Kolade Olawande Owoeye, George Weir

Abstract:

Terrorism and extremism activities on the internet are becoming among the most significant threats to national security because of their potential dangers. In response to this challenge, law enforcement and security authorities are actively implementing comprehensive measures to counter the use of the internet for terrorism. These measures require intelligence gathering via the internet, including real-time monitoring of websites potentially used by extremist groups for recruitment and information dissemination, among other operations. However, with billions of active webpages, real-time monitoring of all of them becomes almost impossible. To narrow down the search domain, efficient webpage classification techniques are needed. This research proposes a new approach, the SentiPosit-based method, which combines features of the Posit-based method and the SentiStrength-based method for the classification of terrorism and extremism webpages. The experiment was carried out on 7500 webpages obtained through the TENE web crawler by the International Cyber Crime Research Centre (ICCRC). The webpages were manually grouped into three classes, ‘pro-extremist’, ‘anti-extremist’, and ‘neutral’, with 2500 webpages in each category. A supervised learning algorithm was then applied to the classified dataset in order to build the model. The results obtained were compared with an existing classification method in terms of prediction accuracy and runtime. It was observed that our proposed hybrid approach produced better classification accuracy than existing approaches within a reasonable runtime.

Keywords: sentiposit, classification, extremism, terrorism

Procedia PDF Downloads 264
5340 Analysis of Structural Modeling on Digital English Learning Strategy Use

Authors: Gyoomi Kim, Jiyoung Bae

Abstract:

The purpose of this study was to propose a framework that verifies the structural relationships among students' use of digital English learning strategies (DELS), affective domains, and individual variables. The study developed a hypothetical model based on previous studies of language learning strategy use as well as digital language learning. The participants were 720 Korean high school students and 430 university students. The instrument was a self-response questionnaire containing 70 items based on Oxford's Strategy Inventory for Language Learning (SILL) as well as previous studies of language learning strategies in digital learning environments, designed to measure DELS and affective domains. The collected data were analyzed through structural equation modeling (SEM). This study used quantitative data analysis procedures: exploratory factor analysis (EFA) and confirmatory factor analysis (CFA). First, the EFA was conducted to verify the hypothetical model; the factor analysis was conducted first to identify the underlying relationships between the measured variables of DELS and the affective domain. The hypothetical model was established with six indicators of learning strategies (memory, cognitive, compensation, metacognitive, affective, and social strategies) under the latent variable of DELS use. In addition, the model included four indicators (self-confidence, interest, self-regulation, and attitude toward digital learning) under the latent variable of learners' affective domain. Second, the CFA was used to determine the suitability of the data and research models, so all data from the present study were used to assess model fit. Lastly, the model also included individual learner factors as covariates; the five constructs selected were learners' gender, level of English proficiency, duration of English learning, period of using digital devices, and previous experience of digital English learning. The results of the SEM analysis supported a theoretical model showing the structural relationships between Korean students' use of DELS and their affective domains. The results of this study therefore help ESL/EFL teachers understand how learners use and develop appropriate learning strategies in digital learning contexts. Pedagogical implications and suggestions for further study are also presented.

Keywords: Digital English Learning Strategy, DELS, individual variables, learners' affective domains, Structural Equation Modeling, SEM

Procedia PDF Downloads 116
5339 Multi-Temporal Mapping of Built-up Areas Using Daytime and Nighttime Satellite Images Based on Google Earth Engine Platform

Authors: S. Hutasavi, D. Chen

Abstract:

The built-up area is a significant proxy for measuring regional economic growth and reflects the Gross Provincial Product (GPP). However, an up-to-date and reliable database of built-up areas is not always available, especially in developing countries. Cloud-based geospatial analysis platforms such as Google Earth Engine (GEE) provide the accessibility and computational power for those countries to generate built-up data. Therefore, this study aims to extract the built-up areas in the Eastern Economic Corridor (EEC), Thailand, using daytime and nighttime satellite imagery on the GEE platform. Normalized indices were generated from the Landsat 8 surface reflectance dataset, including the Normalized Difference Built-up Index (NDBI), the Built-up Index (BUI), and the Modified Built-up Index (MBUI), and applied to identify built-up areas in the EEC. The results show that MBUI performs better than BUI and NDBI, with the highest accuracy of 0.85 and a Kappa of 0.82. Moreover, the overall classification accuracy improved from 79% to 90%, and the error in total built-up area decreased from 29% to 0.7%, after adding nighttime light data from the Visible Infrared Imaging Radiometer Suite (VIIRS) Day/Night Band (DNB). These results suggest that MBUI with nighttime light imagery is appropriate for built-up area extraction and can be utilized for further study of the socioeconomic impacts of regional development policy over the EEC region.
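
For reference, the NDBI step translates to a few lines in the GEE Python API; the sketch below is generic (the region, dates, and fixed threshold are placeholders), and the study's BUI/MBUI variants and adaptive thresholding are not reproduced here.

```python
import ee
ee.Initialize()   # assumes an authenticated Earth Engine account

# median Landsat 8 surface-reflectance composite; date filter is a placeholder
image = (ee.ImageCollection("LANDSAT/LC08/C02/T1_L2")
         .filterDate("2020-01-01", "2020-12-31")
         .median())

# NDBI = (SWIR1 - NIR) / (SWIR1 + NIR); SR_B6 is SWIR1, SR_B5 is NIR on Landsat 8
ndbi = image.normalizedDifference(["SR_B6", "SR_B5"]).rename("NDBI")
built_up = ndbi.gt(0.0)   # fixed threshold here; the study used adaptive thresholding
```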

Keywords: built-up area extraction, google earth engine, adaptive thresholding method, rapid mapping

Procedia PDF Downloads 116
5338 A Dataset of Program Educational Objectives Mapped to ABET Outcomes: Data Cleansing, Exploratory Data Analysis and Modeling

Authors: Addin Osman, Anwar Ali Yahya, Mohammed Basit Kamal

Abstract:

Datasets or collections are becoming important assets in themselves, and they can now be accepted as a primary intellectual output of research. The quality and usefulness of a dataset depend mainly on the context under which it has been collected, processed, analyzed, validated, and interpreted. This paper presents a collection of program educational objectives mapped to student outcomes, collected from self-study reports prepared by 32 engineering programs accredited by ABET. The manual mapping (classification) of such data is a notoriously tedious, time-consuming process that, in addition, requires experts in the area, who are often unavailable. The operational settings under which the collection was produced are described. The collection has been cleansed and preprocessed, some features have been selected, and preliminary exploratory data analysis has been performed to illustrate the properties and usefulness of the collection. Finally, the collection has been benchmarked using nine of the most widely used supervised multi-label classification techniques (Binary Relevance, Label Powerset, Classifier Chains, Pruned Sets, Random k-label sets, Ensemble of Classifier Chains, Ensemble of Pruned Sets, Multi-Label k-Nearest Neighbors, and Back-Propagation Multi-Label Learning). The techniques were compared to each other using well-known measurements: accuracy, Hamming loss, micro-F, and macro-F. The Ensemble of Classifier Chains and Ensemble of Pruned Sets achieved encouraging performance compared to the other multi-label classification methods tested, while the Classifier Chains method showed the worst performance. To recap, the benchmark achieved promising results by utilizing the preliminary exploratory data analysis performed on the collection, proposing new trends for research, and providing a baseline for future studies.
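
As one example of the benchmarked techniques, a Binary Relevance run with scikit-multilearn might be sketched as follows; the synthetic features and labels stand in for the collection, and the other listed methods follow the same fit/predict pattern.

```python
import numpy as np
from skmultilearn.problem_transform import BinaryRelevance
from sklearn.naive_bayes import GaussianNB

rng = np.random.default_rng(0)
X = rng.random((40, 10))                        # document feature vectors (synthetic)
Y = (rng.random((40, 5)) > 0.7).astype(int)     # 5 outcome labels per objective (synthetic)

clf = BinaryRelevance(classifier=GaussianNB())  # one binary classifier per label
clf.fit(X[:30], Y[:30])
print(clf.predict(X[30:]).toarray())            # predicted label matrix
```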

Keywords: ABET, accreditation, benchmark collection, machine learning, program educational objectives, student outcomes, supervised multi-class classification, text mining

Procedia PDF Downloads 162
5337 Impact of Welding Distortion on the Design of Fabricated T-Girders Using Finite Element Modeling

Authors: Ahmed Hammad, Yehia Abdel-Nasser, Mohamed Shamma

Abstract:

The main configuration of ship construction consists of standard and fabricated stiffening members, such as the fabricated T-sections commonly used in shipbuilding. During the welding process, non-uniform heating and rapid cooling inevitably produce out-of-plane distortion and welding-induced residual stresses. Because of these imperfections, fabricated structural members may not carry their design load, and removing the imperfections requires extra man-hours. In the present work, controlling these imperfections has been investigated at both the design and fabrication stages. A typical fabricated T-girder welded on both sides is selected to investigate these imperfections. A numerical simulation based on finite element (FE) modeling is used to investigate the effect of parameters of the selected T-girder, such as its geometrical properties and the welding sequence, on the magnitude of the welding imperfections. The FE results were compared with results from an experimental model of a double-side fillet weld. The present work concludes that, first, at the design stage, the optimum geometry of the fabricated T-girder can be determined on the basis of minimum steel weight and out-of-plane distortion; and second, at the fabrication stage, the best welding sequence can be determined on the basis of minimum out-of-plane welding distortion.
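
Since the optimum geometry is judged against steel weight and strength (the keywords list the section modulus), a minimal sketch of the elastic section modulus of a flange-plus-web T-section is given below. The formula is the standard parallel-axis calculation; the dimensions in the example call are illustrative and not those of the studied girder.

# Minimal elastic section modulus sketch for a fabricated T-section (flange + web).
def t_section_modulus(bf, tf, hw, tw):
    """bf, tf: flange width/thickness; hw, tw: web height/thickness (all in mm)."""
    a_f, a_w = bf * tf, hw * tw
    # Centroids measured from the bottom of the web (flange sits on top of the web)
    y_f = hw + tf / 2.0
    y_w = hw / 2.0
    y_bar = (a_f * y_f + a_w * y_w) / (a_f + a_w)
    # Second moment of area about the section centroid (parallel-axis theorem)
    i = (bf * tf**3) / 12.0 + a_f * (y_f - y_bar)**2 \
        + (tw * hw**3) / 12.0 + a_w * (y_w - y_bar)**2
    # Distance from the centroid to the extreme fibre
    c = max(y_bar, hw + tf - y_bar)
    return i / c  # section modulus in mm^3

print(t_section_modulus(bf=150.0, tf=12.0, hw=300.0, tw=8.0))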

Keywords: fabricated T-girder, FEM, out-of-plane distortion, section modulus, welding residual stresses

Procedia PDF Downloads 114
5336 Competitive Adsorption of Heavy Metals onto Natural and Activated Clay: Equilibrium, Kinetics and Modeling

Authors: L. Khalfa, M. Bagane, M. L. Cervera, S. Najjar

Abstract:

The aim of this work is to present a low-cost adsorbent for removing toxic heavy metals from aqueous solutions. We therefore investigate the efficiency of natural clay minerals collected from southern Tunisia, and of their modified form obtained by sulfuric acid treatment, in removing the toxic metal ions Zn(II) and Pb(II) from synthetic wastewater. The results indicate that metal uptake is pH-dependent, with maximum removal at pH 6. Adsorption equilibrium is reached rapidly, within 90 min for both metal ions. The kinetics results show that the pseudo-second-order model describes the adsorption and that intraparticle diffusion is the rate-limiting step. Treating the natural clay with sulfuric acid creates more active sites and increases the surface area, which increased the adsorbed quantities of lead and zinc in both single and binary systems. The competitive adsorption study showed that lead uptake was inhibited in the presence of 10 mg/L of zinc, indicating an antagonistic binary adsorption mechanism. These results reveal that clay is an effective natural material for removing lead and zinc, in both single and binary systems, from aqueous solution.
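
As an illustration of the kinetic analysis, the sketch below fits the pseudo-second-order model, q(t) = qe^2 k2 t / (1 + qe k2 t), to uptake data with SciPy. The data points are invented purely to make the example runnable and do not come from the study.

# Minimal pseudo-second-order kinetic fit (illustrative data, not the paper's).
import numpy as np
from scipy.optimize import curve_fit

def pso(t, qe, k2):
    # Integrated pseudo-second-order form: q(t) = qe^2 * k2 * t / (1 + qe * k2 * t)
    return (qe**2 * k2 * t) / (1.0 + qe * k2 * t)

t = np.array([5, 10, 20, 30, 45, 60, 90, 120], dtype=float)   # contact time, min
q = np.array([4.1, 6.5, 8.9, 10.2, 11.3, 11.9, 12.4, 12.5])   # uptake, mg/g (invented)

(qe, k2), _ = curve_fit(pso, t, q, p0=[q.max(), 0.01])
print(f"qe = {qe:.2f} mg/g, k2 = {k2:.4f} g/(mg*min)")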

Keywords: heavy metal, activated clay, kinetic study, competitive adsorption, modeling

Procedia PDF Downloads 214
5335 Finite Element Modeling of Friction Stir Welding of Dissimilar Alloys

Authors: Fadi Al-Badour, Nesar Merah, Abdelrahman Shuaib, Abdelaziz Bazoune

Abstract:

In the current work, a Coupled Eulerian-Lagrangian (CEL) model is developed to simulate the friction stir welding (FSW) of dissimilar aluminum alloys (Al 6061-T6 with Al 5083-O). The model predicts volumetric defects, material flow, developed temperatures, and stresses, in addition to tool reaction loads. The welding phase is simulated using a control-volume approach, with the welding speed defined as inflow and outflow over the Eulerian domain boundaries. Only material softening due to inelastic heat generation is considered, and the material behavior is assumed to obey the Johnson-Cook model. The model was validated against published experimentally measured temperatures at similar welding conditions and by qualitative comparison with dissimilar weld microstructures. The FE results showed that most of the developed temperatures remained below the melting point and that the bulk of the material deformed in the solid state. The temperature gradient on the Al 6061-T6 side was found to be smaller than that on the Al 5083-O side. Moving Al 6061-T6 from the retreating (Ret.) side to the advancing (Adv.) side decreased the maximum process temperature and strain rate, which could be due to the higher resistance to flow of Al 6061-T6 compared with Al 5083-O.
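
For reference, the Johnson-Cook flow stress that such models evaluate at each material point is sigma = (A + B*eps^n) * (1 + C*ln(epsdot/epsdot0)) * (1 - T*^m), with the homologous temperature T* = (T - Tref) / (Tmelt - Tref). The sketch below evaluates it directly; the Al 6061-T6 constants are commonly quoted literature values and are assumed here for illustration only, not taken from this paper.

# Minimal Johnson-Cook flow stress sketch (constants are assumed literature values).
import math

def johnson_cook(eps, epsdot, T, A=324e6, B=114e6, n=0.42,
                 C=0.002, m=1.34, epsdot0=1.0, Tref=293.0, Tmelt=925.0):
    """Flow stress (Pa) at plastic strain eps, strain rate epsdot (1/s), temperature T (K)."""
    t_star = (T - Tref) / (Tmelt - Tref)
    return (A + B * eps**n) * (1.0 + C * math.log(epsdot / epsdot0)) * (1.0 - t_star**m)

# Flow stress at 10% plastic strain, strain rate 100 1/s, 700 K
print(johnson_cook(eps=0.10, epsdot=100.0, T=700.0))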

Keywords: friction stir welding, dissimilar metals, finite element modeling, coupled Eulerian-Lagrangian analysis

Procedia PDF Downloads 322
5334 The Impact of Temperature on the Threshold Capillary Pressure of Fine-Grained Shales

Authors: Talal Al-Bazali, S. Mohammad

Abstract:

The threshold capillary pressure of shale caprocks is an important parameter in CO₂ storage modeling. A correct estimate of the threshold capillary pressure is not only essential for CO₂ storage modeling but also important for assessing the overall economic and environmental impact of the design process. A standard step-by-step approach is used to measure the threshold capillary pressure between shale and non-wetting fluids at different temperatures. The objective of this work is to assess the impact of high temperature on the threshold capillary pressure of four different shales as they interacted with four different oil-based muds and with air, CO₂, N₂, and methane. This study shows that the threshold capillary pressure between shale and a non-wetting fluid is strongly affected by temperature. An empirical correlation for the temperature dependence of the threshold capillary pressure of these shales in contact with oil-based muds and gases has been developed. The correlation shows that the threshold capillary pressure decreases exponentially as temperature increases, and it contains an experimental constant (α) that may depend on the properties of the shale and the non-wetting fluid. The α factor was found to be higher for gases than for oil-based muds, which is consistent with intuition, since the interfacial tension of gases is higher than that of oil-based muds. The authors believe that threshold capillary pressures measured at ambient temperature are misleading and could be higher than those encountered at in-situ conditions; one must therefore correct for the impact of temperature when measuring the threshold capillary pressure of shale at ambient temperature.
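
A minimal sketch of the exponential form the abstract describes, P_th(T) = P_th(T0) * exp(-alpha * (T - T0)), fitted with SciPy, is given below. The functional form and the constant alpha follow the abstract; the reference temperature T0 and all numerical values are invented for illustration.

# Minimal fit of the exponential temperature correlation (illustrative data only).
import numpy as np
from scipy.optimize import curve_fit

def p_threshold(T, p0, alpha, T0=25.0):
    # P_th(T) = P_th(T0) * exp(-alpha * (T - T0)), T in deg C
    return p0 * np.exp(-alpha * (T - T0))

T = np.array([25.0, 50.0, 75.0, 100.0, 125.0])   # temperature, deg C
P = np.array([6.8, 5.1, 3.9, 2.9, 2.2])          # threshold pressure, MPa (invented)

(p0, alpha), _ = curve_fit(p_threshold, T, P, p0=[7.0, 0.01])
print(f"P_th at 25 C = {p0:.2f} MPa, alpha = {alpha:.4f} 1/degC")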

Keywords: capillary pressure, shale, temperature, threshold

Procedia PDF Downloads 363