Search results for: geometric and topological data models

28504 Regression Analysis in Estimating Stream-Flow and the Effect of Hierarchical Clustering Analysis: A Case Study in Euphrates-Tigris Basin

Authors: Goksel Ezgi Guzey, Bihrat Onoz

Abstract:

The scarcity of streamflow gauging stations and the increasing effects of global warming make the design of water management systems very difficult. This study is a significant contribution to assessing regional regression models for estimating streamflow. In this study, simulated meteorological data were related to the observed streamflow data from 1971 to 2020 for 33 stream gauging stations of the Euphrates-Tigris Basin. Ordinary least squares regression was used to predict flow for 2020-2100 with the simulated meteorological data. The CORDEX-EURO and CORDEX-MENA domains were used, with 0.11 and 0.22 grid resolutions respectively, to estimate climate conditions under certain climate scenarios. Twelve meteorological variables simulated by two regional climate models, RCA4 and RegCM4, were used as independent variables in the ordinary least squares regression, where the observed streamflow was the dependent variable. The variability of streamflow was then calculated with 5-6 meteorological variables and watershed characteristics such as area and height prior to the application of the regression analysis. Based on the regression analysis of the 31 stream gauging stations' data, the stations were subjected to a clustering analysis, which grouped them into two clusters in terms of their hydrometeorological properties. Two streamflow equations were found for the two clusters of stream gauging stations for every domain and every regional climate model, which increased the efficiency of streamflow estimation by 10-15% for all the models. This study underlines the importance of the homogeneity of a region in estimating streamflow, not only in terms of geographical location but also in terms of the meteorological characteristics of that region.
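
To make the workflow concrete, the following minimal Python sketch groups stations by hydrometeorological descriptors and fits one ordinary least squares equation per cluster. The libraries (scikit-learn, statsmodels), the column names, and the random data are illustrative assumptions, not the authors' actual dataset or code.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
from sklearn.cluster import AgglomerativeClustering
from sklearn.preprocessing import StandardScaler

# One row per gauging station: meteorological predictors, watershed descriptors and observed
# streamflow. All column names and values are placeholders, not the basin dataset.
rng = np.random.default_rng(0)
stations = pd.DataFrame(rng.random((31, 8)),
                        columns=["precip", "temp", "evap", "wind", "area", "height", "humidity", "flow"])

# Hierarchical clustering on standardized hydrometeorological descriptors (two clusters).
descriptors = StandardScaler().fit_transform(stations[["precip", "temp", "area", "height"]])
clusters = AgglomerativeClustering(n_clusters=2).fit_predict(descriptors)

# One ordinary least squares streamflow equation per cluster.
for c in (0, 1):
    sub = stations[clusters == c]
    X = sm.add_constant(sub[["precip", "temp", "evap", "area", "height"]])
    fit = sm.OLS(sub["flow"], X).fit()
    print(f"cluster {c}: n = {len(sub)}, R2 = {fit.rsquared:.2f}")
```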

Keywords: hydrology, streamflow estimation, climate change, hydrologic modeling, HBV, hydropower

Procedia PDF Downloads 124
28503 Numerical Analysis of the Coanda Effect on the Classical Interior Ejectors

Authors: Alexandru Dumitrache, Florin Frunzulica, Octavian Preotu

Abstract:

The problem of mitigating flow detachment near solid surfaces, which improves global aerodynamic performance by exploiting the Coanda effect on surfaces, has been addressed extensively in the literature since 1940. The research is being carried on and further developed using modern means of calculation and new experimental methods. In this paper, interest is shown in the detailed behavior of a classical interior ejector assisted by the Coanda effect, used in propulsion systems. For the numerical investigations, an implicit formulation of the RANS equations for axisymmetric flow with a shear stress transport k-ω (SST) turbulence model is used. The obtained numerical results emphasize the efficiency of the ejector, depending on the physical parameters of the flow and the geometric configuration. Furthermore, numerical investigations are carried out regarding the evolution of the Reynolds number when the jet is attached to the wall, considering three geometric configurations: sudden expansion, open cavity, and sudden expansion with a divergent section at the inlet. Therefore, further insight into complexities involving issues such as the variety of flow structures and the related bifurcations and flow instabilities is provided. Thus, the conditions and the limits within which one can benefit from the advantages of Coanda-type flows are determined.

Keywords: Coanda effect, Coanda ejector, CFD, stationary bifurcation, sudden expansion

Procedia PDF Downloads 209
28502 Copper Price Prediction Model for Various Economic Situations

Authors: Haidy S. Ghali, Engy Serag, A. Samer Ezeldin

Abstract:

Copper is an essential raw material used in the construction industry. During 2021 and the first half of 2022, the global market suffered from significant fluctuations in copper raw material prices due to the aftermath of both the COVID-19 pandemic and the Russia-Ukraine war, which exposed consumers to unexpected financial risk. To this end, this paper aims to develop two ANN-LSTM price prediction models, using Python, that can forecast the average monthly copper prices traded on the London Metal Exchange; the first model is a multivariate model that forecasts the copper price of the next month, and the second is a univariate model that predicts the copper prices of the upcoming three months. Historical data of average monthly London Metal Exchange copper prices are collected from January 2009 till July 2022, and potential external factors are identified and employed in the multivariate model. These factors fall under three main categories: energy prices and economic indicators of the three major copper-exporting countries, depending on data availability. Before developing the LSTM models, the collected external parameters are analyzed with respect to the copper prices using correlation and multicollinearity tests in R software; the parameters are then further screened to select those that influence the copper prices. The two LSTM models are then developed, and the dataset is divided into training, validation, and testing sets. The results show that the performance of the 3-month prediction model is better than that of the 1-month prediction model, but both models can still act as predicting tools for diverse economic situations.
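
The univariate three-month-ahead model could be sketched roughly as follows; the Keras layer sizes, the 12-month lookback window, and the placeholder price series are assumptions made for illustration and do not reproduce the authors' ANN-LSTM architecture.

```python
import numpy as np
from tensorflow import keras

def make_windows(series, lookback=12, horizon=3):
    # Slice a 1-D price series into (lookback -> horizon) supervised samples.
    X, y = [], []
    for i in range(len(series) - lookback - horizon + 1):
        X.append(series[i:i + lookback])
        y.append(series[i + lookback:i + lookback + horizon])
    return np.array(X)[..., None], np.array(y)

prices = np.random.rand(163)   # placeholder for 163 monthly LME copper prices (Jan 2009 - Jul 2022)
X, y = make_windows(prices)

model = keras.Sequential([
    keras.Input(shape=(12, 1)),
    keras.layers.LSTM(32),
    keras.layers.Dense(3),     # univariate model: the next three monthly prices
])
model.compile(optimizer="adam", loss="mse")
model.fit(X, y, epochs=10, validation_split=0.2, verbose=0)
print(model.predict(X[-1:]))
```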

Keywords: copper prices, prediction model, neural network, time series forecasting

Procedia PDF Downloads 108
28501 Multiscale Modeling of Damage in Textile Composites

Authors: Jaan-Willem Simon, Bertram Stier, Brett Bednarcyk, Evan Pineda, Stefanie Reese

Abstract:

Textile composites, in which the reinforcing fibers are woven or braided, have become very popular in numerous applications in the aerospace, automotive, and maritime industries. These textile composites are advantageous due to their ease of manufacture, damage tolerance, and relatively low cost. However, physics-based modeling of the mechanical behavior of textile composites is challenging. Compared to their unidirectional counterparts, textile composites introduce additional geometric complexities, which cause significant local stress and strain concentrations. Since these internal concentrations are primary drivers of nonlinearity, damage, and failure within textile composites, they must be taken into account in order for the models to be predictive. The macro-scale approach to modeling textile-reinforced composites treats the whole composite as an effective, homogenized material. This approach is very computationally efficient, but it cannot be considered predictive beyond the elastic regime because the complex microstructural geometry is not considered. Further, this approach can, at best, offer a phenomenological treatment of nonlinear deformation and failure. In contrast, the meso-scale approach to modeling textile composites explicitly considers the internal geometry of the reinforcing tows, and thus their interaction and the effects of their curved paths can be modeled. The tows are treated as effective (homogenized) materials, requiring the use of anisotropic material models to capture their behavior. Finally, the micro-scale approach goes one level lower, modeling the individual filaments that constitute the tows. This paper will compare meso- and micro-scale approaches to modeling the deformation, damage, and failure of textile-reinforced polymer matrix composites. For the meso-scale approach, the woven composite architecture will be modeled using the finite element method, and an anisotropic damage model for the tows will be employed to capture the local nonlinear behavior. For the micro-scale, two different models will be used, one based on the finite element method and the other making use of an embedded semi-analytical approach. The goal will be the comparison and evaluation of these approaches to modeling textile-reinforced composites in terms of accuracy, efficiency, and utility.

Keywords: multiscale modeling, continuum damage model, damage interaction, textile composites

Procedia PDF Downloads 350
28500 Disaggregation of the Daily Rainfall Dataset into Sub-Daily Resolution in the Temperate Oceanic Climate Region

Authors: Mohammad Bakhshi, Firas Al Janabi

Abstract:

High-resolution rain data are very important as input for hydrological models. Among the models for generating high-resolution rainfall data, temporal disaggregation was chosen for this study. The paper attempts to generate three different rainfall resolutions (4-hourly, hourly, and 10-minute) from daily data for an approximately 20-year record period. The process was carried out with the DiMoN tool, which is based on a random cascade model and the method of fragments. Differences between the observed and simulated rain datasets are evaluated with a variety of statistical and empirical methods: the Kolmogorov-Smirnov test (K-S), usual statistics, and exceedance probability. The tool worked well at preserving the daily rainfall values on wet days; however, the generated data are concentrated in shorter time periods and produce stronger storms. It is demonstrated that the difference between the generated and observed cumulative distribution function curves of the 4-hourly datasets passes the K-S test criteria, while for the hourly and 10-minute datasets the p-value should be employed to show that their differences are reasonable. The results are encouraging considering the overestimation of generated high-resolution rainfall data.
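
A minimal example of the K-S comparison between observed and disaggregated rainfall is shown below, using SciPy and placeholder gamma-distributed rainfall depths; the distributions and sample sizes are assumed for illustration and are not taken from the study.

```python
import numpy as np
from scipy.stats import ks_2samp

# Observed and disaggregated 4-hourly rainfall depths for wet intervals (placeholder samples).
rng = np.random.default_rng(0)
observed = rng.gamma(0.6, 2.0, 5000)
simulated = rng.gamma(0.55, 2.2, 5000)

stat, p_value = ks_2samp(observed, simulated)
print(f"K-S statistic = {stat:.3f}, p-value = {p_value:.4f}")
# A small statistic (large p-value) means the generated CDF stays close to the observed one.
```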

Keywords: DiMoN Tool, disaggregation, exceedance probability, Kolmogorov-Smirnov test, rainfall

Procedia PDF Downloads 197
28499 Linking Business Process Models and System Models Based on Business Process Modelling

Authors: Faisal A. Aburub

Abstract:

Organizations today need to invest in software in order to run their businesses, and to achieve the organizations' objectives, the software should be in line with the business processes. This research presents an approach for linking process models and system models. In particular, the new approach aims to synthesize a sequence diagram based on a role activity diagram (RAD) model. The approach includes four steps, namely: creating the business process model using RAD, identifying computerized activities, identifying entities in the sequence diagram, and identifying messages in the sequence diagram. The new approach has been validated using the process of student registration at the University of Petra as a case study. Further research is required to validate the new approach using different domains.

Keywords: business process modelling, system models, role activity diagrams, sequence diagrams

Procedia PDF Downloads 378
28498 Speed Breaker/Pothole Detection Using Hidden Markov Models: A Deep Learning Approach

Authors: Surajit Chakrabarty, Piyush Chauhan, Subhasis Panda, Sujoy Bhattacharya

Abstract:

A large proportion of roads in India are not well maintained as per the laid-down public safety guidelines, leading to loss of directional control and fatal accidents. We propose a technique to detect speed breakers and potholes using mobile sensor data captured from multiple vehicles and to provide a profile of the road. This would, in turn, help in monitoring roads and revolutionize digital maps. Incorporating randomness in the model formulation for the detection of speed breakers and potholes is crucial due to the substantial heterogeneity observed in data obtained using a mobile application from multiple vehicles driven by different drivers. This is accomplished with Hidden Markov Models, whose hidden state sequence is found for each time step given the observables sequence; these state sequences are then fed as input to an LSTM network with peephole connections. Precision scores of 0.96 and 0.63 are obtained for classifying bumps and potholes, respectively, a significant improvement over the machine-learning-based models. Further visualization of bumps/potholes is done by converting the time series to images using Markov Transition Fields, where a clear demarcation between bumps and potholes is observed.
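
A rough sketch of the HMM decoding step is given below, using the hmmlearn library and a synthetic acceleration signal; the number of hidden states and all data are illustrative assumptions, and the subsequent LSTM stage is only indicated in the comments.

```python
import numpy as np
from hmmlearn.hmm import GaussianHMM

# Vertical-acceleration samples from the mobile sensor for one trip (placeholder signal).
acc = np.random.default_rng(0).normal(0.0, 1.0, size=(2000, 1))

hmm = GaussianHMM(n_components=3, covariance_type="diag", n_iter=50, random_state=0)
hmm.fit(acc)
states = hmm.predict(acc)   # hidden state per time step (e.g. smooth road, bump, pothole)

# The decoded state sequence (e.g. one-hot encoded) is what would be fed to the LSTM classifier.
print(states[:20])
```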

Keywords: deep learning, hidden Markov model, pothole, speed breaker

Procedia PDF Downloads 140
28497 Development of a Turbulent Boundary Layer Wall-pressure Fluctuations Power Spectrum Model Using a Stepwise Regression Algorithm

Authors: Zachary Huffman, Joana Rocha

Abstract:

Wall-pressure fluctuations induced by the turbulent boundary layer (TBL) developed over aircraft are a significant source of aircraft cabin noise. Since the power spectral density (PSD) of these pressure fluctuations is directly correlated with the amount of sound radiated into the cabin, the development of accurate empirical models that predict the PSD has been an important ongoing research topic. The emitted sound can be represented by the pressure fluctuation term in the Reynolds-averaged Navier-Stokes (RANS) equations. Therefore, early TBL empirical models (including those from Lowson, Robertson, Chase, and Howe) were primarily derived by simplifying and solving the RANS equations for the pressure fluctuation and adding appropriate scales. Most subsequent models (including the Goody, Efimtsov, Laganelli, Smol'yakov, and Rackl and Weston models) were derived by making modifications to these early models or from physical principles. Overall, these models have had varying levels of accuracy; in general, they are most accurate under the specific Reynolds and Mach numbers they were developed for, while being less accurate under other flow conditions. Despite this, recent research into the possibility of using alternative methods for deriving the models has been rather limited. More recent studies have demonstrated that an artificial neural network model was more accurate than traditional models and could be applied more generally, but the accuracy of other machine learning techniques has not been explored. In the current study, an original model is derived using a stepwise regression algorithm in the statistical programming language R and TBL wall-pressure fluctuation PSD data gathered at the Carleton University wind tunnel. The theoretical advantage of a stepwise regression approach is that it automatically filters out redundant or uncorrelated input variables (through the process of feature selection), and it is computationally faster than machine learning. The main disadvantage is the potential risk of overfitting. The accuracy of the developed model is assessed by comparing it to independently sourced datasets.
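
Although the original model was built in R, the forward stepwise idea can be sketched in Python as follows; the p-value threshold, the placeholder flow parameters, and the synthetic data are assumptions for illustration only.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

def forward_stepwise(X, y, p_enter=0.05):
    """Greedy forward selection: add the predictor with the smallest p-value until none qualifies."""
    selected, remaining = [], list(X.columns)
    while remaining:
        pvals = {}
        for cand in remaining:
            fit = sm.OLS(y, sm.add_constant(X[selected + [cand]])).fit()
            pvals[cand] = fit.pvalues[cand]
        best = min(pvals, key=pvals.get)
        if pvals[best] >= p_enter:
            break
        selected.append(best)
        remaining.remove(best)
    return sm.OLS(y, sm.add_constant(X[selected])).fit(), selected

# Placeholder flow parameters (edge velocity, BL thickness, wall shear stress, dynamic pressure, frequency).
rng = np.random.default_rng(0)
X = pd.DataFrame(rng.random((500, 5)), columns=["Ue", "delta", "tau_w", "q", "omega"])
y = 2.0 * X["tau_w"] + 0.5 * X["omega"] + rng.normal(0, 0.1, 500)

fit, chosen = forward_stepwise(X, y)
print(chosen, round(fit.rsquared, 3))
```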

Keywords: aircraft noise, machine learning, power spectral density models, regression models, turbulent boundary layer wall-pressure fluctuations

Procedia PDF Downloads 133
28496 Geostatistical Models to Correct Salinity of Soils from Landsat Satellite Sensor: Application to the Oran Region, Algeria

Authors: Dehni Abdellatif, Lounis Mourad

Abstract:

The new approach of applied spatial geostatistics in materials science, agricultural accuracy, and agricultural statistics has permitted the management and monitoring of water and groundwater quality in relation to salt-affected soils. Previous experience with data acquisition and spatial-preparation studies on optical and multispectral data has facilitated the integration of correction models of electrical conductivity related to soil temperature (soil horizons). For tomographic purposes, this physical parameter has been extracted from a calibration of the thermal band (Landsat ETM+ band 6) with a radiometric correction. Our study area is the Oran region (north-western Algeria). Different spectral indices are determined, such as the salinity and sodicity index, the Combined Spectral Reflectance Index (CSRI), the Normalized Difference Vegetation Index (NDVI), emissivity, albedo, and the Sodium Adsorption Ratio (SAR). The approach of geostatistical modeling of electrical conductivity (salinity) appears to be a useful decision support system for estimating the corrected electrical resistivity related to the temperature of surface soils, according to conversion models that substitute the reference temperature of 25°C (at which the hydrochemical data are collected). The brightness temperatures extracted from the satellite reflectance (Landsat ETM+) are used in consistency models to estimate the electrical resistivity. The confusion that arises from the effects of salt stress and water stress is removed, followed by a seasonal application of the geostatistical analysis with Geographic Information System (GIS) techniques to investigate and monitor the variation of the electrical conductivity in the alluvial aquifer of Es-Sénia for the salt-affected soils.
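
For illustration, two of the indices mentioned above can be computed per pixel as in the following sketch; the band arrays are placeholders, and the salinity index shown is one common formulation rather than necessarily the exact one used in the study.

```python
import numpy as np

# Landsat ETM+ surface reflectance bands as arrays (placeholders): blue (band 1), red (band 3), NIR (band 4).
rng = np.random.default_rng(0)
blue, red, nir = rng.random((3, 100, 100)) * 0.5 + 0.05

ndvi = (nir - red) / (nir + red + 1e-9)   # Normalized Difference Vegetation Index
salinity_index = np.sqrt(blue * red)      # one common salinity index formulation

print(float(ndvi.mean()), float(salinity_index.mean()))
```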

Keywords: geostatistical modelling, landsat, brightness temperature, conductivity

Procedia PDF Downloads 436
28495 Colored Image Classification Using Quantum Convolutional Neural Networks Approach

Authors: Farina Riaz, Shahab Abdulla, Srinjoy Ganguly, Hajime Suzuki, Ravinesh C. Deo, Susan Hopkins

Abstract:

Recently, quantum machine learning has received significant attention. For various types of data, including text and images, numerous quantum machine learning (QML) models have been created and are being tested. Images are exceedingly complex data components that demand more processing power. Despite being mature, classical machine learning still has difficulties with big data applications. Furthermore, quantum technology has revolutionized how machine learning is thought of by employing quantum features to address optimization issues. Since quantum hardware is currently extremely noisy, it is not practicable to run machine learning algorithms on it without risking the production of inaccurate results. To discover the advantages of quantum versus classical approaches, this research has concentrated on colored image data. Deep learning classification models are currently being created on quantum platforms, but they are still at a very early stage. Black-and-white benchmark image datasets like MNIST and Fashion-MNIST have been used in recent research. MNIST and CIFAR-10 were compared for binary classification, but the comparison showed that MNIST performed more accurately than the colored CIFAR-10. This research will evaluate the performance of the QML algorithm on the colored benchmark dataset CIFAR-10 to advance QML's real-time applicability. However, deep learning classification models such as the Quantum Convolutional Neural Network (QCNN) have not yet been developed for colored images to determine how much better they are than classical approaches. Only a few models, such as quantum variational circuits, take colored images. The methodology adopted in this research is a hybrid approach using PennyLane as a simulator. To process the 10 classes of CIFAR-10, the image data were converted to greyscale and resized to 28 × 28 pixels, and 10,000 test and 50,000 training images were used. The objective of this work is to determine how much the quantum approach can outperform a classical approach for a comprehensive dataset of color images. After pre-processing 50,000 images on a classical computer, the QCNN model adopted a hybrid method and encoded the images into a quantum simulator for feature extraction using quantum gate rotations. The measurements were carried out on the classical computer after the rotations were applied. According to the results, the QCNN approach is ~12% more effective than traditional classical CNN approaches, and applying data augmentation may further increase the accuracy. This study has demonstrated that quantum machine and deep learning models can be relatively superior to classical machine learning approaches in terms of their processing speed and accuracy when used to perform classification on colored classes.
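
A minimal PennyLane sketch of the rotation-based encoding and measurement step is shown below; the four-qubit patch size, the gate layout, and the random weights are illustrative assumptions rather than the authors' QCNN circuit.

```python
import pennylane as qml
from pennylane import numpy as np

n_qubits = 4
dev = qml.device("default.qubit", wires=n_qubits)

@qml.qnode(dev)
def circuit(pixels, weights):
    # Angle-encode a small patch of normalized pixel values into qubit rotations.
    for i in range(n_qubits):
        qml.RY(np.pi * pixels[i], wires=i)
    # One trainable rotation layer followed by a chain of entangling gates.
    for i in range(n_qubits):
        qml.Rot(*weights[i], wires=i)
    for i in range(n_qubits - 1):
        qml.CNOT(wires=[i, i + 1])
    return [qml.expval(qml.PauliZ(i)) for i in range(n_qubits)]

weights = np.random.uniform(0, np.pi, (n_qubits, 3))
patch = np.array([0.1, 0.5, 0.9, 0.3])   # placeholder 2x2 patch of a greyscale CIFAR-10 image
print(circuit(patch, weights))
```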

Keywords: CIFAR-10, quantum convolutional neural networks, quantum deep learning, quantum machine learning

Procedia PDF Downloads 125
28494 Optimizing Nitrogen Fertilizer Application in Rice Cultivation: A Decision Model for Top and Ear Dressing Dosages

Authors: Ya-Li Tsai

Abstract:

Nitrogen is a vital element crucial for crop growth, significantly influencing crop yield. In rice cultivation, farmers often apply substantial nitrogen fertilizer to maximize yields. However, excessive nitrogen application increases the risk of lodging and pest infestation, leading to yield losses. Additionally, conventional flooded irrigation methods consume significant water resources, necessitating precision agriculture and intelligent water management systems. This study leveraged physiological data and field images captured by unmanned aerial vehicles, considering fertilizer treatment and irrigation as key factors. Statistical models incorporating rice physiological data, yield, and vegetation indices from the image data were developed. Missing physiological data were addressed using multiple imputation and regression methods, and regression models were established using principal component analysis and stepwise regression. Target nitrogen accumulation at key growth stages was identified to optimize fertilizer application, with the difference between actual and target nitrogen accumulation guiding recommendations for ear dressing dosage. Field experiments conducted in 2022 validated the recommended ear dressing dosage, demonstrating no significant difference in final yield compared to traditional fertilizer levels under alternate wetting and drying irrigation. These findings highlight the efficacy of applying recommended dosages based on fertilizer decision models, offering the potential for reduced fertilizer use while maintaining yield in rice cultivation.

Keywords: intelligent fertilizer management, nitrogen top and ear dressing fertilizer, rice, yield optimization

Procedia PDF Downloads 69
28493 Neighborhood Graph-Optimized Preserving Discriminant Analysis for Image Feature Extraction

Authors: Xiaoheng Tan, Xianfang Li, Tan Guo, Yuchuan Liu, Zhijun Yang, Hongye Li, Kai Fu, Yufang Wu, Heling Gong

Abstract:

The image data collected in reality often have high dimensions and contain noise and redundant information. Therefore, it is necessary to extract a compact feature expression of the original perceived image. In this process, effective use of prior knowledge, such as the data structure distribution and sample labels, is the key to enhancing image feature discrimination and robustness. Based on the above considerations, this paper proposes a local preserving discriminant feature learning model based on graph optimization. The model has the following characteristics: (1) The locality preserving constraint can effectively excavate and preserve the local structural relationships between data. (2) The flexibility of graph learning can be improved by constructing a new local geometric structure graph using label information and a nearest-neighbor threshold. (3) The L₂,₁ norm is used to redefine LDA, and a diagonal matrix is introduced as the scale factor of LDA to select samples, which improves the robustness of feature learning. The validity and robustness of the proposed algorithm are verified by experiments on two public image datasets.
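
As background for the locality preserving constraint in characteristic (1), the following sketch implements the classical locality preserving projection on a k-nearest-neighbour heat-kernel graph; the kernel width, neighbourhood size, and ridge term are assumptions, and the proposed graph-optimized discriminant model itself is not reproduced here.

```python
import numpy as np
from scipy.linalg import eigh
from sklearn.neighbors import kneighbors_graph

def lpp(X, n_components=2, k=5, t=1.0):
    # Heat-kernel weights on a k-nearest-neighbour graph (the locality preserving constraint).
    W = kneighbors_graph(X, k, mode="distance", include_self=False).toarray()
    W = np.exp(-W ** 2 / t) * (W > 0)
    W = np.maximum(W, W.T)                       # symmetrise the neighbourhood graph
    D = np.diag(W.sum(axis=1))
    L = D - W                                    # graph Laplacian
    A = X.T @ L @ X
    B = X.T @ D @ X + 1e-6 * np.eye(X.shape[1])  # small ridge keeps B positive definite
    # Directions with the smallest generalized eigenvalues preserve local structure best.
    _, vecs = eigh(A, B)
    return vecs[:, :n_components]

X = np.random.default_rng(0).random((200, 30))   # placeholder image feature vectors
P = lpp(X)
print((X @ P).shape)                             # low-dimensional embedding, here (200, 2)
```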

Keywords: feature extraction, graph optimization, local preserving projection, linear discriminant analysis, L₂,₁ norm

Procedia PDF Downloads 146
28492 Generative Design of Acoustical Diffuser and Absorber Elements Using Large-Scale Additive Manufacturing

Authors: Saqib Aziz, Brad Alexander, Christoph Gengnagel, Stefan Weinzierl

Abstract:

This paper explores a generative design, simulation, and optimization workflow for the integration of acoustical diffuser and/or absorber geometry with embedded coupled Helmholtz resonators for full-scale 3D-printed building components. Large-scale additive manufacturing, in conjunction with algorithmic CAD design tools, enables a vast amount of control when creating geometry. This is advantageous regarding the increasing demands of comfort standards for indoor spaces and the use of more resourceful and sustainable construction methods and materials. The presented methodology highlights these new technological advancements and offers a multimodal and integrative design solution with the potential for immediate application in the AEC industry. In principle, the methodology can be applied to a wide range of structural elements that can be manufactured by additive manufacturing processes. The current paper focuses on a case study of an application for a biaxial load-bearing beam grillage made of reinforced concrete, which allows for a variety of applications through the combination of additively prefabricated semi-finished parts and in-situ concrete supplementation. The semi-prefabricated parts, or formwork bodies, form the basic framework of the supporting structure and at the same time have acoustic absorption and diffusion properties that are precisely acoustically programmed for the space underneath the structure. To this end, a hybrid validation strategy is being explored using a digital and cross-platform simulation environment, verified with physical prototyping. The iterative workflow starts with the generation of a parametric design model for the acoustical geometry using the algorithmic visual scripting editor Grasshopper3D inside the building information modeling (BIM) software Revit. Various geometric attributes (i.e., bottleneck and cavity dimensions) of the resonator are parameterized and fed to a numerical optimization algorithm, which can modify the geometry with the goal of increasing absorption at resonance and increasing the bandwidth of the effective absorption range. Using Rhino.Inside and LiveLink for Revit, the generative model was imported directly into the multiphysics simulation environment COMSOL. The geometry was further modified and prepared for simulation in a semi-automated process. The incident and scattered pressure fields were simulated, from which the surface normal absorption coefficients were calculated. This reciprocal process was repeated to further optimize the geometric parameters. Subsequently, the numerical models were compared to a set of 3D-concrete-printed physical twin models, which were tested in a 0.25 m x 0.25 m impedance tube. The empirical results served to improve the starting parameter settings of the initial numerical model. The geometry resulting from the numerical optimization was finally returned to Grasshopper for further implementation in an interdisciplinary study.
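
As a reference for the resonator parameterization, the classical Helmholtz resonance estimate can be computed as in the sketch below; the end-correction factor and the example dimensions are assumptions, and the actual study relies on COMSOL simulation rather than this closed-form formula.

```python
import math

def helmholtz_resonance(neck_area_m2, neck_length_m, cavity_volume_m3, c=343.0):
    # f0 = c / (2*pi) * sqrt(S / (V * L_eff)), with an approximate end correction on the neck length.
    neck_radius = math.sqrt(neck_area_m2 / math.pi)
    l_eff = neck_length_m + 1.7 * neck_radius   # assumed flanged-end correction factor
    return c / (2.0 * math.pi) * math.sqrt(neck_area_m2 / (cavity_volume_m3 * l_eff))

# Example bottleneck and cavity dimensions (hypothetical): 5 cm2 neck area, 2 cm neck, 1 litre cavity.
print(round(helmholtz_resonance(5e-4, 0.02, 1e-3), 1), "Hz")
```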

Keywords: acoustical design, additive manufacturing, computational design, multimodal optimization

Procedia PDF Downloads 155
28491 Frailty Models for Modeling Heterogeneity: Simulation Study and Application to Quebec Pension Plan

Authors: Souad Romdhane, Lotfi Belkacem

Abstract:

When referring to actuarial analysis of lifetimes, only models accounting for observable risk factors have typically been developed. Within this context, the Cox proportional hazards model (CPH model) is commonly used to assess the effects of observable covariates, such as gender, age, and smoking habits, on the hazard rates. These covariates may fail to fully account for the true lifetime interval. This may be due to the existence of another random variable (frailty) that is still being ignored. The aim of this paper is to examine the shared frailty issue in the Cox proportional hazards model by including two different parametric forms of frailty in the hazard function. Four estimation methods are used to fit them. The performance of the parameter estimates is assessed and compared between the classical Cox model and these frailty models through a real-life data set from the Quebec Pension Plan and then using a more general simulation study. This performance is investigated in terms of the bias of the point estimates and their empirical standard errors in both the fixed and random effect parts. Both the simulation and the real dataset studies showed differences between the classical Cox model and the shared frailty model.
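
The effect of ignoring frailty can be illustrated with a small simulation along the lines of the study's design; the gamma frailty form, the exponential baseline hazard, and all parameter values below are assumptions chosen for this sketch, and lifelines is used only as a convenient Cox fitter.

```python
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(1)
n, beta, theta = 5000, 0.7, 0.5            # theta is the frailty variance
x = rng.binomial(1, 0.5, n)                # observed covariate (e.g. gender)
z = rng.gamma(1.0 / theta, theta, n)       # gamma frailty with mean 1 and variance theta
hazard = z * 0.01 * np.exp(beta * x)       # exponential baseline hazard of 0.01
t = rng.exponential(1.0 / hazard)
c = rng.exponential(120.0, n)              # independent censoring times
df = pd.DataFrame({"T": np.minimum(t, c), "E": (t <= c).astype(int), "x": x})

cph = CoxPHFitter().fit(df, duration_col="T", event_col="E")
print(cph.params_)   # ignoring the frailty attenuates the estimate of beta toward zero
```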

Keywords: life insurance-pension plan, survival analysis, risk factors, cox proportional hazards model, multivariate failure-time data, shared frailty, simulations study

Procedia PDF Downloads 356
28490 Compromising Relevance for Elegance: A Danger of Dominant Growth Models for Backward Economies

Authors: Givi Kupatadze

Abstract:

Backward economies are facing the challenge of achieving a sustainably high economic growth rate. Dominant growth models represent a roadmap in framing economic development strategy. This paper examines the relevance of the dominant growth models for backward economies. The Cobb-Douglas production function, the Harrod-Domar model of economic growth, the Solow growth model, and the general formula of gross domestic product are examined to undertake a comprehensive study of the dominant growth models. A deductive research method allows us to uncover major weaknesses of the dominant growth models and to come up with practical implications for economic development strategy. The key finding of the paper shows, contrary to what used to be taught by textbooks of economics, that the constant returns to scale property of the dominant growth models is a mere coincidence, and its generalization over space and time can be regarded as one of the most unfortunate mistakes in the whole field of political economy. The major suggestion of the paper for backward economies is that understanding and considering a taxonomy of economic activities based on increasing and diminishing returns to scale represents a cornerstone of a successful economic development strategy.
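
The constant-returns-to-scale property discussed above can be checked numerically with a short sketch of the Cobb-Douglas function; the parameter values are arbitrary and serve only to show that doubling both inputs doubles output exactly when the exponents sum to one.

```python
def cobb_douglas(K, L, A=1.0, alpha=0.3, beta=0.7):
    return A * K ** alpha * L ** beta

# Doubling both inputs doubles output only when alpha + beta == 1 (constant returns to scale);
# otherwise the same scaling yields increasing or diminishing returns.
for a, b in [(0.3, 0.7), (0.4, 0.7), (0.3, 0.6)]:
    ratio = cobb_douglas(20, 40, alpha=a, beta=b) / cobb_douglas(10, 20, alpha=a, beta=b)
    print(f"alpha + beta = {a + b:.1f}  ->  output scales by {ratio:.3f}")
```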

Keywords: backward economies, constant returns to scale, dominant growth models, taxonomy of economic activities

Procedia PDF Downloads 368
28489 GeoWeb at the Service of Household Waste Collection in Urban Areas

Authors: Abdessalam Hijab, Eric Henry, Hafida Boulekbache

Abstract:

The complexity of the city makes sustainable management of the urban environment more difficult. Managers are required to make significant human and technical investments, particularly in household waste collection (the focus of our research). The aim of this communication is to propose a collaborative geographic multi-actor device (MGCD) based on the link between information and communication technologies (ICT) and GeoWeb tools in order to involve urban residents in household waste collection processes. Our method is based on a collaborative/motivational concept between the city and its residents. It is a geographic collaboration dedicated to the general public (citizens, residents, and any other participant), based on the real-time allocation and geographic location of topological, geographic, and multimedia data in the form of local geo-alerts (location-specific problems) related to household waste in an urban environment. This contribution allows us to understand the extent to which residents can assist and contribute to the development of household waste collection processes for a better-protected urban environment. This suggestion provides a good idea of how residents can contribute to the data bank for future uses. Moreover, it will contribute to the transformation of the population into smart inhabitants as an essential component of a smart city. The proposed model will be tested in the Lamkansa sampling district of Casablanca, Morocco.

Keywords: information and communication technologies, ICTs, GeoWeb, geo-collaboration, city, inhabitant, waste, collection, environment

Procedia PDF Downloads 122
28488 Investigating the Performance of Machine Learning Models on PM2.5 Forecasts: A Case Study in the City of Thessaloniki

Authors: Alexandros Pournaras, Anastasia Papadopoulou, Serafim Kontos, Anastasios Karakostas

Abstract:

The air quality of modern cities is an important concern, as poor air quality contributes to human health and environmental issues. Reliable air quality forecasting has thus gained scientific and governmental attention as an essential tool that enables authorities to take proactive measures for public safety. In this study, the potential of Machine Learning (ML) models to forecast PM2.5 at the local scale is investigated in the city of Thessaloniki, the second largest city in Greece, which has been struggling with the persistent issue of air pollution. ML models, with a proven ability to address time-series forecasting, are employed to predict the PM2.5 concentrations and the respective Air Quality Index (AQI) five days ahead by learning from daily historical air quality and meteorological data from 2014 to 2016, gathered from two stations with different land use characteristics in the urban fabric of Thessaloniki. The performance of the ML models on PM2.5 concentrations is evaluated with common statistical measures, such as R squared (r²) and Root Mean Squared Error (RMSE), utilizing a portion of the stations' measurements as the test set. A multi-categorical evaluation is utilized for the assessment of their performance on the respective AQIs. Several conclusions were drawn from the experiments conducted. Experimenting with the MLs' configuration revealed a moderate effect of the various parameters and training schemas on the models' predictions. All of these models were found to produce satisfactory results on PM2.5 concentrations. In addition, their application to untrained stations showed that these models can perform well, indicating generalized behavior. Moreover, their performance on the AQI was even better, showing that the MLs can be used as predictors for the AQI, which is the direct information provided to the general public.
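
The evaluation step can be sketched as follows; the random forest choice, the eight placeholder features, and the synthetic data are assumptions for illustration and do not correspond to the specific ML models or station data used in the study.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import mean_squared_error, r2_score
from sklearn.model_selection import train_test_split

# X: daily meteorological and lagged air-quality features, y: PM2.5 five days ahead (placeholders).
rng = np.random.default_rng(0)
X, y = rng.random((1000, 8)), rng.random(1000)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, shuffle=False)

model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X_tr, y_tr)
pred = model.predict(X_te)
print("r2  :", round(r2_score(y_te, pred), 3))
print("RMSE:", round(float(np.sqrt(mean_squared_error(y_te, pred))), 3))
```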

Keywords: air quality, AQ forecasting, AQI, machine learning, PM2.5

Procedia PDF Downloads 73
28487 Artificial Intelligence Methods in Estimating the Minimum Miscibility Pressure Required for Gas Flooding

Authors: Emad A. Mohammed

Abstract:

Utilizing the capabilities of data mining and artificial intelligence in the prediction of the minimum miscibility pressure (MMP) required for multi-contact miscible (MCM) displacement of reservoir petroleum by hydrocarbon gas flooding, using fuzzy logic models and artificial neural network models, helps greatly in giving accurate results. The factors affecting the MMP, as established from the literature and from the dataset, are as follows: XC2-6: intermediate composition in the oil containing C2-6, CO2 and H2S, in mole %; XC1: amount of methane in the oil (%); T: temperature (°C); MwC7+: molecular weight of C7+ (g/mol); YC2+: mole percent of the C2+ composition in the injected gas (%); MwC2+: molecular weight of C2+ in the injected gas. Fuzzy logic and neural networks have been used widely in prediction and classification, with relatively high accuracy, in different fields of study. It is well known that a fuzzy inference system can handle uncertainty within the inputs, as in our case. The results of this work showed that our proposed models perform better, with higher performance indices, than other empirical correlations.
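
A minimal sketch of the neural-network branch is given below, with the six input factors ordered as listed above; the scikit-learn MLP, its layer sizes, and the random training data are assumptions and do not reproduce the authors' fuzzy logic or ANN models.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Columns follow the factors listed above: XC2-6, XC1, T, MwC7+, YC2+, MwC2+ (training data is hypothetical).
rng = np.random.default_rng(0)
X = rng.random((200, 6))
y = 10.0 + 30.0 * rng.random(200)   # placeholder MMP targets, e.g. in MPa

ann = make_pipeline(StandardScaler(),
                    MLPRegressor(hidden_layer_sizes=(16, 8), max_iter=5000, random_state=0))
ann.fit(X, y)
print(ann.predict(X[:3]))
```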

Keywords: MMP, gas flooding, artificial intelligence, correlation

Procedia PDF Downloads 139
28486 Factors Influencing Soil Organic Carbon Storage Estimation in Agricultural Soils: A Machine Learning Approach Using Remote Sensing Data Integration

Authors: O. Sunantha, S. Zhenfeng, S. Phattraporn, A. Zeeshan

Abstract:

The decline of soil organic carbon (SOC) in global agriculture is a critical issue requiring rapid and accurate estimation for informed policymaking. While it is recognized that SOC predictors vary significantly when derived from remote sensing data and environmental variables, identifying the specific parameters most suitable for accurately estimating SOC in diverse agricultural areas remains a challenge. This study utilizes remote sensing data to precisely estimate SOC and identify influential factors in diverse agricultural areas, such as paddy, corn, sugarcane, cassava, and perennial crops. Extreme gradient boosting (XGBoost), random forest (RF), and support vector regression (SVR) models are employed to analyze these factors' impact on SOC estimation. The results show that the key factors influencing SOC estimation include slope, vegetation indices (EVI), spectral reflectance indices (red index, red edge2), temperature, land use, and surface soil moisture, as indicated by their averaged importance scores across the XGBoost, RF, and SVR models. Therefore, using different machine learning algorithms for SOC estimation reveals varying influential factors from remote sensing data and environmental variables. This approach emphasizes feature selection, as different machine learning algorithms identify different key factors from remote sensing data and environmental variables for accurate SOC estimation.
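
The averaging of importance scores across the three learners could be sketched as follows; permutation importance is used here so that XGBoost, RF, and kernel SVR are scored on a common basis, which is an assumption about the procedure, and the SOC samples are placeholders.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.inspection import permutation_importance
from sklearn.svm import SVR
from xgboost import XGBRegressor

features = ["slope", "EVI", "red_index", "red_edge2", "temperature", "land_use", "soil_moisture"]
rng = np.random.default_rng(0)
X, y = rng.random((300, len(features))), rng.random(300)   # placeholder SOC samples

scores = []
for model in (XGBRegressor(n_estimators=200), RandomForestRegressor(n_estimators=200), SVR()):
    model.fit(X, y)
    # Permutation importance gives a comparable score for all three models, including kernel SVR.
    result = permutation_importance(model, X, y, n_repeats=10, random_state=0)
    scores.append(result.importances_mean)

for name, score in sorted(zip(features, np.mean(scores, axis=0)), key=lambda p: -p[1]):
    print(f"{name:13s} {score:.4f}")
```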

Keywords: factors influencing SOC estimation, remote sensing data, environmental variables, machine learning

Procedia PDF Downloads 25
28485 A Critical Discourse Analysis of Jamaican and Trinidadian News Articles about D/Deafness

Authors: Melissa Angus Baboun

Abstract:

Utilizing a Critical Discourse Analysis (CDA) methodology and a theoretical framework based on disability studies, this study examined how Jamaican and Trinidadian newspapers discussed issues relating to the Deaf community. The term deaf was entered into the search engine tool of the online websites of the Jamaica Observer and the Trinidad & Tobago Guardian. All 27 articles that contained the term deaf in their content and were written between August 1, 2017 and November 15, 2017 were chosen for the study. The data analysis was divided into three steps: (1) listing and analyzing instances of metaphorical deafness (e.g., fall on deaf ears), (2) categorizing the content of the articles into the models of disability discourse (the medical, socio-cultural, and superscrip models of disability narratives), and (3) analyzing any additional data found. A total of 42% of the articles pulled for this study did not deal with the Deaf community in any capacity, but rather contained instances of idiomatic expressions that use deafness as a metaphor for a non-physical, undesirable trait. The most common idiomatic expression found was fall on deaf ears. Regarding the models of disability discourse, eight articles were found to follow the socio-cultural model, two the medical model, and two the superscrip model. The additional data found in these articles include two instances of the term deaf and mute, an overwhelming use of the lowercase d for the term deaf, and the misuse of the term translator (to mean interpreter).

Keywords: deafness, disability, news coverage, Caribbean newspapers

Procedia PDF Downloads 231
28484 Coupling Large Language Models with Disaster Knowledge Graphs for Intelligent Construction

Authors: Zhengrong Wu, Haibo Yang

Abstract:

In the context of escalating global climate change and environmental degradation, the complexity and frequency of natural disasters are continually increasing. Confronted with an abundance of information regarding natural disasters, traditional knowledge graph construction methods, which heavily rely on grammatical rules and prior knowledge, demonstrate suboptimal performance in processing complex, multi-source disaster information. This study, drawing upon past natural disaster reports, disaster-related literature in both English and Chinese, and data from various disaster monitoring stations, constructs question-answer templates based on large language models. Utilizing the P-Tuning method, the ChatGLM2-6B model is fine-tuned, leading to the development of a disaster knowledge graph based on large language models. This serves as a knowledge base supporting disaster emergency response.

Keywords: large language model, knowledge graph, disaster, deep learning

Procedia PDF Downloads 46
28483 Analysis and Identification of Different Factors Affecting Students’ Performance Using a Correlation-Based Network Approach

Authors: Jeff Chak-Fu Wong, Tony Chun Yin Yip

Abstract:

The transition from secondary school to university seems exciting for many first-year students but can be more challenging than expected. Enabling instructors to know students' learning habits and styles enhances their understanding of the students' learning backgrounds, allows teachers to provide better support for their students, and therefore has high potential to improve teaching quality and learning, especially in mathematics-related courses. The aim of this research is to collect students' data using online surveys, to analyze student factors using learning analytics and educational data mining, and to discover the characteristics of the students at risk of falling behind in their studies based on the students' previous academic backgrounds and the collected data. In this paper, we use correlation-based distance methods and mutual information for measuring relationships between student factors. We then develop a factor network using the minimum spanning tree method and consider further study for analyzing the topological properties of these networks using social network analysis tools. Under the framework of mutual information, two graph-based feature filtering methods, i.e., unsupervised and supervised infinite feature selection algorithms, are used to analyze the students' data, to rank and select appropriate subsets of features, and to yield effective results in identifying the factors affecting students at risk of failing. This discovered knowledge may help students as well as instructors enhance educational quality by identifying possible under-performers at the beginning of the first semester and giving them more special attention in order to support their learning process and improve their learning outcomes.
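
A minimal sketch of the correlation-based factor network and its minimum spanning tree is shown below; the survey matrix is a placeholder, and the distance d = sqrt(2(1 - r)) is one standard correlation-based distance, assumed here for illustration.

```python
import numpy as np
import networkx as nx

# Survey responses: one row per student, one column per learning factor (placeholder data).
responses = np.random.default_rng(0).random((100, 12))
corr = np.corrcoef(responses, rowvar=False)
dist = np.sqrt(2.0 * (1.0 - corr))      # correlation-based distance between factors

G = nx.Graph()
n = dist.shape[0]
for i in range(n):
    for j in range(i + 1, n):
        G.add_edge(i, j, weight=dist[i, j])

mst = nx.minimum_spanning_tree(G)       # backbone of the factor network
print(sorted((u, v, round(w, 2)) for u, v, w in mst.edges(data="weight")))
```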

Keywords: students' academic performance, correlation-based distance method, social network analysis, feature selection, graph-based feature filtering method

Procedia PDF Downloads 127
28482 On Bianchi Type Cosmological Models in Lyra’s Geometry

Authors: R. K. Dubey

Abstract:

Bianchi type cosmological models have been studied on the basis of Lyra's geometry. An exact solution has been obtained by considering a time-dependent displacement field for a constant deceleration parameter and a varying cosmological term of the universe. The physical behavior of the different models has been examined for different cases.

Keywords: Bianchi type-I cosmological model, variable gravitational coupling, cosmological constant term, Lyra's model

Procedia PDF Downloads 348
28481 Topological Indices of Some Graph Operations

Authors: U. Mary

Abstract:

Let G be a graph with a finite, nonempty set of objects called vertices, together with a set of unordered pairs of distinct vertices of G called edges. The vertex set is denoted by V(G) and the edge set by E(G). Given two graphs G and H, the Wiener index of their graph operations, the Wiener index of the splitting graph of a graph, the first Zagreb index of a graph and of its splitting graph, the 3-Steiner Wiener index of these operations, and the 3-Steiner Wiener index of a special graph are explored in this paper.
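
For a concrete reference, the Wiener index and the first Zagreb index of a small example graph can be computed as follows; the Petersen graph is chosen arbitrarily, and the Steiner Wiener index and the specific graph operations studied in the paper are not covered by this sketch.

```python
import networkx as nx

G = nx.petersen_graph()   # arbitrary example graph

wiener = nx.wiener_index(G)                        # sum of distances over all unordered vertex pairs
first_zagreb = sum(d ** 2 for _, d in G.degree())  # sum of squared vertex degrees

print(wiener, first_zagreb)   # 75 and 90 for the Petersen graph
```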

Keywords: complementary prism graph, first Zagreb index, neighborhood corona graph, steiner distance, splitting graph, steiner wiener index, wiener index

Procedia PDF Downloads 566
28480 Voxel Models as Input for Heat Transfer Simulations with Siemens NX Based on X-Ray Microtomography Images of Random Fibre Reinforced Composites

Authors: Steven Latré, Frederik Desplentere, Ilya Straumit, Stepan V. Lomov

Abstract:

A method is proposed to create a three-dimensional finite element model representing fibre-reinforced insulation materials for the simulation software Siemens NX. The VoxTex software, a tool for the quantification of µCT images of fibrous materials, is used for the transformation of microtomography images of random fibre-reinforced composites into finite element models. An automatic tool was developed to execute the import of the models into the thermal solver module of Siemens NX. The paper describes the numerical tools used for the image quantification and the transformation and illustrates them on several thermal simulations of fibre-reinforced insulation blankets filled with low-thermal-conductivity fillers. The calculation of thermal conductivity is validated by comparison with experimental data.

Keywords: analysis, modelling, thermal, voxel

Procedia PDF Downloads 286
28479 Application of Transportation Models for Analysing Future Intercity and Intracity Travel Patterns in Kuwait

Authors: Srikanth Pandurangi, Basheer Mohammed, Nezar Al Sayegh

Abstract:

In order to meet the increasing demand for housing care for Kuwaiti citizens, the government authorities in Kuwait are undertaking a series of projects in the form of new large cities outside the current urban area. Al Mutlaa City, located to the north-west of the Kuwait Metropolitan Area, is one such project out of the 15 planned new cities. The city accommodates a wide variety of residential developments, employment opportunities, and commercial, recreational, health care and institutional uses. This paper examines the application of the comprehensive transportation demand modeling work undertaken in the VISUM platform to understand future intracity and intercity travel distribution patterns in Kuwait. The models developed varied in level of detail: a strategic model update, sub-area models representing the future demand of Al Mutlaa City, and sub-area models built to estimate the demand in the residential neighborhoods of the city. This paper aims at offering a model update framework that facilitates easy integration between sub-area models and strategic national models for unified traffic forecasts. This paper presents the transportation demand modeling results utilized in informing the planning of a multi-modal transportation system for Al Mutlaa City. It also presents the household survey data collection efforts undertaken using GPS devices (for the first time in Kuwait) and notebook-computer-based digital survey forms for interviewing a representative sample of citizens and residents. The survey results formed the basis for estimating the trip generation rates and trip distribution coefficients used in the strategic base year model calibration and validation process.
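
To illustrate how trip distribution coefficients enter such a model, the sketch below runs a simple doubly constrained gravity model with Furness balancing on three hypothetical zones; the zone totals, cost matrix, and deterrence exponent are assumptions and are unrelated to the VISUM models described here.

```python
import numpy as np

# Productions, attractions, and a travel-cost matrix for three hypothetical zones.
P = np.array([400.0, 300.0, 300.0])
A = np.array([500.0, 250.0, 250.0])
cost = np.array([[1.0, 3.0, 5.0],
                 [3.0, 1.0, 4.0],
                 [5.0, 4.0, 1.0]])

T = np.outer(P, A) * cost ** -2.0        # gravity model with deterrence f(c) = c^-2
for _ in range(50):                      # Furness/IPF balancing to match row and column totals
    T *= (P / T.sum(axis=1))[:, None]
    T *= (A / T.sum(axis=0))[None, :]

print(T.round(1))                        # origin-destination trip matrix
```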

Keywords: innovative methods in transportation data collection, integrated public transportation system, traffic forecasts, transportation modeling, travel behavior

Procedia PDF Downloads 220
28478 Models of Innovation Processes and Their Evolution: A Literature Review

Authors: Maier Dorin, Maier Andreea

Abstract:

Today, any organization - regardless of its specific activity - must be prepared to face continuous radical changes, innovation thus becoming a condition of survival in a globalized market. Not all managers have an overall view of the real size of the necessary innovation potential. Unfortunately, there is still no common (and correct) understanding of the term innovation among managers. Moreover, not all managers are aware of the need for innovation. This article highlights and analyzes a series of models of innovation processes and their evolution. The models analyzed encompass both the strategic level and the operational one within an organization, indicating innovation performance at each level. As the literature review shows, there are no easy answers to the innovation process, as there are no shortcuts to great results. Successful companies do not have a silver innovation bullet - they do not get results by making one or a few things better than others; they make everything better.

Keywords: innovation, innovation process, business success, models of innovation

Procedia PDF Downloads 397
28477 Towards Efficient Reasoning about Families of Class Diagrams Using Union Models

Authors: Tejush Badal, Sanaa Alwidian

Abstract:

Class diagrams are useful tools within the Unified Modeling Language (UML) to model and visualize the relationships between, and properties of, objects within a system. As a system evolves over time and space (e.g., across products), a series of models with several commonalities and variabilities creates what is known as a model family. In circumstances where there are several versions of a model, examining each model individually becomes expensive in terms of computational resources. To avoid performing redundant operations, this paper proposes an approach for representing a family of class diagrams as a union model, i.e., a single generic model. The paper aims to analyze and reason about a family of class diagrams using union models as opposed to individual analysis of each member model in the family. The union algorithm provides a holistic view of the model family that cannot otherwise be obtained from an individual analysis approach; this, in turn, enhances the analysis performed in terms of speeding up the time needed to analyze the family of models together, as opposed to analyzing individual models one at a time.
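
The idea of a union model can be sketched with plain Python data structures: every element of every version is kept once and annotated with the versions in which it occurs, so an analysis can run once over the annotated set. The class names and versions below are hypothetical.

```python
# Each class diagram version is a set of (element, kind) pairs; the union model keeps every element
# once, tagged with the versions in which it appears. All names and versions are hypothetical.
versions = {
    "v1": {("Order", "class"), ("Customer", "class"), ("Order-Customer", "assoc")},
    "v2": {("Order", "class"), ("Customer", "class"), ("Invoice", "class"), ("Order-Invoice", "assoc")},
    "v3": {("Order", "class"), ("Invoice", "class"), ("Order-Invoice", "assoc")},
}

union_model = {}
for version, elements in versions.items():
    for element in elements:
        union_model.setdefault(element, set()).add(version)

# An analysis now runs once over this single annotated model instead of once per member model.
for element, present_in in sorted(union_model.items()):
    print(element, sorted(present_in))
```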

Keywords: analysis, class diagram, model family, unified modeling language, union model

Procedia PDF Downloads 68
28476 Applying Business Model Patterns: A Case Study in Latin American Building Industry

Authors: James Alberto Ortega Morales, Nelson Andrés Martínez Marín

Abstract:

The building industry is one of the most important sectors around the world in terms of its contribution to indices like GDP and labor. On the other hand, it is a major contributor to greenhouse gases (GHG) and waste generation, contributing to global warming. In this sense, it is necessary to establish sustainable practices in all businesses and industries, from the strategic point of view down to the operations point of view. Business models do not escape this reality, given their mediating role between strategy and operations. Business models can turn from traditional practices that seek economic benefits to sustainable business models that generate economic value as well as value for society and the environment. Recent advances in the analysis of sustainable business models offer different classifications that allow finding potential triple bottom line (economic, social, and environmental) solutions applicable in every business sector. Among the mentioned advances, 11 groups and 45 patterns of sustainable business models have been identified; such patterns can be found either in business models as a whole or concurrently in their components. This article presents the analysis of a case study, seeking to identify the components and elements that are part of it, using the ECO CANVAS conceptual model. The case study empirically shows the concurrent existence of different patterns of business models for sustainability, serving as an example and inspiration for other Latin American companies interested in integrating sustainability into their new and existing business models.

Keywords: sustainable business models, business sustainability, business model patterns, case study, construction industry

Procedia PDF Downloads 111
28475 Volatility Model with Markov Regime Switching to Forecast Baht/USD

Authors: Nop Sopipan

Abstract:

In this paper, we forecast the volatility of the Baht/USD exchange rate using Markov Regime Switching GARCH (MRS-GARCH) models. These models allow volatility to have different dynamics according to unobserved regime variables. The main purpose of this paper is to find out whether MRS-GARCH models are an improvement on GARCH-type models in terms of modeling and forecasting Baht/USD volatility. The MRS-GARCH is the best performing model for Baht/USD volatility in the short term, but the GARCH model performs best over the long term.
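
As a baseline for the comparison, a single-regime GARCH(1,1) fit and five-step variance forecast can be sketched with the arch package as below; the synthetic return series is a placeholder, and the regime-switching extension itself is not implemented in this sketch.

```python
import numpy as np
import pandas as pd
from arch import arch_model

# Placeholder daily Baht/USD log-returns in percent.
returns = pd.Series(np.random.default_rng(0).normal(0.0, 0.5, 1500))

garch = arch_model(returns, vol="Garch", p=1, q=1, dist="normal")
res = garch.fit(disp="off")
print(res.params)
print(res.forecast(horizon=5).variance.iloc[-1])   # 5-step-ahead conditional variance forecast
```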

Keywords: volatility, Markov Regime Switching, forecasting, Baht/USD

Procedia PDF Downloads 297