Search results for: transferability of models
6588 Automatic Calibration of Agent-Based Models Using Deep Neural Networks
Authors: Sima Najafzadehkhoei, George Vega Yon
Abstract:
This paper presents an approach for calibrating Agent-Based Models (ABMs) efficiently, utilizing Convolutional Neural Networks (CNNs) and Long Short-Term Memory (LSTM) networks. These machine learning techniques are applied to Susceptible-Infected-Recovered (SIR) models, a core framework in the study of epidemiology. Our method recovers parameter values from observed trajectory curves, enhancing the accuracy of predictions compared to traditional calibration techniques. Using simulated data, we train the models to predict epidemiological parameters more accurately. Two primary approaches were explored: one where the numbers of susceptible, infected, and recovered individuals are fully known, and another using only the number of infected individuals. Our method shows promise for application in other ABMs where calibration is computationally intensive and expensive.
Keywords: ABM, calibration, CNN, LSTM, epidemiology
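As a hypothetical illustration of the calibration idea summarised above (not code from the paper), the sketch below simulates SIR trajectories and trains a small LSTM to map an infected-count curve back to the parameters (beta, gamma); the parameter ranges, network size and training settings are all assumptions.

```python
import numpy as np
import torch
import torch.nn as nn

def simulate_sir(beta, gamma, n=100, i0=1, days=160):
    """Discrete-time SIR simulation; returns the infected fraction per day."""
    s, i, r = (n - i0) / n, i0 / n, 0.0
    traj = []
    for _ in range(days):
        new_inf, new_rec = beta * s * i, gamma * i
        s, i, r = s - new_inf, i + new_inf - new_rec, r + new_rec
        traj.append(i)
    return np.array(traj, dtype=np.float32)

# Build a synthetic training set of (trajectory, parameters) pairs.
rng = np.random.default_rng(0)
betas = rng.uniform(0.1, 0.5, 500)
gammas = rng.uniform(0.05, 0.2, 500)
X = np.stack([simulate_sir(b, g) for b, g in zip(betas, gammas)])[..., None]
y = np.stack([betas, gammas], axis=1).astype(np.float32)

class ParamNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.lstm = nn.LSTM(input_size=1, hidden_size=32, batch_first=True)
        self.head = nn.Linear(32, 2)                 # predicts (beta, gamma)
    def forward(self, x):
        _, (h, _) = self.lstm(x)
        return self.head(h[-1])

model, loss_fn = ParamNet(), nn.MSELoss()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
Xt, yt = torch.from_numpy(X), torch.from_numpy(y)
for epoch in range(30):                              # short illustrative training loop
    opt.zero_grad()
    loss = loss_fn(model(Xt), yt)
    loss.backward()
    opt.step()
```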
Procedia PDF Downloads 24
6587 Brain Tumor Detection and Classification Using Pre-Trained Deep Learning Models
Authors: Aditya Karade, Sharada Falane, Dhananjay Deshmukh, Vijaykumar Mantri
Abstract:
Brain tumours pose a significant challenge in healthcare due to their complex nature and impact on patient outcomes. The application of deep learning (DL) algorithms in medical imaging has shown promise in accurate and efficient brain tumour detection. This paper explores the performance of various pre-trained DL models (ResNet50, Xception, InceptionV3, EfficientNetB0, DenseNet121, NASNetMobile, VGG19, VGG16, and MobileNet) on a brain tumour dataset sourced from Figshare. The dataset consists of MRI scans covering different categories of brain tumour, including meningioma, pituitary, glioma, and no tumour. The study involves a comprehensive evaluation of these models’ accuracy and effectiveness in classifying brain tumour images. Data preprocessing, augmentation, and fine-tuning techniques are employed to optimize model performance. Among the evaluated deep learning models, ResNet50 emerges as the top performer with an accuracy of 98.86%. Following closely is Xception, with a strong accuracy of 97.33%. These models showcase robust capabilities in accurately classifying brain tumour images. On the other end of the spectrum, VGG16 trails with the lowest accuracy at 89.02%.
Keywords: brain tumour, MRI image, detecting and classifying tumour, pre-trained models, transfer learning, image segmentation, data augmentation
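A minimal sketch of the kind of transfer-learning setup the abstract describes, using a pre-trained ResNet50 with a new four-class head; the dataset path, augmentation choices and training hyperparameters are placeholders, not the authors' configuration.

```python
import torch
import torch.nn as nn
from torchvision import models, transforms, datasets

preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.RandomHorizontalFlip(),          # simple augmentation
    transforms.ToTensor(),
    transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]),
])
train_set = datasets.ImageFolder("data/brain_mri/train", transform=preprocess)
loader = torch.utils.data.DataLoader(train_set, batch_size=32, shuffle=True)

model = models.resnet50(weights=models.ResNet50_Weights.DEFAULT)
for p in model.parameters():                    # freeze the pre-trained backbone
    p.requires_grad = False
model.fc = nn.Linear(model.fc.in_features, 4)   # meningioma, pituitary, glioma, no tumour

opt = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()
for images, labels in loader:                   # one illustrative epoch
    opt.zero_grad()
    loss = loss_fn(model(images), labels)
    loss.backward()
    opt.step()
```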
Procedia PDF Downloads 74
6586 Continuum-Based Modelling Approaches for Cell Mechanics
Authors: Yogesh D. Bansod, Jiri Bursa
Abstract:
The quantitative study of cell mechanics is of paramount interest since it regulates the behavior of the living cells in response to the myriad of extracellular and intracellular mechanical stimuli. The novel experimental techniques together with robust computational approaches have given rise to new theories and models, which describe cell mechanics as a combination of biomechanical and biochemical processes. This review paper encapsulates the existing continuum-based computational approaches that have been developed for interpreting the mechanical responses of living cells under different loading and boundary conditions. The salient features and drawbacks of each model are discussed from both structural and biological points of view. This discussion can contribute to the development of even more precise and realistic computational models of cell mechanics based on continuum approaches or on their combination with microstructural approaches, which in turn may provide a better understanding of mechanotransduction in living cells.
Keywords: cell mechanics, computational models, continuum approach, mechanical models
Procedia PDF Downloads 363
6585 Evaluation and Compression of Different Language Transformer Models for Semantic Textual Similarity Binary Task Using Minority Language Resources
Authors: Ma. Gracia Corazon Cayanan, Kai Yuen Cheong, Li Sha
Abstract:
Training a language model for a minority language has been a challenging task. The lack of available corpora to train and fine-tune state-of-the-art language models is still a challenge in the area of Natural Language Processing (NLP). Moreover, the need for high computational resources and bulk data limits the attainment of this task. In this paper, we present the following contributions: (1) we introduce and use a translation pair set of Tagalog and English (TL-EN) in pre-training a language model for a minority language resource; (2) we fine-tune and evaluate top-ranking, pre-trained semantic textual similarity binary task (STSB) models on both TL-EN and STS dataset pairs; (3) we then reduce the size of the model to offset the need for high computational resources. Based on our results, the models that were pre-trained on translation pairs and STS pairs can perform well on the STSB task. Also, reducing the model to a smaller dimension has no negative effect on performance; rather, it yields a notable increase in similarity scores. Moreover, models pre-trained on a similar dataset show a pronounced improvement in performance scores.
Keywords: semantic matching, semantic textual similarity binary task, low resource minority language, fine-tuning, dimension reduction, transformer models
Procedia PDF Downloads 211
6584 A Comparative Analysis of ARIMA and Threshold Autoregressive Models on Exchange Rate
Authors: Diteboho Xaba, Kolentino Mpeta, Tlotliso Qejoe
Abstract:
This paper assesses the in-sample forecasting of South African exchange rates, comparing a linear ARIMA model and a SETAR model. The study uses monthly adjusted South African exchange rate data with 420 observations. The Akaike information criterion (AIC) and the Schwarz information criterion (SIC) are used for model selection. Mean absolute error (MAE), root mean squared error (RMSE) and mean absolute percentage error (MAPE) are the error metrics used to evaluate the forecast capability of the models. The Diebold-Mariano (DM) test is employed to check forecast accuracy and to distinguish the forecasting performance of the two models (ARIMA and SETAR). The results indicate that both models perform well when modelling and forecasting the exchange rates, but SETAR appears to outperform ARIMA.
Keywords: ARIMA, error metrics, model selection, SETAR
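A small sketch of the evaluation step only: the MAE/RMSE/MAPE metrics and a basic Diebold-Mariano statistic under squared-error loss with no autocorrelation correction (a simplification of the full test); the ARIMA and SETAR fitting itself is not shown.

```python
import numpy as np
from scipy import stats

def error_metrics(actual, forecast):
    e = actual - forecast
    mae = np.mean(np.abs(e))
    rmse = np.sqrt(np.mean(e ** 2))
    mape = np.mean(np.abs(e / actual)) * 100
    return mae, rmse, mape

def diebold_mariano(actual, f1, f2):
    """DM statistic with squared-error loss; no small-sample or HAC correction."""
    d = (actual - f1) ** 2 - (actual - f2) ** 2
    dm = d.mean() / np.sqrt(d.var(ddof=1) / len(d))
    p_value = 2 * (1 - stats.norm.cdf(abs(dm)))
    return dm, p_value
```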
Procedia PDF Downloads 244
6583 A Trend Based Forecasting Framework of the ATA Method and Its Performance on the M3-Competition Data
Authors: H. Taylan Selamlar, I. Yavuz, G. Yapar
Abstract:
It is difficult to make predictions, especially about the future, and making accurate predictions is not always easy. However, better predictions remain the foundation of all science; therefore, the development of accurate, robust and reliable forecasting methods is very important. Numerous forecasting methods have been proposed and studied in the literature. Two major methods still dominate: Box-Jenkins ARIMA and Exponential Smoothing (ES), and new methods are still derived from or inspired by them. After more than 50 years of widespread use, exponential smoothing remains one of the most practically relevant forecasting methods available due to its simplicity, robustness and accuracy as an automatic forecasting procedure, especially in the famous M-Competitions. Despite its success and widespread use in many areas, ES models have some shortcomings that negatively affect forecast accuracy. Therefore, a new forecasting method, called the ATA method, is proposed in this study to cope with these shortcomings. The new method is obtained from traditional ES models by modifying the smoothing parameters; both methods therefore have similar structural forms, and ATA can easily be adapted to all of the individual ES models while offering many advantages due to its innovative new weighting scheme. In this paper, the focus is on modeling the trend component and handling seasonality patterns by utilizing classical decomposition. The ATA method is therefore expanded to higher-order ES methods for additive, multiplicative, additive damped and multiplicative damped trend components. The proposed models are called ATA trended models, and their predictive performances are compared to their counterpart ES models on the M3-Competition data set, since it is still the most recent and comprehensive time-series data collection available. The models outperform their counterparts in almost all settings, and when model selection is carried out among these trended models, ATA outperforms all of the competitors in the M3-Competition for both short-term and long-term forecasting horizons when the models’ forecasting accuracies are compared using popular error metrics.
Keywords: accuracy, exponential smoothing, forecasting, initial value
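The sketch below is one plausible reading of the time-varying weighting scheme described (smoothing weights p/t and q/t for an additive-trend variant); the exact ATA formulation should be taken from the authors' papers rather than from this illustration.

```python
import numpy as np

def ata_additive_trend(x, p, q, horizon=6):
    """Rough additive-trend smoother with time-varying weights p/t and q/t (assumed form)."""
    x = np.asarray(x, dtype=float)
    level, trend = x[0], x[1] - x[0]
    for t in range(2, len(x) + 1):            # t is a 1-based time index
        alpha = min(p / t, 1.0)               # level weight shrinks as t grows
        beta = min(q / t, 1.0)                # trend weight shrinks as t grows
        prev_level = level
        level = alpha * x[t - 1] + (1 - alpha) * (level + trend)
        trend = beta * (level - prev_level) + (1 - beta) * trend
    return np.array([level + h * trend for h in range(1, horizon + 1)])

forecasts = ata_additive_trend([112, 118, 132, 129, 121, 135, 148, 148], p=3, q=1)
```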
Procedia PDF Downloads 177
6582 Advancing Communication Theory in the Age of Digital Technology: Bridging the Gap Between Traditional Models and Emerging Platforms
Authors: Sidique Fofanah
Abstract:
This paper explores the intersection of traditional communication theories and modern digital technologies, analyzing how established models adapt to contemporary communication platforms. It examines the evolving nature of interpersonal, group, and mass communication within digital environments, emphasizing the role of social media, AI-driven communication tools, and virtual reality in reshaping communication paradigms. The paper also discusses the implications for future research and practice in communication studies, proposing an integrated framework that accommodates both classical and emerging theories.
Keywords: communication, traditional models, emerging platforms, digital media
Procedia PDF Downloads 25
6581 Mathematical Modeling of Carotenoids and Polyphenols Content of Faba Beans (Vicia faba L.) during Microwave Treatments
Authors: Ridha Fethi Mechlouch, Ahlem Ayadi, Ammar Ben Brahim
Abstract:
Given the importance of preserving polyphenols and carotenoids during thermal processing, this study investigates the variation of these two parameters in faba beans during microwave treatment at different power densities (1, 2 and 3 W/g) and then performs mathematical modeling using non-linear regression analysis to evaluate the model constants. The models are tested against the measured variation of the carotenoid and polyphenol ratios of faba beans to validate the experimental results. Exponential models were found suitable to describe the variation of the carotenoid ratio (R² = 0.945, 0.927 and 0.946) and the polyphenol ratio (R² = 0.931, 0.989 and 0.982) for power densities of 1, 2 and 3 W/g, respectively. The effect of microwave power density Pd (W/g) on the coefficient k of the models was also investigated. The coefficient is highly correlated (R² = 1) and can be expressed as a polynomial function.
Keywords: microwave treatment, power density, carotenoid, polyphenol, modeling
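A minimal sketch of fitting an exponential ratio model by non-linear regression and expressing k as a polynomial in power density; the time grid and ratio values are synthetic placeholders, not the paper's measurements.

```python
import numpy as np
from scipy.optimize import curve_fit

def exp_model(t, k):
    """Exponential decay of the retention ratio with treatment time."""
    return np.exp(-k * t)

rng = np.random.default_rng(0)
power_densities = np.array([1.0, 2.0, 3.0])          # W/g
t = np.linspace(0, 60, 13)                           # treatment time, min (placeholder)
k_values = []
for pd_ in power_densities:
    ratio = np.exp(-0.03 * pd_ * t) + 0.01 * rng.normal(size=t.size)  # synthetic data
    (k,), _ = curve_fit(exp_model, t, ratio, p0=[0.05])
    k_values.append(k)

k_poly = np.polyfit(power_densities, k_values, deg=2)  # k expressed as a polynomial in Pd
```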
Procedia PDF Downloads 259
6580 Exchange Rate Forecasting by Econometric Models
Authors: Zahid Ahmad, Nosheen Imran, Nauman Ali, Farah Amir
Abstract:
The objective of the study is to forecast the US Dollar and Pakistani Rupee exchange rate using time series models. For this purpose, daily exchange rates between the US and Pakistan for the period January 01, 2007 to June 2, 2017, are employed. The data set is divided into in-sample and out-of-sample subsets, where the in-sample data are used to estimate and fit the models, whereas the out-of-sample data set is used to evaluate the exchange rate forecasts. The ADF test and PP test are used to check the stationarity of the time series. To forecast the exchange rate, ARIMA and GARCH models are applied. Among the different Autoregressive Integrated Moving Average (ARIMA) models, the best model is selected on the basis of selection criteria. Due to volatility clustering and the ARCH effect, a GARCH(1,1) model is also applied. The results of the analysis show that ARIMA(0,1,1) and GARCH(1,1) are the most suitable models to forecast the future exchange rate. Further, the GARCH(1,1) model captures the non-constant conditional variance of the exchange rate with good forecasting performance. This study is useful for researchers, policymakers, and businesses in making decisions through accurate and timely forecasting of the exchange rate, and helps them devise their policies.
Keywords: exchange rate, ARIMA, GARCH, PAK/USD
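A hedged sketch of the modelling pipeline described (ADF test, ARIMA(0,1,1), GARCH(1,1) on returns); the file name, column name and train/test split are assumptions about how such data might be organised.

```python
import pandas as pd
from statsmodels.tsa.stattools import adfuller
from statsmodels.tsa.arima.model import ARIMA
from arch import arch_model

rate = pd.read_csv("pkr_usd.csv", index_col=0, parse_dates=True)["rate"]
train, test = rate[:-30], rate[-30:]

adf_stat, adf_p, *_ = adfuller(train)          # stationarity check on the level series

arima = ARIMA(train, order=(0, 1, 1)).fit()    # ARIMA(0,1,1) as in the abstract
arima_fc = arima.forecast(steps=len(test))

returns = 100 * train.pct_change().dropna()    # GARCH is fit on percentage returns
garch = arch_model(returns, vol="GARCH", p=1, q=1).fit(disp="off")
vol_fc = garch.forecast(horizon=len(test)).variance.iloc[-1] ** 0.5
```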
Procedia PDF Downloads 561
6579 Study on Flexible Diaphragm In-Plane Model of Irregular Multi-Storey Industrial Plant
Authors: Cheng-Hao Jiang, Mu-Xuan Tao
Abstract:
The rigid diaphragm model may cause errors in the calculation of internal forces due to neglecting the in-plane deformation of the diaphragm. This paper thus studies the effects of different diaphragm in-plane models (including in-plane rigid model and in-plane flexible model) on the seismic performance of structures. Taking an actual industrial plant as an example, the seismic performance of the structure is predicted using different floor diaphragm models, and the analysis errors caused by different diaphragm in-plane models including deformation error and internal force error are calculated. Furthermore, the influence of the aspect ratio on the analysis errors is investigated. Finally, the code rationality is evaluated by assessing the analysis errors of the structure models whose floors were determined as rigid according to the code’s criterion. It is found that different floor models may cause great differences in the distribution of structural internal forces, and the current code may underestimate the influence of the floor in-plane effect.
Keywords: industrial plant, diaphragm, calculating error, code rationality
Procedia PDF Downloads 140
6578 Probing Language Models for Multiple Linguistic Information
Authors: Bowen Ding, Yihao Kuang
Abstract:
In recent years, large-scale pre-trained language models have achieved state-of-the-art performance on a variety of natural language processing tasks. The word vectors produced by these language models can be viewed as dense encoded representations of natural language in text form. However, it is unknown how much linguistic information is encoded, and how. In this paper, we construct several probing tasks for multiple types of linguistic information to clarify the encoding capabilities of different language models and present a visual analysis. We first obtain word representations in vector form from different language models, including BERT, ELMo, RoBERTa and GPT. Classifiers with a small number of parameters and unsupervised tasks are then applied to these word vectors to test their capability to encode the corresponding linguistic information. The constructed probing tasks cover both semantic and syntactic aspects. The semantic aspect includes the ability of the model to understand semantic entities such as numbers, time, and characters, and the syntactic aspect includes the ability of the language model to understand grammatical structures such as dependency relationships and reference relationships. We also compare the encoding capabilities of different layers in the same language model to infer how linguistic information is encoded in the model.
Keywords: language models, probing task, text representation, linguistic information
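A hedged sketch of one probing task: contextual vectors are extracted from a chosen BERT layer and a small logistic-regression probe is trained on them; the sentences, labels and layer choice are illustrative, not the paper's setup.

```python
import torch
from transformers import AutoTokenizer, AutoModel
from sklearn.linear_model import LogisticRegression

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
bert = AutoModel.from_pretrained("bert-base-uncased", output_hidden_states=True)

def sentence_vectors(sentences, layer=8):
    """Mean-pool the hidden states of one layer into a fixed-size vector per sentence."""
    feats = []
    for s in sentences:
        enc = tokenizer(s, return_tensors="pt")
        with torch.no_grad():
            hidden = bert(**enc).hidden_states[layer][0]   # (tokens, 768)
        feats.append(hidden.mean(dim=0).numpy())
    return feats

sentences = ["She bought three apples.", "The meeting starts tomorrow."]
labels = [1, 0]                                # placeholder annotations for the probe
probe = LogisticRegression(max_iter=1000).fit(sentence_vectors(sentences), labels)
```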
Procedia PDF Downloads 110
6577 Application Difference between Cox and Logistic Regression Models
Authors: Idrissa Kayijuka
Abstract:
The logistic regression and Cox regression (proportional hazards) models are currently employed in the analysis of prospective epidemiologic research into risk factors for chronic diseases, and a theoretical relationship between the two models has been studied. By definition, the Cox regression model, also called the Cox proportional hazards model, is a procedure used to model data on the time leading up to an event when censored cases exist, whereas the logistic regression model is mostly applicable when the independent variables consist of numerical as well as nominal values and the outcome variable is binary (dichotomous). Many researchers have focused on overviews of the Cox and logistic regression models and their different applications in different areas. In this work, the analysis is carried out on secondary data, the SPSS exercise data on breast cancer, with a sample size of 1121 women; the main objective is to show the difference in application between the Cox regression model and the logistic regression model based on factors that cause women to die of breast cancer. Some analysis was done manually (on lymph node status), and SPSS software was used to analyse the remaining data. This study found that there is a difference in application between the Cox and logistic regression models: the Cox regression model is used when one wishes to analyse data that also include the follow-up time, whereas the logistic regression model analyses data without follow-up time. They also have different measures of association: the hazard ratio and the odds ratio for the Cox and logistic regression models, respectively. A similarity between the two models is that both are applicable to predicting the outcome of a categorical variable, i.e., a variable that can take only a restricted number of categories. In conclusion, the Cox regression model differs from logistic regression by assessing a rate instead of a proportion. Both models are suitable methods for analysing such data, but the Cox regression model is the more recommended of the two.
Keywords: logistic regression model, Cox regression model, survival analysis, hazard ratio
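A minimal sketch of the contrast drawn above: the Cox model uses follow-up time plus the event indicator, while the logistic model uses the event indicator alone; the file and column names are assumptions about the breast cancer dataset.

```python
import pandas as pd
from lifelines import CoxPHFitter
from sklearn.linear_model import LogisticRegression

df = pd.read_csv("breast_cancer.csv")   # assumed columns: time, died, ln_nodes, age

# Cox proportional hazards: follow-up time + event indicator.
cox = CoxPHFitter()
cox.fit(df[["time", "died", "ln_nodes", "age"]],
        duration_col="time", event_col="died")
print(cox.summary["exp(coef)"])          # hazard ratios

# Logistic regression: event indicator only, no follow-up time.
logit = LogisticRegression(max_iter=1000)
logit.fit(df[["ln_nodes", "age"]], df["died"])
# odds ratios = exp(coefficients)
```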
Procedia PDF Downloads 455
6576 Comparison of Wake Oscillator Models to Predict Vortex-Induced Vibration of Tall Chimneys
Authors: Saba Rahman, Arvind K. Jain, S. D. Bharti, T. K. Datta
Abstract:
The present study compares the semi-empirical wake-oscillator models that are used to predict vortex-induced vibration of structures. These include the models proposed by Facchinetti, by Farshidian and Dolatabadi, and by Skop and Griffin. These models combine a wake oscillator resembling the Van der Pol oscillator with a single-degree-of-freedom structural oscillator. In order to use these models to estimate the top displacement of chimneys, only the first vibration mode of the chimney is considered. The modal equation of the chimney constitutes the single-degree-of-freedom (SDOF) model. The equations of the wake oscillator and the SDOF model are solved simultaneously using an iterative procedure. The empirical parameters used in the wake-oscillator models are estimated using a newly developed approach, and the response is compared with experimental data, showing reasonable agreement. The iterative solution is carried out with the MATLAB ODE solver. For the comparative study, a tall concrete chimney of height 210 m is chosen, with a base diameter of 28 m, a top diameter of 20 m, and a wall thickness of 0.3 m. The responses of the chimney are also determined using the linear model proposed by E. Simiu and the deterministic model given in the Eurocode. The comparative study shows that the responses predicted by the Facchinetti model and by the Skop and Griffin model are nearly the same, while the Farshidian and Dolatabadi model predicts a higher response. The linear model, which does not consider the aeroelastic phenomenon, gives a lower response than the non-linear models. Further, for large damping, the Eurocode prediction compares relatively well with those of the non-linear models.
Keywords: chimney, deterministic model, Van der Pol, vortex-induced vibration
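A hedged sketch of the coupled system described: a single-degree-of-freedom structural oscillator driven by a Van der Pol wake variable, integrated with SciPy's ODE solver; the parameter values and the exact coupling terms follow one common statement of a Facchinetti-type model and are assumptions, not the paper's calibrated values.

```python
import numpy as np
from scipy.integrate import solve_ivp

rho, U, D = 1.2, 8.0, 20.0        # air density, wind speed, top diameter (assumed)
St, CL0 = 0.2, 0.3                # Strouhal number, lift coefficient amplitude (assumed)
m, zeta, wn = 4.0e4, 0.01, 1.2    # modal mass per length, damping ratio, natural freq. (assumed)
A, eps = 12.0, 0.3                # wake-structure coupling and Van der Pol parameter (assumed)
wf = 2 * np.pi * St * U / D       # vortex-shedding angular frequency

def rhs(t, z):
    y, ydot, q, qdot = z
    lift = 0.25 * rho * U ** 2 * D * CL0 * q                 # fluctuating lift per length
    yddot = (lift - 2 * zeta * wn * m * ydot - wn ** 2 * m * y) / m
    qddot = -eps * wf * (q ** 2 - 1) * qdot - wf ** 2 * q + (A / D) * yddot
    return [ydot, yddot, qdot, qddot]

sol = solve_ivp(rhs, (0, 600), [0.0, 0.0, 0.1, 0.0], max_step=0.01)
top_displacement = np.abs(sol.y[0]).max()    # steady-state amplitude estimate
```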
Procedia PDF Downloads 221
6575 Analysis of Moving Loads on Bridges Using Surrogate Models
Authors: Susmita Panda, Arnab Banerjee, Ajinkya Baxy, Bappaditya Manna
Abstract:
The design of short to medium-span high-speed bridges in critical locations is an essential aspect of vehicle-bridge interaction. Due to the dynamic interaction between the moving load and the bridge, mathematical models or finite element computations become time-consuming. Thus, to reduce the computational effort, a universal approximator, an artificial neural network (ANN), has been used to evaluate the dynamic response of the bridge. Data set generation and training of the surrogate models were conducted on the results obtained from mathematical modeling. Further, the robustness of the surrogate model was investigated, showing an error of less than 10% relative to conventional methods. Additionally, the dependency of the dynamic response of the bridge on various load and bridge parameters is highlighted through a parametric study.
Keywords: artificial neural network, mode superposition method, moving load analysis, surrogate models
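A small sketch of the surrogate idea: a neural network trained on (parameter, response) pairs generated by the expensive model; the three input parameters and the stand-in response function below are placeholders.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
# Placeholder design space: vehicle speed, span length, flexural rigidity EI.
X = rng.uniform([20, 100, 2e10], [60, 400, 4e10], size=(500, 3))
y = 1e3 * X[:, 0] / (X[:, 1] * np.sqrt(X[:, 2]))   # stand-in for the peak dynamic response

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
surrogate = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=2000).fit(X_tr, y_tr)
rel_error = np.abs(surrogate.predict(X_te) - y_te) / np.abs(y_te)   # robustness check
```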
Procedia PDF Downloads 100
6574 Applying Multiplicative Weight Update to Skin Cancer Classifiers
Authors: Animish Jain
Abstract:
This study uses Multiplicative Weight Update within artificial intelligence and machine learning to create models that can diagnose skin cancer using microscopic images of cancer samples. The multiplicative weight update method combines the predictions of multiple models to try to obtain more accurate results. Logistic Regression, Convolutional Neural Network (CNN), and Support Vector Machine Classifier (SVMC) models are employed within the Multiplicative Weight Update system. These models are trained on pictures of skin cancer from the ISIC Archive to look for patterns and label unseen scans as either benign or malignant. The models are combined by a multiplicative weight update algorithm that, through each successive guess, weights each model according to its precision and accuracy; the weighted guesses are then analyzed together to obtain the final predictions. The research hypothesis stated that there would be a significant difference in accuracy between the three models and the Multiplicative Weight Update system. The SVMC model had an accuracy of 77.88%, the CNN model 85.30%, and the Logistic Regression model 79.09%; using Multiplicative Weight Update, the algorithm achieved an accuracy of 72.27%. The conclusion drawn was that there is a significant difference in accuracy between the three models and the Multiplicative Weight Update system, and that a CNN model would be a better option for this problem than a Multiplicative Weight Update system. This may be because Multiplicative Weight Update is less effective in a binary setting with only two possible classifications; in a categorical setting with multiple classes and groupings, a Multiplicative Weight Update system might become more proficient, as it takes into account the strengths of multiple different models to classify images into multiple categories rather than only two, as shown in this study. This project can help to create better algorithms and models for the future of artificial intelligence in the medical imaging field.
Keywords: artificial intelligence, machine learning, multiplicative weight update, skin cancer
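A minimal sketch of a weighted-majority multiplicative-weight-update rule combining three already-fitted binary classifiers; the update factor and the online use of true labels are assumptions about how the combination might be run.

```python
import numpy as np

def mwu_predict(models, X, y, eta=0.3):
    """Sequentially combine binary classifiers, down-weighting those that err."""
    w = np.ones(len(models))
    preds = np.array([m.predict(X) for m in models])   # shape (n_models, n_samples)
    combined = []
    for t in range(preds.shape[1]):
        votes = preds[:, t]
        # weighted vote between class 0 (benign) and class 1 (malignant)
        yhat = int(w[votes == 1].sum() >= w[votes == 0].sum())
        combined.append(yhat)
        w[votes != y[t]] *= (1 - eta)                   # penalise models that were wrong
    return np.array(combined), w

# Usage (assuming logistic, cnn_wrapper, svmc expose .predict on the same features):
# predictions, final_weights = mwu_predict([logistic, cnn_wrapper, svmc], X_test, y_test)
```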
Procedia PDF Downloads 79
6573 Chemometric Estimation of Inhibitory Activity of Benzimidazole Derivatives by Linear Least Squares and Artificial Neural Networks Modelling
Authors: Sanja O. Podunavac-Kuzmanović, Strahinja Z. Kovačević, Lidija R. Jevrić, Stela Jokić
Abstract:
The subject of this paper is to correlate the antibacterial behavior of benzimidazole derivatives with their molecular characteristics using a chemometric QSAR (Quantitative Structure-Activity Relationships) approach. QSAR analysis has been carried out on the inhibitory activity of benzimidazole derivatives against Staphylococcus aureus. The data were processed by linear least squares (LLS) and artificial neural network (ANN) procedures. The LLS mathematical models have been developed as calibration models for prediction of the inhibitory activity. The quality of the models was validated by the leave-one-out (LOO) technique and by using an external data set. High agreement between experimental and predicted inhibitory activities indicated the good quality of the derived models. These results are part of the CMST COST Action No. CM1306 "Understanding Movement and Mechanism in Molecular Machines".
Keywords: antibacterial, benzimidazoles, chemometric, QSAR
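A small sketch of the LLS step with leave-one-out validation; the descriptor matrix and activity values are synthetic placeholders standing in for the molecular characteristics and measured activities.

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import LeaveOneOut, cross_val_predict
from sklearn.metrics import r2_score

rng = np.random.default_rng(2)
X = rng.normal(size=(30, 4))                 # placeholder molecular descriptors
y = X @ np.array([0.8, -0.3, 0.5, 0.1]) + 0.1 * rng.normal(size=30)   # placeholder activity

model = LinearRegression().fit(X, y)         # calibration (LLS) model
y_loo = cross_val_predict(model, X, y, cv=LeaveOneOut())
q2_loo = r2_score(y, y_loo)                  # cross-validated predictive quality
```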
Procedia PDF Downloads 316
6572 Fusion of MOLA-based DEMs and HiRISE Images for Large-Scale Mars Mapping
Authors: Ahmed F. Elaksher, Islam Omar
Abstract:
In this project, we used MOLA-based DEMs to orthorectify HiRISE optical images. The MOLA data was interpolated using the kriging interpolation technique. Corresponding tie points were then digitized from both datasets. These points were employed in co-registering both datasets using GIS analysis tools. Different transformation models, including the affine and projective transformation models, were used with different sets and distributions of tie points. Additionally, we evaluated the use of the MOLA elevations in co-registering the MOLA and HiRISE datasets. The planimetric RMSEs achieved for each model are reported. Results suggested the use of 3D-2D transformation models.
Keywords: photogrammetry, Mars, MOLA, HiRISE
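A small sketch of estimating a 2D affine transformation from digitised tie points by least squares, one of the transformation models compared above; the coordinate pairs are placeholders.

```python
import numpy as np

def fit_affine(src, dst):
    """Solve dst ~ A @ [x, y, 1] for a 2x3 affine matrix A, returning A and the RMSE."""
    ones = np.ones((src.shape[0], 1))
    G = np.hstack([src, ones])                 # (n, 3) design matrix
    A, *_ = np.linalg.lstsq(G, dst, rcond=None)
    residuals = dst - G @ A
    rmse = np.sqrt((residuals ** 2).mean())
    return A.T, rmse

mola_xy = np.array([[1020.5, 885.2], [1500.1, 910.7], [1255.3, 1402.8]])   # placeholder tie points
hirise_xy = np.array([[204.1, 176.9], [300.2, 182.0], [251.4, 280.5]])
affine, rmse = fit_affine(mola_xy, hirise_xy)
```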
Procedia PDF Downloads 77
6571 Evaluation of QSRR Models by Sum of Ranking Differences Approach: A Case Study of Prediction of Chromatographic Behavior of Pesticides
Authors: Lidija R. Jevrić, Sanja O. Podunavac-Kuzmanović, Strahinja Z. Kovačević
Abstract:
The present study deals with the selection of the most suitable quantitative structure-retention relationship (QSRR) models to be used in predicting the retention behavior of basic, neutral, acidic and phenolic pesticides belonging to different classes: fungicides, herbicides, metabolites, insecticides and plant growth regulators. The sum of ranking differences (SRD) approach can offer a different point of view on the selection of the most consistent QSRR model. The SRD approach can be applied not only to rank the QSRR models but also to detect similarity or dissimilarity among them. Applying SRD analysis, the most similar models can be found easily. In this study, selection of the best model was carried out on the basis of a reference ranking ("golden standard") defined as the row-average values of the logarithm of retention time (log tr) determined by high-performance liquid chromatography (HPLC). In addition, SRD analysis based on the experimental log tr values as the reference ranking revealed a grouping of the established QSRR models similar to that already obtained by hierarchical cluster analysis (HCA).
Keywords: chemometrics, chromatography, pesticides, sum of ranking differences
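A hedged sketch of the basic SRD computation: each model's predicted log(tr) values are ranked, the reference (row-average) ranking is formed, and absolute rank differences are summed per model; the validation of SRD values against random rankings is omitted.

```python
import numpy as np
from scipy.stats import rankdata

def srd(reference, model_columns):
    """Sum of absolute rank differences of each model column against the reference ranking."""
    ref_rank = rankdata(reference)
    return {name: np.abs(rankdata(col) - ref_rank).sum()
            for name, col in model_columns.items()}

reference = np.array([1.2, 1.8, 2.4, 2.9, 3.3])          # placeholder log(tr) averages
models = {"QSRR_1": np.array([1.1, 1.9, 2.2, 3.0, 3.2]),
          "QSRR_2": np.array([1.5, 1.2, 2.6, 2.8, 3.4])}
print(srd(reference, models))     # smaller SRD = closer to the reference ranking
```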
Procedia PDF Downloads 375
6570 Dual Language Immersion Models in Theory and Practice
Authors: S. Gordon
Abstract:
Dual language immersion is growing fast in language teaching today. This study provides an overview and evaluation of the different models of Dual language immersion programs in US K-12 schools. First, the paper provides a brief current literature review on the theory of Dual Language Immersion (DLI) in Second Language Acquisition (SLA) studies. Second, examples of several types of DLI language teaching models in US K-12 public schools are presented (including 50/50 models, 90/10 models, etc.). Third, we focus on the unique example of DLI education in the state of Utah, a successful, growing program in K-12 schools that includes: French, Chinese, Spanish, and Portuguese. The project investigates the theory and practice particularly of the case of public elementary and secondary school children that study half their school day in the L1 and the other half in the chosen L2, from kindergarten (age 5-6) through high school (age 17-18). Finally, the project takes the observations of Utah French DLI elementary through secondary programs as a case study. To conclude, we look at the principal challenges, pedagogical objectives and outcomes, and important implications for other US states and other countries (such as France currently) that are in the process of developing similar language learning programs.
Keywords: dual language immersion, second language acquisition, language teaching, pedagogy, teaching, French
Procedia PDF Downloads 175
6569 Fixed-Bed Column Studies of Green Malachite Removal by Use of Alginate-Encapsulated Aluminium Pillared Clay
Authors: Lazhar Mouloud, Chemat Zoubida, Ouhoumna Faiza
Abstract:
The main objective of this study concerns the modeling of the breakthrough curves obtained during the adsorption of malachite green onto alginate-encapsulated aluminium pillared clay in a fixed-bed column, according to various operating parameters such as the initial concentration, the feed rate and the fixed-bed height, by applying mathematical models, namely those of Bohart and Adams, Wolborska, Bed Depth Service Time, Clark and Yoon-Nelson. These models allow us to express the different parameters controlling the performance of the dynamic adsorption system. The results show that all models were suitable for describing the whole or a definite part of the dynamic behavior of the column with respect to the flow rate, the inlet dye concentration and the height of the fixed bed.
Keywords: adsorption column, malachite green, pillared clays, alginate, modeling, mathematical models, encapsulation
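A small sketch of fitting one of the listed models (Yoon-Nelson) to a breakthrough curve by non-linear least squares; the time and C/C0 values are placeholders, not the experimental data.

```python
import numpy as np
from scipy.optimize import curve_fit

def yoon_nelson(t, k_yn, tau):
    """Yoon-Nelson breakthrough model: C/C0 = 1 / (1 + exp(k_YN * (tau - t)))."""
    return 1.0 / (1.0 + np.exp(k_yn * (tau - t)))

t = np.array([10, 30, 60, 90, 120, 150, 180], dtype=float)      # time, min (placeholder)
c_ratio = np.array([0.02, 0.08, 0.25, 0.55, 0.80, 0.93, 0.98])  # C/C0 (placeholder)
(k_yn, tau), _ = curve_fit(yoon_nelson, t, c_ratio, p0=[0.05, 90])
# k_yn is the rate constant, tau the time at 50% breakthrough.
```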
Procedia PDF Downloads 508
6568 An Improvement of a Dynamic Model of the Secondary Sedimentation Tank and Field Validation
Authors: Zahir Bakiri, Saci Nacefa
Abstract:
In this paper, a comparison is made between two models, with and without a dispersion term, focusing on the characterization of the movement of the sludge blanket in the secondary sedimentation tank using solid flux theory and the settling velocity. This allowed us to develop one-dimensional models, with and without dispersion, based on a thorough experimental study carried out in situ and on online data, namely the mass load flow, the transfer concentration, and the influent characteristics. In the proposed model, the new settling velocity law (a double-exponential function) is based on the Vesilind function.
Keywords: wastewater, activated sludge, sedimentation, settling velocity, settling models
Procedia PDF Downloads 388
6567 Mapping Poverty in the Philippines: Insights from Satellite Data and Spatial Econometrics
Authors: Htet Khaing Lin
Abstract:
This study explores the relationship between a diverse set of variables, encompassing both environmental and socio-economic factors, and poverty levels in the Philippines for the years 2012, 2015, and 2018. Employing Ordinary Least Squares (OLS), Spatial Lag Models (SLM), and Spatial Error Models (SEM), this study delves into the dynamics of key indicators, including daytime and nighttime land surface temperature, cropland surface, urban land surface, rainfall, population size, normalized difference water, vegetation, and drought indices. The findings reveal consistent patterns and unexpected correlations, highlighting the need for nuanced policies that address the multifaceted challenges arising from the interplay of environmental and socio-economic factors.
Keywords: poverty analysis, OLS, spatial lag models, spatial error models, Philippines, google earth engine, satellite data, environmental dynamics, socio-economic factors
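A hedged sketch of the three specifications compared (OLS, spatial lag, spatial error) using PySAL's spreg; the shapefile, variable names and queen-contiguity weights are assumptions about how such data might be organised.

```python
import geopandas as gpd
from libpysal.weights import Queen
from spreg import OLS, ML_Lag, ML_Error

gdf = gpd.read_file("phl_municipalities.shp")          # placeholder spatial units
y = gdf[["poverty_rate"]].values
X = gdf[["night_lights", "lst_day", "ndvi", "rainfall", "population"]].values

w = Queen.from_dataframe(gdf)                          # contiguity-based spatial weights
w.transform = "r"                                      # row-standardise

ols = OLS(y, X, w=w, spat_diag=True)                   # LM diagnostics guide lag vs error choice
lag = ML_Lag(y, X, w=w)                                # spatial lag model
err = ML_Error(y, X, w=w)                              # spatial error model
print(ols.r2, lag.pr2, err.pr2)
```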
Procedia PDF Downloads 101
6566 Geopotential Models Evaluation in Algeria Using Stochastic Method, GPS/Leveling and Topographic Data
Authors: M. A. Meslem
Abstract:
For precise geoid determination, a reference field is used to subtract the long and medium wavelengths of the gravity field from observation data when the remove-compute-restore technique is applied. Therefore, a comparison study between the considered models should be made in order to select the optimal reference gravity field to be used. In this context, two recent global geopotential models have been selected to perform this comparison study over Northern Algeria: the Earth Gravitational Model (EGM2008) and the Global Gravity Model (GECO), the latter conceived as a combination of the former with the anomalous potential derived from a GOCE satellite-only global model. Free-air gravity anomalies in the area under study were used to compute residual data from both gravity field models, and a Digital Terrain Model (DTM) was used to subtract the residual terrain effect from the gravity observations. The residual data were used to generate local empirical covariance functions, which were fitted to a closed form in order to compare their statistical behavior in both cases. Finally, height anomalies were computed from both geopotential models and compared to a set of GPS-levelled points on benchmarks using least squares adjustment. The results, described in detail in this paper, point to a slight advantage of the GECO global model overall, through comparison of error degree variances and ground-truth evaluation.
Keywords: quasigeoid, gravity anomalies, covariance, GGM
Procedia PDF Downloads 137
6565 Plant Identification Using Convolution Neural Network and Vision Transformer-Based Models
Authors: Virender Singh, Mathew Rees, Simon Hampton, Sivaram Annadurai
Abstract:
Plant identification is a challenging task that aims to identify the family, genus, and species according to plant morphological features. Automated deep learning-based computer vision algorithms are widely used for identifying plants and can help users narrow down the possibilities. However, numerous morphological similarities between and within species render correct classification difficult. In this paper, we tested custom convolution neural network (CNN) and vision transformer (ViT) based models using the PyTorch framework to classify plants. We used a large dataset of 88,000 images provided by the Royal Horticultural Society (RHS) and a smaller dataset of 16,000 images from the PlantClef 2015 dataset for classifying plants at the genus and species levels, respectively. Our results show that for classifying plants at the genus level, ViT models perform better than the CNN-based models ResNet50 and ResNet-RS-420 and other state-of-the-art CNN-based models suggested in previous studies on a similar dataset. The ViT model achieved a top accuracy of 83.3% for classifying plants at the genus level. For classifying plants at the species level, ViT models again perform better than ResNet50 and ResNet-RS-420, with a top accuracy of 92.5%. We show that the correct set of augmentation techniques plays an important role in classification success. In conclusion, these results could help end users, professionals and the general public alike in identifying plants quicker and with improved accuracy.
Keywords: plant identification, CNN, image processing, vision transformer, classification
Procedia PDF Downloads 103
6564 Sensitivity and Uncertainty Analysis of One Dimensional Shape Memory Alloy Constitutive Models
Authors: A. B. M. Rezaul Islam, Ernur Karadogan
Abstract:
Shape memory alloys (SMAs) are known for their shape memory effect and pseudoelastic behavior. Their thermomechanical behavior has been modeled by numerous researchers from both microscopic thermodynamic and macroscopic phenomenological points of view. The Tanaka, Liang-Rogers and Ivshin-Pence models are some of the most popular SMA macroscopic phenomenological constitutive models. They describe SMA behavior in terms of stress, strain and temperature. These models involve material parameters that have associated uncertainty. At different operating temperatures, this uncertainty propagates to the output when the material is subjected to loading followed by unloading. The propagation of uncertainty when utilizing these models in real-life applications can result in performance discrepancies or failure at extreme conditions. To address this, a probabilistic approach was used to perform sensitivity and uncertainty analyses of the Tanaka, Liang-Rogers, and Ivshin-Pence models. The Sobol and extended Fourier Amplitude Sensitivity Testing (eFAST) methods were used to perform the sensitivity analysis for simulated isothermal loading/unloading at various operating temperatures. The results make it evident that the models vary with the change in operating temperature and loading condition. The average and stress-dependent sensitivity indices identify the most significant parameters at several temperatures. This work presents the sensitivity and uncertainty analysis results and compares them at different temperatures and loading conditions for all of these models. The analysis presented will aid in designing engineering applications by eliminating the probability of model failure due to uncertainty in the input parameters. It is therefore recommended to have a proper understanding of the sensitive parameters and the uncertainty propagation at several operating temperatures and loading conditions for the Tanaka, Liang-Rogers, and Ivshin-Pence models.
Keywords: constitutive models, FAST sensitivity analysis, sensitivity analysis, Sobol, shape memory alloy, uncertainty analysis
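A sketch of the Sobol workflow with SALib applied to a stand-in scalar output; the parameter names, bounds and the toy response function are illustrative and replace the actual SMA constitutive models.

```python
import numpy as np
from SALib.sample import saltelli
from SALib.analyze import sobol

problem = {
    "num_vars": 4,
    "names": ["E_austenite", "E_martensite", "M_s", "A_s"],   # illustrative parameters
    "bounds": [[60e9, 80e9], [20e9, 40e9], [250, 290], [290, 330]],
}
param_sets = saltelli.sample(problem, 512)        # Saltelli sampling for Sobol indices

def recovered_stress(p):
    """Placeholder scalar response standing in for the constitutive model output."""
    ea, em, ms, as_ = p
    return 0.04 * (ea + em) / 2 + 1e6 * (as_ - ms)

Y = np.array([recovered_stress(p) for p in param_sets])
Si = sobol.analyze(problem, Y)
print(Si["S1"], Si["ST"])                         # first-order and total-order indices
```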
Procedia PDF Downloads 144
6563 Measuring Environmental Efficiency of Energy in OPEC Countries
Authors: Bahram Fathi, Seyedhossein Sajadifar, Naser Khiabani
Abstract:
Data envelopment analysis (DEA) has recently gained popularity in energy efficiency analysis. A common feature of the previously proposed DEA models for measuring energy efficiency performance is that they treat energy consumption as an input within a production framework without considering undesirable outputs. However, energy use results in the generation of undesirable outputs as byproducts of producing desirable outputs. Within a joint production framework of both desirable and undesirable outputs, this paper presents several DEA-type linear programming models for measuring energy efficiency performance. In addition to considering undesirable outputs, our models treat different energy sources as different inputs so that changes in the energy mix can be accounted for in evaluating energy efficiency. The proposed models are applied to measure the energy efficiency performance of 12 OPEC countries, and the results obtained are presented.
Keywords: energy efficiency, undesirable outputs, data envelopment analysis
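A hedged sketch of a basic input-oriented CCR DEA efficiency score solved as a linear programme, with energy carriers treated as separate inputs; the undesirable-output extension used in the paper is not reproduced here, and the data are toy values.

```python
import numpy as np
from scipy.optimize import linprog

def ccr_efficiency(inputs, outputs, unit):
    """Input-oriented CCR envelopment LP: min theta s.t. the composite unit dominates `unit`."""
    n, m = inputs.shape
    s = outputs.shape[1]
    c = np.zeros(n + 1)
    c[0] = 1.0                                   # decision vars: [theta, lambda_1..lambda_n]
    A_ub, b_ub = [], []
    for i in range(m):                           # sum_j lam_j * x_ij <= theta * x_i0
        A_ub.append(np.concatenate(([-inputs[unit, i]], inputs[:, i])))
        b_ub.append(0.0)
    for r in range(s):                           # sum_j lam_j * y_rj >= y_r0
        A_ub.append(np.concatenate(([0.0], -outputs[:, r])))
        b_ub.append(-outputs[unit, r])
    res = linprog(c, A_ub=np.array(A_ub), b_ub=np.array(b_ub),
                  bounds=[(0, None)] * (n + 1), method="highs")
    return res.x[0]                              # efficiency score in (0, 1]

energy_inputs = np.array([[3.0, 1.2], [2.5, 2.0], [4.0, 0.8]])   # oil, gas per unit GDP (toy)
gdp_output = np.array([[10.0], [9.0], [12.0]])
scores = [ccr_efficiency(energy_inputs, gdp_output, k) for k in range(3)]
```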
Procedia PDF Downloads 736
6562 Enhancing Model Interoperability and Reuse by Designing and Developing a Unified Metamodel Standard
Authors: Arash Gharibi
Abstract:
Mankind has always used models to solve problems. Essentially, models are simplified versions of reality, whose need stems from having to deal with complexity; many processes or phenomena are too complex to be described completely. Thus, a fundamental model requirement is that it contains the characteristic features that are essential in the context of the problem to be solved or described. Models are used in virtually every scientific domain to deal with various problems. During recent decades, the number of models has increased exponentially. Publication of models as part of original research has traditionally been in scientific periodicals, series, monographs, agency reports, national journals and laboratory reports. This makes it difficult for interested groups and communities to stay informed about the state of the art. During the modeling process, many important decisions are made which impact the final form of the model. Without a record of these considerations, the final model remains ill-defined and open to varying interpretations. Unfortunately, the details of these considerations are often lost, or where information about a model does exist, it is likely to be written intuitively in different layouts and with different degrees of detail. In order to overcome these issues, different domains have attempted to implement their own approaches to preserving model information in the form of model documentation. The most frequently cited model documentation approaches show that they are domain-specific, are not applicable to existing models, and do not allow evolutionary flexibility or intrinsic corrections and improvements. These issues all stem from a lack of unified standards for model documentation. As a way forward, this research proposes a new standard for capturing and managing model information in a unified way so that interoperability and reusability of models become possible. This standard will also be evolutionary, meaning members of the modeling realm could contribute to its ongoing development and improvement. In this paper, three of the most common current metamodels are reviewed and, based on the pros and cons of each, a new metamodel is proposed.
Keywords: metamodel, modeling, interoperability, reuse
Procedia PDF Downloads 198
6561 Implied Adjusted Volatility by Leland Option Pricing Models: Evidence from Australian Index Options
Authors: Mimi Hafizah Abdullah, Hanani Farhah Harun, Nik Ruzni Nik Idris
Abstract:
With implied volatility being an important factor in financial decision-making, in particular in option pricing valuation, and given that the pricing biases of Leland option pricing models and the implied volatility structure of the options are related, this study examines the implied adjusted volatility smile patterns and term structures in S&P/ASX 200 index options using different Leland option pricing models. The examination of the implied adjusted volatility smiles and term structures in the Australian index options market covers the global financial crisis of mid-2007. The implied adjusted volatility was found to escalate to approximately triple its pre-crisis rate.
Keywords: implied adjusted volatility, financial crisis, Leland option pricing models, Australian index options
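A hedged sketch of backing out implied volatility by root-finding on the Black-Scholes call price, together with one common statement of Leland's transaction-cost volatility adjustment; treat the adjustment formula and all numerical inputs as assumptions to be checked against the original sources.

```python
import numpy as np
from scipy.optimize import brentq
from scipy.stats import norm

def bs_call(S, K, T, r, sigma):
    """Black-Scholes European call price."""
    d1 = (np.log(S / K) + (r + 0.5 * sigma ** 2) * T) / (sigma * np.sqrt(T))
    d2 = d1 - sigma * np.sqrt(T)
    return S * norm.cdf(d1) - K * np.exp(-r * T) * norm.cdf(d2)

def implied_vol(price, S, K, T, r):
    """Invert the call price for sigma by Brent root-finding."""
    return brentq(lambda s: bs_call(S, K, T, r, s) - price, 1e-4, 5.0)

def leland_adjusted_vol(sigma, k, dt):
    """Assumed form: sigma_L^2 = sigma^2 * (1 + sqrt(2/pi) * k / (sigma * sqrt(dt)))."""
    return sigma * np.sqrt(1.0 + np.sqrt(2.0 / np.pi) * k / (sigma * np.sqrt(dt)))

iv = implied_vol(price=35.0, S=5000.0, K=5100.0, T=0.25, r=0.04)   # illustrative quote
iv_adj = leland_adjusted_vol(iv, k=0.005, dt=1 / 52)               # weekly rehedging, 0.5% cost
```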
Procedia PDF Downloads 379
6560 Evaluation of Environmental, Technical, and Economic Indicators of a Fused Deposition Modeling Process
Authors: M. Yosofi, S. Ezeddini, A. Ollivier, V. Lavaste, C. Mayousse
Abstract:
Additive manufacturing processes have changed significantly across a wide range of industries, and their application has progressed from rapid prototyping to the production of end-use products. However, their environmental impact is still a rather open question. In order to support the growth of this technology in the industrial sector, environmental aspects should be considered, and predictive models may help monitor and reduce the environmental footprint of the processes. This work presents predictive models based on a previously developed methodology for environmental impact evaluation, combined with a technical and economic assessment. Here we apply the methodology to the Fused Deposition Modeling process. First, we present the predictive models for different types of machines. Then, we present a decision-making tool designed to identify the optimum manufacturing strategy with regard to technical, economic, and environmental criteria.
Keywords: additive manufacturing, decision-making, environmental impact, predictive models
Procedia PDF Downloads 131
6558 Genetic Identification of Crop Cultivars Using Barcode System
Authors: Kesavan Markkandan, Ha Young Park, Seung-Il Yoo, Sin-Gi Park, Junhyung Park
Abstract:
For the genetic identification of crop cultivars, insertion/deletion (InDel) markers are currently preferred because they are easy to use, PCR-based, co-dominant and relatively abundant. However, new InDels need to be developed for genetic studies of new varieties because allele frequencies at InDel loci differ among population groups; these new varieties evolve with low levels of genetic diversity at specific genomic loci with high recombination rates. In this study, we describe a soybean barcode system approach based on InDel markers, each of which is specific to a variation block (VB), the segments into which the genome is split at all assumed recombination sites. Firstly, VBs in crop cultivars were mined for transferability to VB-specific InDel markers. Secondly, putative InDels in the VB regions were identified for the development of the barcode system by analyzing a particular cultivar's whole-genome data. Thirdly, common VB-specific InDels from all cultivars were selected by gel electrophoresis and converted into 2D barcode types by comparing amplicon polymorphisms in the five cultivars to the reference cultivar. Finally, the polymorphism of the selected markers was assessed with other cultivars, and a barcode system that allows a clear distinction among those cultivars is described. The same approach is applicable to other commercial crops. Hence, VB-based genetic identification not only minimizes the number of molecular markers required but is also useful for assessing cultivars and for marker-assisted breeding in other crop species.
Keywords: variation block, polymorphism, InDel marker, genetic identification
Procedia PDF Downloads 380