Search results for: watershed models
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 6694

6034 Reclamation of Mining Using Vegetation - A Comparative Study of Open Pit Mining

Authors: G. Surendra Babu

Abstract:

Mineral wealth buried in the layers of the earth is an essential natural resource, supplying the energy and materials used in everyday life such as fuel, electricity and construction. The extraction process, however, causes damage to nature that cannot be reversed: the areas left over after the completion of mining remain barren, degraded and unused land for decades. Most of these sites were covered with vegetation before mining began; mining destroys the native vegetation of the region, disturbs the watershed boundaries and degrades the biodiversity of the region. The main aims of this study are to understand the various issues found at such sites, to review the reclamation methods suitable for revegetation as practiced in different case studies, to examine government guidelines and lease-licensing procedures including environmental clearances, and to study vegetation patterns in relation to the major issues identified. Finally, new guidelines are suggested, with reference to the existing ones, to support the revegetation of mine sites so that they can establish their own sustainable ecosystems in the future.

Keywords: reclamation, open-pit mining, revegetation, reclamation methods

Procedia PDF Downloads 168
6033 Modeling Default Probabilities of the Chosen Czech Banks in the Time of the Financial Crisis

Authors: Petr Gurný

Abstract:

One of the most important tasks in risk management is the correct determination of the probability of default (PD) of particular financial subjects. In this paper, the possibility of determining a financial institution’s PD by means of credit-scoring models is discussed. The paper is divided into two parts. The first part is devoted to the estimation of three different models (based on linear discriminant analysis, logit regression and probit regression) from a sample of almost three hundred US commercial banks. Afterwards, these models are compared and verified on a control sample with the aim of choosing the best one. The second part of the paper applies the chosen model to the portfolio of three key Czech banks to estimate their present financial stability. However, it is no less important to be able to estimate the evolution of PD in the future. For this reason, the second task in this paper is to estimate the probability distribution of future PD for the Czech banks. To this end, values of the particular indicators are sampled randomly and the distribution of PDs is estimated, under the assumption that the indicators are distributed according to a multidimensional subordinated Lévy model (the Variance Gamma model and the Normal Inverse Gaussian model, in particular). Although the obtained results show that all the banks are relatively healthy, there is still a considerable chance that “a financial crisis” will occur, at least in terms of probability. This is indicated by the estimation of various quantiles of the estimated distributions. Finally, it should be noted that the applicability of the estimated model (with respect to the data used) is limited to the recessionary phase of the financial market.
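
As a minimal, hypothetical sketch of the two-step workflow described above, the snippet below fits a logit scoring model on synthetic bank indicators and then samples indicator values to obtain a distribution of future PDs; the indicator names, coefficients and the Gaussian sampler are illustrative stand-ins (the paper samples indicators from subordinated Lévy models instead).

```python
# Hypothetical sketch: (1) fit a logit scoring model on bank indicators,
# (2) propagate sampled indicator values to a PD distribution.
# Data, feature meanings and the Gaussian sampler are illustrative stand-ins.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Synthetic training sample: capital ratio, ROA, NPL ratio -> default flag
n = 300
X = rng.normal(loc=[0.12, 0.01, 0.04], scale=[0.03, 0.01, 0.02], size=(n, 3))
logit = 1.0 - 30.0 * X[:, 0] - 80.0 * X[:, 1] + 40.0 * X[:, 2]
y = rng.binomial(1, 1.0 / (1.0 + np.exp(-logit)))

model = LogisticRegression(max_iter=1000).fit(X, y)

# Step 2: sample future indicator values for one bank and estimate the PD distribution
mean = np.array([0.11, 0.008, 0.05])
cov = np.diag([0.03, 0.01, 0.02]) ** 2
draws = rng.multivariate_normal(mean, cov, size=10_000)
pd_dist = model.predict_proba(draws)[:, 1]

print("median PD:", np.median(pd_dist))
print("95% quantile of PD:", np.quantile(pd_dist, 0.95))
```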

Keywords: credit-scoring models, multidimensional subordinated Lévy model, probability of default

Procedia PDF Downloads 438
6032 Real Estate Trend Prediction with Artificial Intelligence Techniques

Authors: Sophia Liang Zhou

Abstract:

For investors, businesses, consumers, and governments, an accurate assessment of future housing prices is crucial to critical decisions in resource allocation, policy formation, and investment strategies. Previous studies are contradictory about the macroeconomic determinants of housing prices and largely focused on one or two areas using point prediction. This study aims to develop data-driven models to accurately predict future housing market trends in different markets. This work studied five metropolitan areas representing different market trends and compared three time-lag settings: no lag, a 6-month lag, and a 12-month lag. Linear regression (LR), random forest (RF), and artificial neural network (ANN) models were employed to model the real estate price using datasets with the S&P/Case-Shiller home price index and 12 demographic and macroeconomic features, such as gross domestic product (GDP), resident population, and personal income, in five metropolitan areas: Boston, Dallas, New York, Chicago, and San Francisco. The data from March 2005 to December 2018 were collected from the Federal Reserve Bank, the FBI, and Freddie Mac. In the original data, some factors are monthly, some quarterly, and some yearly; thus, two methods of compensating for missing values, backfilling and interpolation, were compared. The models were evaluated by accuracy, mean absolute error, and root mean square error. The LR and ANN models outperformed the RF model due to RF’s inherent limitations. Both the ANN and LR methods generated predictive models with high accuracy (> 95%). It was found that personal income, GDP, population, and measures of debt consistently appeared as the most important factors. It was also shown that the technique used to compensate for missing values in the dataset and the implementation of a time lag can have a significant influence on model performance and require further investigation. The best performing models varied for each area, but the backfilled 12-month lag LR models and the interpolated no-lag ANN models showed the most stable performance overall, with accuracies > 95% for each city. This study reveals the influence of input variables in different markets. It also provides evidence to support future studies in identifying the optimal time lag and data imputation methods for establishing accurate predictive models.
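
The sketch below illustrates the comparison under a 12-month lag, with linear regression, random forest and a small neural network scored by MAE and RMSE; the synthetic data frame is only a placeholder for the S&P/Case-Shiller index and the macroeconomic series used in the study.

```python
# Illustrative model comparison with a 12-month lag: features at t-12 predict the
# home price index at t. The synthetic frame stands in for the real datasets.
import numpy as np
import pandas as pd
from sklearn.linear_model import LinearRegression
from sklearn.ensemble import RandomForestRegressor
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.metrics import mean_absolute_error, mean_squared_error

rng = np.random.default_rng(1)
t = pd.date_range("2005-03-01", "2018-12-01", freq="MS")
df = pd.DataFrame({
    "gdp": np.linspace(100, 160, len(t)) + rng.normal(0, 1, len(t)),
    "income": np.linspace(50, 90, len(t)) + rng.normal(0, 1, len(t)),
    "population": np.linspace(10, 12, len(t)),
}, index=t)
df["hpi"] = 0.8 * df["gdp"] + 0.5 * df["income"] + rng.normal(0, 2, len(t))

lag = 12
X = df[["gdp", "income", "population"]].shift(lag).dropna()
y = df["hpi"].loc[X.index]
split = int(0.8 * len(X))
X_tr, X_te, y_tr, y_te = X[:split], X[split:], y[:split], y[split:]

models = [
    ("LR", LinearRegression()),
    ("RF", RandomForestRegressor(random_state=0)),
    ("ANN", make_pipeline(StandardScaler(),
                          MLPRegressor(hidden_layer_sizes=(32,), max_iter=5000,
                                       random_state=0))),
]
for name, model in models:
    pred = model.fit(X_tr, y_tr).predict(X_te)
    print(name, "MAE:", mean_absolute_error(y_te, pred),
          "RMSE:", mean_squared_error(y_te, pred) ** 0.5)
```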

Keywords: linear regression, random forest, artificial neural network, real estate price prediction

Procedia PDF Downloads 86
6031 Simulation to Detect Virtual Fractional Flow Reserve in Coronary Artery Idealized Models

Authors: Nabila Jaman, K. E. Hoque, S. Sawall, M. Ferdows

Abstract:

Coronary artery disease (CAD) is one of the most lethal cardiovascular diseases. Coronary artery stenosis and bifurcation angles interact closely in myocardial infarction. We use a computer-aided design model coupled with computational hemodynamics (CHD) simulation to detect several types of coronary artery stenosis at different locations in an idealized model and to identify the virtual fractional flow reserve (vFFR). The vFFR provides information about the severity of stenosis in the computational models. Another goal is to imitate a patient-specific computed tomography coronary angiography model when constructing our idealized models with different left anterior descending (LAD) and left circumflex (LCx) bifurcation angles. Further, we analyze whether the bifurcation angles have an impact on the development of narrowing in the coronary arteries. The numerical simulation provides CHD parameters such as wall shear stress (WSS), velocity magnitude and pressure gradient (PGD) that give information on the stenosis condition in the computational domain.
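
As context, vFFR is commonly taken as the ratio of cycle-averaged pressure distal to the stenosis to the proximal (aortic) pressure; the minimal sketch below assumes that definition, uses hypothetical pressure values in place of CHD simulation output, and applies the conventional 0.80 significance cutoff.

```python
# Minimal sketch of the vFFR post-processing step, assuming vFFR is the ratio of
# mean distal pressure to mean proximal (aortic) pressure. The pressure values are
# hypothetical placeholders for CHD simulation output.
import numpy as np

def virtual_ffr(p_proximal_pa, p_distal_pa):
    """vFFR = mean distal pressure / mean proximal pressure (dimensionless)."""
    return np.mean(p_distal_pa) / np.mean(p_proximal_pa)

# Example: cycle-averaged pressures (Pa) extracted upstream/downstream of a stenosis
p_prox = np.array([12900.0, 13100.0, 13000.0])
p_dist = np.array([9400.0, 9600.0, 9500.0])

vffr = virtual_ffr(p_prox, p_dist)
print(f"vFFR = {vffr:.2f}")  # ~0.73 for these placeholder values
print("hemodynamically significant" if vffr < 0.80 else "not significant")
```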

Keywords: CAD, CHD, vFFR, bifurcation angles, coronary stenosis

Procedia PDF Downloads 142
6030 ‘Non-Legitimate’ Voices as L2 Models: Towards Becoming a Legitimate L2 Speaker

Authors: M. Rilliard

Abstract:

Based on a Multiliteracies-inspired and sociolinguistically informed advanced French composition class, this study employed autobiographical narratives from speakers traditionally considered non-legitimate models for L2 teaching, with the aim of inspiring students to develop an authentic L2 voice and to see themselves as legitimate L2 speakers. Students explored their L2 identities in French through a self-inspired fictional character. Two autobiographical narratives of identity quest by non-traditional French speakers guided them through this process: the novel Le Bleu des Abeilles (2013) and the film Qu’Allah Bénisse la France (2014). Written and oral productions in French across different genres, as well as metalinguistic reflections in English, were collected and analyzed. Results indicate that ideas and materials that were relatable to students, namely relatable experiences and relatable language, were most useful to them in developing their L2 voices and achieving authentic and legitimate L2 speakership. These results point towards the benefits of using non-traditional speakers as pedagogical models, as they serve to legitimize students’ sense of their own L2 speakership, which ultimately leads them towards a better, more informed mastery of the language.

Keywords: foreign language classroom, L2 identity, L2 learning and teaching, L2 writing, sociolinguistics

Procedia PDF Downloads 117
6029 Statistical Time-Series and Neural Architecture of Malaria Patients Records in Lagos, Nigeria

Authors: Akinbo Razak Yinka, Adesanya Kehinde Kazeem, Oladokun Oluwagbenga Peter

Abstract:

Time series data are sequences of observations collected over a period of time. Such data can be used to predict health outcomes, such as disease progression, mortality, hospitalization, etc. The statistical approach is based on mathematical models that capture the patterns and trends of the data, such as autocorrelation, seasonality, and noise, while neural methods are based on artificial neural networks, which are computational models that mimic the structure and function of biological neurons. This paper compared parametric and non-parametric time series models of patients treated for malaria in Maternal and Child Health Centres in Lagos State, Nigeria. The forecasting methods considered were linear regression, integrated moving average, ARIMA and SARIMA modelling for the parametric approach, while the multilayer perceptron (MLP) and long short-term memory (LSTM) networks were used for the non-parametric models. The performance of each method was evaluated using the mean absolute error (MAE), R-squared (R2) and root mean square error (RMSE) as criteria to determine the accuracy of each model. The study revealed that the best performance in terms of error was found in the MLP, followed by the LSTM and ARIMA models. In addition, the bootstrap aggregating technique was used to make robust forecasts when there are uncertainties in the data.
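
A compact sketch of the parametric versus non-parametric comparison is given below: an ARIMA model and an MLP on lagged values forecast a synthetic monthly count series and are scored with MAE and RMSE; the seasonal series, ARIMA order and network size are illustrative assumptions, not the study’s actual data or settings.

```python
# Illustrative comparison of a parametric (ARIMA) and a non-parametric (MLP)
# forecaster on a monthly count series, scored with MAE and RMSE.
import numpy as np
from statsmodels.tsa.arima.model import ARIMA
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.metrics import mean_absolute_error, mean_squared_error

rng = np.random.default_rng(2)
n = 120
series = 200 + 30 * np.sin(2 * np.pi * np.arange(n) / 12) + rng.normal(0, 10, n)
train, test = series[:-12], series[-12:]

# ARIMA(2,1,1) as a representative parametric model, forecasting 12 months ahead
arima_pred = ARIMA(train, order=(2, 1, 1)).fit().forecast(steps=12)

# MLP on 12 lagged values as a representative non-parametric model (one-step-ahead)
lags = 12
X = np.array([series[i:i + lags] for i in range(n - lags)])
y = series[lags:]
mlp = make_pipeline(StandardScaler(),
                    MLPRegressor(hidden_layer_sizes=(32,), max_iter=5000,
                                 random_state=0)).fit(X[:-12], y[:-12])
mlp_pred = mlp.predict(X[-12:])

for name, pred in [("ARIMA", arima_pred), ("MLP", mlp_pred)]:
    print(name, "MAE:", mean_absolute_error(test, pred),
          "RMSE:", mean_squared_error(test, pred) ** 0.5)
```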

Keywords: ARIMA, bootstrap aggregation, MLP, LSTM, SARIMA, time-series analysis

Procedia PDF Downloads 58
6028 Impact of Climate Change on Flow Regime in Himalayan Basins, Nepal

Authors: Tirtha Raj Adhikari, Lochan Prasad Devkota

Abstract:

This research studied the hydrological regime of three glacierized river basins in the Khumbu, Langtang and Annapurna regions of Nepal using the Hydrologiska Byråns Vattenbalansavdelning (HBV) model, HBV-light 3.0. Future discharge scenarios are also studied using downscaled climate data derived from a statistical downscaling method. General Circulation Models (GCMs) successfully simulate future climate variability and climate change on a global scale; however, poor spatial resolution constrains their application to impact studies at a regional or local level. The dynamically downscaled precipitation and temperature data from the Coupled Global Circulation Model 3 (CGCM3) were used for the climate projection under the A2 and A1B SRES scenarios. In addition, observed historical temperature, precipitation and discharge data were collected from 14 different hydro-meteorological locations for the implementation of this study, which includes watershed and hydro-meteorological characterization, trend analysis and water balance computation. The simulated precipitation and temperature were corrected for bias before being implemented in the HBV-light 3.0 conceptual rainfall-runoff model to predict the flow regime, in which the GAP optimization approach and subsequent calibration were used to obtain several parameter sets that finally reproduced the observed streamflow. The analysis showed increasing trends in annual as well as seasonal precipitation (except in summer) during the period 2001-2060 for both the A2 and A1B scenarios over the three basins under investigation. In these river basins, the model projected warmer days in every season over the entire period from 2001 to 2060 for both the A1B and A2 scenarios. These warming trends are higher for maximum than for minimum temperatures throughout the year, indicating an increasing trend in the daily temperature range due to the recent global warming phenomenon. Furthermore, there are decreasing trends in summer discharge in the Langtang Khola (Langtang region), whereas discharge is increasing in the Modi Khola (Annapurna region) as well as the Dudh Koshi (Khumbu region) river basins. The change in flow regime is more pronounced during the later parts of the future decades than during the earlier parts in all basins. Annual water surpluses of 1419 mm, 177 mm and 49 mm are observed in the Annapurna, Langtang and Khumbu regions, respectively.
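
The abstract does not name the bias-correction method, so the sketch below shows one common, simple option, monthly linear scaling (multiplicative factors for precipitation, additive offsets for temperature), applied to hypothetical daily series.

```python
# Minimal linear-scaling bias correction sketch: multiplicative monthly factors for
# precipitation, additive monthly offsets for temperature. Only an illustrative choice;
# the study's actual correction method is not stated in the abstract.
import numpy as np
import pandas as pd

def linear_scaling(obs, sim_hist, sim_fut, kind="precip"):
    """Correct a simulated future series using monthly factors from the historical period."""
    obs_m = obs.groupby(obs.index.month).mean()
    sim_m = sim_hist.groupby(sim_hist.index.month).mean()
    months = sim_fut.index.month
    if kind == "precip":
        factor = (obs_m / sim_m).reindex(months).to_numpy()
        return sim_fut * factor
    offset = (obs_m - sim_m).reindex(months).to_numpy()
    return sim_fut + offset

# Hypothetical daily precipitation series (mm/day) for a Himalayan basin
idx_h = pd.date_range("1981-01-01", "2000-12-31", freq="D")
idx_f = pd.date_range("2001-01-01", "2060-12-31", freq="D")
rng = np.random.default_rng(3)
obs_p = pd.Series(rng.gamma(2.0, 2.0, len(idx_h)), index=idx_h)
sim_p_hist = pd.Series(rng.gamma(2.0, 2.5, len(idx_h)), index=idx_h)  # wet bias
sim_p_fut = pd.Series(rng.gamma(2.0, 2.6, len(idx_f)), index=idx_f)

corrected = linear_scaling(obs_p, sim_p_hist, sim_p_fut, kind="precip")
print("raw future mean:", sim_p_fut.mean(), "corrected mean:", corrected.mean())
```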

Keywords: temperature, precipitation, water discharge, water balance, global warming

Procedia PDF Downloads 323
6027 Geometric Simplification Method of Building Energy Model Based on Building Performance Simulation

Authors: Yan Lyu, Yiqun Pan, Zhizhong Huang

Abstract:

In the design stage of a new building, an energy model of the building is often required to analyse its energy-efficiency performance. In practice, a certain degree of geometric simplification has to be made when establishing building energy models, since the detailed geometric features of a real building are hard to describe perfectly in most energy simulation engines, such as ESP-r, eQuest or EnergyPlus. In fact, a detailed description is not necessary when extremely high accuracy is not demanded. Therefore, this paper analysed the relationship between the error of the simulation results from building energy models and the geometric simplification of the models. Two parameters are selected as indices to characterize the geometric features in building energy simulation: the southward projected area and the total side surface area of the building. Based on this parameterization, an arbitrary column-shaped building can be simplified to a building of typical shape (a cuboid) for energy modeling. The results of this study indicate that the simplification leads to an error of less than 7% for buildings whose ratio of southward projection length to total bottom perimeter lies within 0.25-0.35, which covers most situations.
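
One straightforward reading of this simplification, sketched below, keeps the two indices fixed: for a given height, the southward projected area determines the cuboid width and the total side surface area then determines its depth; the exact mapping used in the paper may differ, and the example dimensions are hypothetical.

```python
# Sketch of reducing an arbitrary column-shaped building to an equivalent cuboid while
# preserving the two indices named in the abstract: the southward projected area and the
# total side (vertical wall) surface area. One interpretation only; the paper's mapping may differ.
def equivalent_cuboid(southward_projected_area, total_side_area, height):
    """Return (width, depth) of a cuboid with the same two geometric indices."""
    width = southward_projected_area / height          # south facade: W * H
    depth = total_side_area / (2.0 * height) - width   # walls: 2 * (W + D) * H
    if depth <= 0:
        raise ValueError("side area too small for the given projected area")
    return width, depth

# Hypothetical column building: 30 m high, 600 m2 south projection, 3000 m2 of walls
w, d = equivalent_cuboid(southward_projected_area=600.0, total_side_area=3000.0, height=30.0)
print(f"equivalent cuboid: {w:.1f} m (E-W) x {d:.1f} m (N-S) x 30 m")
```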

Keywords: building energy model, simulation, geometric simplification, design, regression

Procedia PDF Downloads 164
6026 On Hyperbolic Gompertz Growth Model (HGGM)

Authors: S. O. Oyamakin, A. U. Chukwu

Abstract:

We propose a hyperbolic Gompertz growth model (HGGM), developed by introducing a stabilizing parameter, θ, through the hyperbolic sine function into the classical Gompertz growth equation. The resulting integral solution, obtained deterministically, was reprogrammed into a statistical model and used to model the height and diameter of pines (Pinus caribaea). Its predictive ability was compared with that of the classical Gompertz growth model using goodness-of-fit tests and model selection criteria; the proposed approach mimics the natural variability of height/diameter increment with respect to age and therefore provides more realistic height/diameter predictions. The Kolmogorov-Smirnov and Shapiro-Wilk tests were used to check the compliance of the error term with the normality assumption, while the runs test was used to test the independence of the error term. The mean function of top height/Dbh over age under the two models predicted the observed values of top height/Dbh more closely for the hyperbolic Gompertz growth model than for the source model (the classical Gompertz growth model), while the results of R2, adjusted R2, MSE, and AIC confirmed the predictive power of the hyperbolic Gompertz growth model over its source model.
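
For illustration, the snippet below fits the classical Gompertz curve and a modified variant to hypothetical height-age data and compares them by AIC; since the abstract does not state the exact HGGM functional form, the additive θ·arcsinh(t) term used here is only an assumed stand-in for the hyperbolic-sine modification.

```python
# Illustrative fit of the classical Gompertz curve and an assumed hyperbolic variant.
# The theta * arcsinh(t) term is NOT the paper's HGGM form, just a stand-in for illustration.
import numpy as np
from scipy.optimize import curve_fit

def gompertz(t, a, b, c):
    return a * np.exp(-b * np.exp(-c * t))

def hyperbolic_variant(t, a, b, c, theta):
    return a * np.exp(-b * np.exp(-c * t)) + theta * np.arcsinh(t)

def aic(y, y_hat, n_params):
    rss = np.sum((y - y_hat) ** 2)
    n = len(y)
    return n * np.log(rss / n) + 2 * n_params

# Hypothetical Pinus caribaea top-height observations (age in years, height in m)
age = np.array([2, 4, 6, 8, 10, 12, 14, 16, 18, 20], dtype=float)
height = np.array([1.8, 4.5, 7.9, 11.0, 13.6, 15.5, 17.0, 18.1, 18.9, 19.4])

p_g, _ = curve_fit(gompertz, age, height, p0=[22, 3, 0.2], maxfev=10000)
p_h, _ = curve_fit(hyperbolic_variant, age, height, p0=[22, 3, 0.2, 0.1], maxfev=10000)

print("Gompertz AIC:", aic(height, gompertz(age, *p_g), 3))
print("Hyperbolic variant AIC:", aic(height, hyperbolic_variant(age, *p_h), 4))
```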

Keywords: height, Dbh, forest, Pinus caribaea, hyperbolic, Gompertz

Procedia PDF Downloads 426
6025 Modelling Volatility of Cryptocurrencies: Evidence from GARCH Family of Models with Skewed Error Innovation Distributions

Authors: Timothy Kayode Samson, Adedoyin Isola Lawal

Abstract:

The past five years have shown a sharp increase in public interest in the crypto market, with its market capitalization growing from $100 billion in June 2017 to $2158.42 billion on April 5, 2022. Despite the outrageous volatility of cryptocurrencies, the use of skewed error innovation distributions in modelling the volatility behaviour of these digital currencies has not been given much research attention. Hence, this study models the volatility of the 5 largest cryptocurrencies by market capitalization (Bitcoin, Ethereum, Tether, Binance coin, and USD Coin) using four variants of GARCH models (GJR-GARCH, sGARCH, EGARCH, and APARCH) estimated with three skewed error innovation distributions (skewed normal, skewed Student-t and skewed generalized error distributions). Daily closing prices of these currencies were obtained from the Yahoo Finance website. The findings reveal that Binance coin reported higher mean returns than the other digital currencies, while the skewness indicates that Binance coin, Tether, and USD Coin increased more than they decreased in value within the period of study. For both Bitcoin and Ethereum, negative skewness was obtained, meaning that within the period of study, the returns of these currencies decreased more than they increased in value. Returns from these cryptocurrencies were found to be stationary but not normally distributed, with evidence of the ARCH effect. The skewness parameters in all the best forecasting models were significant (p<.05), justifying the use of skewed error innovation distributions, which have fatter tails than the normal, Student-t, and generalized error innovation distributions. For Binance coin, EGARCH-sstd outperformed the other volatility models, while for Bitcoin, Ethereum, Tether, and USD Coin, the best forecasting models were EGARCH-sstd, APARCH-sstd, EGARCH-sged, and GJR-GARCH-sstd, respectively. This suggests the superiority of the skewed Student-t distribution and the skewed generalized error distribution over the skewed normal distribution.
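
A minimal sketch of one of these fits, using the Python `arch` package, is shown below: an EGARCH(1,1) model with skewed Student-t innovations estimated on percentage log returns; the CSV path and column name are placeholders for the Yahoo Finance closing-price series used in the study.

```python
# Minimal sketch: fit an EGARCH model with skewed Student-t errors to daily log returns.
# "btc_close.csv" and the "Close" column are placeholders for the downloaded price data.
import numpy as np
import pandas as pd
from arch import arch_model

prices = pd.read_csv("btc_close.csv", index_col=0, parse_dates=True)["Close"]
returns = 100 * np.log(prices).diff().dropna()   # percentage log returns

# EGARCH volatility with an asymmetry (o) term and skewed Student-t innovations
model = arch_model(returns, vol="EGARCH", p=1, o=1, q=1, dist="skewt")
result = model.fit(disp="off")

print(result.summary())
```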

Keywords: skewed generalized error distribution, skewed normal distribution, skewed Student-t distribution, APARCH, EGARCH, sGARCH, GJR-GARCH

Procedia PDF Downloads 81
6024 Self-Supervised Pretraining on Sequences of Functional Magnetic Resonance Imaging Data for Transfer Learning to Brain Decoding Tasks

Authors: Sean Paulsen, Michael Casey

Abstract:

In this work we present a self-supervised pretraining framework for transformers on functional Magnetic Resonance Imaging (fMRI) data. First, we pretrain our architecture on two self-supervised tasks simultaneously to teach the model a general understanding of the temporal and spatial dynamics of human auditory cortex during music listening. Our pretraining results are the first to suggest a synergistic effect of multitask training on fMRI data. Second, we finetune the pretrained models and train additional fresh models on a supervised fMRI classification task. We observe significantly improved accuracy on held-out runs with the finetuned models, which demonstrates the ability of our pretraining tasks to facilitate transfer learning. This work contributes to the growing body of literature on transformer architectures for pretraining and transfer learning with fMRI data, and serves as a proof of concept for our pretraining tasks and multitask pretraining on fMRI data.

Keywords: transfer learning, fMRI, self-supervised, brain decoding, transformer, multitask training

Procedia PDF Downloads 70
6023 Neural Network Models for Actual Cost and Actual Duration Estimation in Construction Projects: Findings from Greece

Authors: Panagiotis Karadimos, Leonidas Anthopoulos

Abstract:

Predicting the actual cost and duration of construction projects is a continuing problem for the construction sector. This paper addresses this problem with modern methods and data available from past public construction projects. Thirty-nine bridge projects constructed in Greece, with similar types of available data, were examined. Considering each project’s attributes together with the actual cost and the actual duration, correlation analysis was performed and the most appropriate predictive project variables were defined. Additionally, the most efficient subgroup of variables was selected with the use of the WEKA application, through its attribute selection function. The selected variables were used as input neurons for neural network models, which were constructed with the FANN Tool application. The optimum neural network model for predicting the actual cost produced a mean squared error of 3.84886e-05 and was based on the budgeted cost and the quantity of deck concrete. The optimum neural network model for predicting the actual duration produced a mean squared error of 5.89463e-05 and was also based on the budgeted cost and the quantity of deck concrete.
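
The sketch below reproduces the core regression step in spirit: a small feed-forward network maps the two selected inputs (budgeted cost and deck concrete quantity) to actual cost; scikit-learn’s MLPRegressor stands in for the FANN Tool, and the 39 project records are synthetic.

```python
# Sketch of the core regression step. sklearn's MLPRegressor stands in for FANN Tool,
# and the project data below are synthetic, not the Greek bridge records.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import MinMaxScaler
from sklearn.metrics import mean_squared_error

rng = np.random.default_rng(4)
n = 39                                             # 39 bridge projects in the study
budget = rng.uniform(1.0, 10.0, n)                 # budgeted cost (million EUR)
deck_concrete = rng.uniform(200.0, 2000.0, n)      # deck concrete quantity (m3)
actual = 1.1 * budget + 0.0004 * deck_concrete + rng.normal(0, 0.2, n)

X = np.column_stack([budget, deck_concrete])
X_scaled = MinMaxScaler().fit_transform(X)         # scale inputs to [0, 1]
y_scaled = MinMaxScaler().fit_transform(actual.reshape(-1, 1)).ravel()

net = MLPRegressor(hidden_layer_sizes=(5,), max_iter=20000, random_state=0)
net.fit(X_scaled[:30], y_scaled[:30])              # simple train/test split
pred = net.predict(X_scaled[30:])
print("test MSE (scaled units):", mean_squared_error(y_scaled[30:], pred))
```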

Keywords: actual cost and duration, attribute selection, bridge construction, neural networks, predicting models, FANN TOOL, WEKA

Procedia PDF Downloads 117
6022 A Numerical Study on the Influence of CO2 Dilution on Combustion Characteristics of a Turbulent Diffusion Flame

Authors: Yasaman Tohidi, Rouzbeh Riazi, Shidvash Vakilipour, Masoud Mohammadi

Abstract:

The objective of the present study is to numerically investigate the effect of replacing N2 with CO2 in the air stream on the flame characteristics of a turbulent CH4 diffusion flame. The open-source Field Operation and Manipulation (OpenFOAM) package has been used as the computational tool. In this regard, the laminar flamelet and modified k-ε models have been utilized as the combustion and turbulence models, respectively. Results reveal that the presence of CO2 in the air stream changes the flame shape and maximum flame temperature. Also, CO2 dilution causes an increment in the CO mass fraction.

Keywords: CH4 diffusion flame, CO2 dilution, OpenFOAM, turbulent flame

Procedia PDF Downloads 259
6021 Effect of Soil Corrosion in Failures of Buried Gas Pipelines

Authors: Saima Ali, Pathamanathan Rajeev, Imteaz A. Monzur

Abstract:

In this paper, a brief review of the corrosion mechanism in buried pipes and the modes of failure is provided, together with the available corrosion models. Moreover, a sensitivity analysis is performed to understand the influence of the corrosion model parameters on the remaining-life estimation. Further, a probabilistic analysis is performed to propagate the uncertainty in the corrosion model to the estimation of the remaining life of the pipe. Finally, a comparison among the corrosion models on the basis of the remaining-life estimation is provided to improve the renewal plan.
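
As an illustration of the probabilistic remaining-life step, the sketch below uses the widely applied power-law pit growth model d(t) = k·t^n and Monte Carlo sampling of its parameters; the parameter distributions, wall thickness and failure criterion are hypothetical and serve only to show the workflow.

```python
# Hedged sketch of the probabilistic remaining-life step with the common power-law pit
# growth model d(t) = k * t^n. All numbers below are hypothetical illustration values.
import numpy as np

rng = np.random.default_rng(5)
n_samples = 100_000

wall_thickness_mm = 9.5
critical_depth = 0.8 * wall_thickness_mm      # failure criterion: 80% wall loss
current_age_yr = 20.0

# Uncertain corrosion model parameters (lognormal k, normal exponent n)
k = rng.lognormal(mean=np.log(1.0), sigma=0.3, size=n_samples)   # mm / yr^n
exponent = rng.normal(0.55, 0.05, size=n_samples)

time_to_failure = (critical_depth / k) ** (1.0 / exponent)       # solve k * t^n = d_crit
remaining_life = np.clip(time_to_failure - current_age_yr, 0.0, None)

print("median remaining life (yr):", np.median(remaining_life))
print("5th percentile (yr):", np.percentile(remaining_life, 5))
print("P(already past critical depth):", np.mean(remaining_life == 0.0))
```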

Keywords: corrosion, pit depth, sensitivity analysis, exposure period

Procedia PDF Downloads 504
6020 Evaluation of Turbulence Prediction over Washington, D.C.: Comparison of DCNet Observations and North American Mesoscale Model Outputs

Authors: Nebila Lichiheb, LaToya Myles, William Pendergrass, Bruce Hicks, Dawson Cagle

Abstract:

Atmospheric transport of hazardous materials in urban areas is increasingly under investigation due to the potential impact on human health and the environment. In response to health and safety concerns, several dispersion models have been developed to analyze and predict the dispersion of hazardous contaminants. The models of interest usually rely on meteorological information obtained from the meteorological models of NOAA’s National Weather Service (NWS). However, due to the complexity of the urban environment, NWS forecasts provide an inadequate basis for dispersion computation in urban areas. A dense meteorological network in Washington, DC, called DCNet, has been operated by NOAA since 2003 to support the development of urban monitoring methodologies and provide the driving meteorological observations for atmospheric transport and dispersion models. This study focuses on the comparison of wind observations from the DCNet station on the U.S. Department of Commerce Herbert C. Hoover Building against the North American Mesoscale (NAM) model outputs for the period 2017-2019. The goal is to develop a simple methodology for modifying NAM outputs so that the dispersion requirements of the city and its urban area can be satisfied. This methodology will allow us to quantify the prediction errors of the NAM model and propose adjustments of key variables controlling dispersion model calculation.
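
A minimal sketch of the comparison-and-adjustment idea is given below: NAM wind speeds are scored against DCNet observations with bias and RMSE, and a simple linear correction is fitted; both series are synthetic placeholders for the 2017-2019 records.

```python
# Minimal sketch of the comparison and adjustment step. The two series are synthetic
# placeholders for the DCNet rooftop observations and the NAM model output.
import numpy as np

rng = np.random.default_rng(6)
obs = rng.gamma(shape=3.0, scale=1.5, size=1000)           # DCNet wind speed (m/s)
nam = 1.25 * obs + 0.8 + rng.normal(0, 0.7, obs.size)      # NAM output with a bias

bias = np.mean(nam - obs)
rmse = np.sqrt(np.mean((nam - obs) ** 2))
print(f"raw NAM: bias = {bias:.2f} m/s, RMSE = {rmse:.2f} m/s")

# Simple linear adjustment of NAM toward the rooftop observations
slope, intercept = np.polyfit(nam, obs, 1)
nam_adj = slope * nam + intercept
print(f"adjusted: bias = {np.mean(nam_adj - obs):.2f} m/s, "
      f"RMSE = {np.sqrt(np.mean((nam_adj - obs) ** 2)):.2f} m/s")
```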

Keywords: meteorological data, Washington D.C., DCNet data, NAM model

Procedia PDF Downloads 215
6019 Assessment of Sex Differences in Serum Urea and Creatinine Level in Response to Spinal Cord Injury Using Albino Rat Models

Authors: Waziri B. I., Elkhashab M. M.

Abstract:

Background: One of the most serious consequences of spinal cord injury (SCI) is the progressive deterioration of renal function, mostly as a result of urine stasis and ascending infection of the paralyzed bladder. This necessitates investigation of early changes in serum urea and creatinine and the associated sex differences in response to SCI. Methods: A total of 24 adult albino rats weighing above 150 g were divided equally into two groups, a control and an experimental group (n = 12), each containing an equal number of male and female rats. The experimental group animals were paralyzed by complete transection of the spinal cord below the T4 level after deep anesthesia with ketamine 75 mg/kg. Blood samples were collected from both groups five days post SCI for analysis. Mean values of serum urea (mmol/L) and creatinine (µmol/L) for both groups were compared; P < 0.05 was considered significant. Results: The results showed significantly higher levels (P < 0.05) of serum urea and creatinine in the male SCI models, with mean values of 92.12 ± 0.98 and 2573 ± 70.97 respectively, compared with their controls, where the mean values for serum urea and creatinine were 6.31 ± 1.48 and 476.95 ± 4.67, respectively. In the female SCI models, serum urea (13.11 ± 0.81) and creatinine (519.88 ± 31.13) were not significantly different from those of the female controls, with serum urea and creatinine levels of 11.71 ± 1.43 and 493.69 ± 17.10, respectively (P > 0.05). Conclusion: Spinal cord injury caused a significant increase in serum urea and creatinine levels in the male models compared to the females. This indicates that males might have a higher risk of renal dysfunction following SCI.

Keywords: albino rats, creatinine, spinal cord injury (SCI), urea

Procedia PDF Downloads 119
6018 Physics-Based Earthquake Source Models for Seismic Engineering: Analysis and Validation for Dip-Slip Faults

Authors: Percy Galvez, Anatoly Petukhin, Paul Somerville, Ken Miyakoshi, Kojiro Irikura, Daniel Peter

Abstract:

Physics-based dynamic rupture modelling is necessary for estimating parameters such as rupture velocity and slip rate function that are important for ground motion simulation, but poorly resolved by observations, e.g. by seismic source inversion. In order to generate a large number of physically self-consistent rupture models, whose rupture process is consistent with the spatio-temporal heterogeneity of past earthquakes, we use multicycle simulations under the heterogeneous rate-and-state (RS) friction law for a 45° dip-slip fault. We performed a parametrization study by fully dynamic rupture modeling, and then a set of spontaneous source models was generated in a large magnitude range (Mw > 7.0). In order to validate rupture models, we compare the source scaling relations vs. seismic moment Mo for the modeled rupture area S, as well as average slip Dave and the slip asperity area Sa, with similar scaling relations from the source inversions. Ground motions were also computed from our models. Their peak ground velocities (PGV) agree well with the GMPE values. We obtained good agreement of the permanent surface offset values with empirical relations. From the heterogeneous rupture models, we analyzed parameters, which are critical for ground motion simulations, i.e. distributions of slip, slip rate, rupture initiation points, rupture velocities, and source time functions. We studied cross-correlations between them and with the friction weakening distance Dc value, the only initial heterogeneity parameter in our modeling. The main findings are: (1) high slip-rate areas coincide with or are located on an outer edge of the large slip areas, (2) ruptures have a tendency to initiate in small Dc areas, and (3) high slip-rate areas correlate with areas of small Dc, large rupture velocity and short rise-time.

Keywords: earthquake dynamics, strong ground motion prediction, seismic engineering, source characterization

Procedia PDF Downloads 131
6017 Investigating Knowledge Management in Financial Organisation: Proposing a New Model for Implementing Knowledge Management

Authors: Ziba R. Tehrani, Sanaz Moayer

Abstract:

In the age of the knowledge-based economy, knowledge management has become a key factor in sustainable competitive advantage. Knowledge management means discovering, acquiring, developing, sharing, maintaining, evaluating, and using the right knowledge, at the right time, by the right person in an organization; this is accomplished by creating the right link between human resources, information technology, and an appropriate structure in order to achieve organisational goals. Studying knowledge management in financial institutes shows that knowledge management in the banking system is not different from other industries, but because of the complexity of the banking environment its implementation is more difficult. Bank managers have found that the implementation of knowledge management brings many advantages to financial institutes, one of the most important of which is reducing the risk of losing institutional knowledge when personnel quit. Special attention to the internal conditions and environment of the financial institute, and avoiding simply copying existing designs when designing knowledge management, are also critical issues. In this paper, the knowledge management concept is first defined and existing models of knowledge management are introduced; then some of the most important models, which have more similarities with other models, are reviewed. In the second step, the major objectives of knowledge management are identified according to bank requirements, with a focus on the knowledge management approach; face-to-face interviews were used to gather data at this stage. Thirdly, these specified objectives are analysed against the responses to a questionnaire distributed among managers and expert staff of ‘Karafarin Bank’. Finally, based on the analysed data, some features of the existing models are selected and a new conceptual model is proposed.

Keywords: knowledge management, financial institute, knowledge management model, organisational knowledge

Procedia PDF Downloads 343
6016 Cross-Dialect Sentence Transformation: A Comparative Analysis of Language Models for Adapting Sentences to British English

Authors: Shashwat Mookherjee, Shruti Dutta

Abstract:

This study explores linguistic distinctions among American, Indian, and Irish English dialects and assesses various Language Models (LLMs) in their ability to generate British English translations from these dialects. Using cosine similarity analysis, the study measures the linguistic proximity between original British English translations and those produced by LLMs for each dialect. The findings reveal that Indian and Irish English translations maintain notably high similarity scores, suggesting strong linguistic alignment with British English. In contrast, American English exhibits slightly lower similarity, reflecting its distinct linguistic traits. Additionally, the choice of LLM significantly impacts translation quality, with Llama-2-70b consistently demonstrating superior performance. The study underscores the importance of selecting the right model for dialect translation, emphasizing the role of linguistic expertise and contextual understanding in achieving accurate translations.
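
The scoring step can be sketched as below: each model-generated British English sentence is compared with the reference translation via cosine similarity; TF-IDF vectors are used here as a simple stand-in, since the abstract does not specify the sentence representation, and the sentences and the second model name are illustrative.

```python
# Sketch of the cosine-similarity scoring step. TF-IDF vectors are only a stand-in for
# whatever sentence representation the study actually used; sentences are illustrative.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

reference = "I was queueing at the chemist's near my flat this autumn."
candidates = {
    "Llama-2-70b": "I was queueing at the chemist's close to my flat this autumn.",
    "smaller-model": "I was waiting in line at the drugstore near my apartment this fall.",
}

vectorizer = TfidfVectorizer().fit([reference] + list(candidates.values()))
ref_vec = vectorizer.transform([reference])

for name, sentence in candidates.items():
    score = cosine_similarity(ref_vec, vectorizer.transform([sentence]))[0, 0]
    print(f"{name}: cosine similarity = {score:.3f}")
```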

Keywords: cross-dialect translation, language models, linguistic similarity, multilingual NLP

Procedia PDF Downloads 41
6015 Empirical Model for the Estimation of Global Solar Radiation on Horizontal Surface in Algeria

Authors: Malika Fekih, Abdenour Bourabaa, Rafika Hariti, Mohamed Saighi

Abstract:

In Algeria, global solar radiation and its components are not available for all locations, which creates a requirement for models that estimate global solar radiation from the climatological parameters of the locations. Empirical constants for these models have been estimated, and the results obtained have been tested statistically. The results show encouraging agreement between estimated and measured values.
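
The abstract does not state which empirical formulation was calibrated; the sketch below shows one widely used option, the Angström-Prescott relation H/H0 = a + b·(S/S0), fitted by least squares to hypothetical monthly sunshine and clearness data.

```python
# Hedged sketch of calibrating the Angstrom-Prescott relation H/H0 = a + b * (S/S0).
# The model family and the data points below are illustrative assumptions only.
import numpy as np

# Monthly relative sunshine duration S/S0 and clearness index H/H0 (hypothetical)
s_ratio = np.array([0.55, 0.60, 0.65, 0.70, 0.78, 0.85, 0.88, 0.84, 0.75, 0.66, 0.58, 0.52])
h_ratio = np.array([0.50, 0.53, 0.56, 0.60, 0.65, 0.70, 0.72, 0.69, 0.63, 0.57, 0.52, 0.48])

b, a = np.polyfit(s_ratio, h_ratio, 1)          # least-squares slope and intercept
pred = a + b * s_ratio
rmse = np.sqrt(np.mean((h_ratio - pred) ** 2))
print(f"a = {a:.3f}, b = {b:.3f}, RMSE = {rmse:.4f}")
```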

Keywords: global solar radiation, empirical model, semi arid areas, climatological parameters

Procedia PDF Downloads 479
6014 Coarse-Grained Molecular Simulations to Estimate Thermophysical Properties of Phase Equilibria

Authors: Hai Hoang, Thanh Xuan Nguyen Thi, Guillaume Galliero

Abstract:

Coarse-grained (CG) molecular simulations have been shown to be an efficient way to estimate the thermophysical (static and dynamic) properties of fluids. Several strategies for defining CG molecular models have been developed and reported in the literature. Among them, those based on a top-down strategy (i.e. CG molecular models related to macroscopic observables), despite being heuristic, have gained increasing attention. This is probably due to their simplicity of implementation and their ability to provide reasonable results not only for simple but also for complex systems. Regarding the simple force fields associated with these CG molecular models, it has been found that the four-parameter Mie chain model is one of the best compromises for describing thermophysical static properties (e.g. phase diagram, saturation pressure). However, the parameterization procedures for these Mie-chain CG molecular models given in the literature are generally insufficient to provide static and dynamic (e.g. viscosity) properties simultaneously. To deal with such situations, we have extended the corresponding states approach by using a quantity associated with the liquid viscosity. Results obtained from molecular simulations have shown that our approach is able to yield good estimates of both static and dynamic thermophysical properties for various real non-associating fluids. In addition, we will show that for simple (e.g. phase diagram, saturation pressure) and complex (e.g. thermodynamic response functions, thermodynamic energy potentials) static properties, our scheme generally provides improved results compared to existing approaches.
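
For reference, the Mie pair potential mentioned above can be written as U(r) = C·ε[(σ/r)^n − (σ/r)^m] with prefactor C = n/(n−m)·(n/m)^(m/(n−m)); the short sketch below evaluates it, and the ε, σ values are arbitrary illustration rather than fitted CG parameters.

```python
# Short sketch of the Mie pair potential with its standard prefactor C(n, m);
# for n = 12, m = 6 it reduces to the Lennard-Jones form. Epsilon/sigma are arbitrary.
import numpy as np

def mie_potential(r, epsilon, sigma, n=12.0, m=6.0):
    """Mie pair potential U(r) = C * eps * [(sigma/r)^n - (sigma/r)^m]."""
    c = (n / (n - m)) * (n / m) ** (m / (n - m))
    return c * epsilon * ((sigma / r) ** n - (sigma / r) ** m)

r = np.linspace(0.9, 3.0, 200)                                # reduced units
u_lj = mie_potential(r, epsilon=1.0, sigma=1.0, n=12, m=6)    # Lennard-Jones limit
u_soft = mie_potential(r, epsilon=1.0, sigma=1.0, n=10, m=6)  # softer repulsion

print("LJ minimum near r = 2^(1/6):", r[np.argmin(u_lj)], u_lj.min())
```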

Keywords: coarse-grained model, mie potential, molecular simulations, thermophysical properties, phase equilibria

Procedia PDF Downloads 320
6013 Comprehensive Experimental Study to Determine Energy Dissipation of Nappe Flows on Stepped Chutes

Authors: Abdollah Ghasempour, Mohammad Reza Kavianpour, Majid Galoie

Abstract:

This study investigated the fundamental parameters that have an effective role in the energy dissipation of nappe flows on stepped chutes, in order to estimate an empirical relationship using dimensional analysis. To achieve this goal, a comprehensive experimental study on several large-scale physical models with various step geometries, slopes, discharges, etc. was carried out. For all models, hydraulic parameters such as velocity, pressure, water depth and flow regime were measured precisely. The effective parameters could then be determined by analysis of the experimental data. Finally, a dimensional analysis was performed in order to estimate an empirical relationship for evaluating the energy dissipation of nappe flows on stepped chutes. Because large-scale physical models were used in this study, the empirical relationship is in very good agreement with the experimental results.
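
As background for how such an empirical relationship is typically framed, the sketch below computes the relative energy dissipation (E0 − E1)/E0 from a hypothetical toe measurement, taking the specific energy E = y + V^2/(2g) and the upstream head as chute height plus 1.5 times the critical depth; these conventions and numbers are assumptions, not the study’s own data.

```python
# Hedged sketch of the quantity usually reported for stepped chutes: relative energy
# dissipation (E0 - E1) / E0. The upstream-head convention and numbers are assumptions.
G = 9.81  # m/s^2

def specific_energy(depth_m, velocity_ms):
    return depth_m + velocity_ms ** 2 / (2 * G)

def relative_dissipation(chute_height_m, unit_discharge_m2s, depth_toe_m, velocity_toe_ms):
    yc = (unit_discharge_m2s ** 2 / G) ** (1 / 3)           # critical depth
    e0 = chute_height_m + 1.5 * yc                          # assumed upstream energy head
    e1 = specific_energy(depth_toe_m, velocity_toe_ms)      # residual energy at the toe
    return (e0 - e1) / e0

# Example measurement from a large-scale model (hypothetical numbers, q = y * V = 0.25 m2/s)
print(relative_dissipation(chute_height_m=3.0, unit_discharge_m2s=0.25,
                           depth_toe_m=0.05, velocity_toe_ms=5.0))
```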

Keywords: nappe flow, energy dissipation, stepped chute, dimensional analysis

Procedia PDF Downloads 343
6012 Model Driven Architecture Methodologies: A Review

Authors: Arslan Murtaza

Abstract:

Model Driven Architecture (MDA) is a technique presented by the OMG (Object Management Group) for software development in which different models are proposed and converted into code. The main plan is to identify the task by using a PIM (Platform Independent Model), transform it into a PSM (Platform Specific Model), and then convert it into code. This review paper describes some challenges and issues faced in MDA, the types and transformations of models (e.g. CIM, PIM and PSM), and the evaluation of MDA-based methodologies.

Keywords: OMG, model driven architecture (MDA), computation independent model (CIM), platform independent model (PIM), platform specific model (PSM), MDA-based methodologies

Procedia PDF Downloads 438
6011 Application of Transportation Models for Analysing Future Intercity and Intracity Travel Patterns in Kuwait

Authors: Srikanth Pandurangi, Basheer Mohammed, Nezar Al Sayegh

Abstract:

In order to meet the increasing demand for housing for Kuwaiti citizens, the government authorities in Kuwait are undertaking a series of projects in the form of new large cities outside the current urban area. Al Mutlaa City, located to the north-west of the Kuwait Metropolitan Area, is one such project out of the 15 planned new cities. The city accommodates a wide variety of residential developments, employment opportunities, and commercial, recreational, health care and institutional uses. This paper examines the application of comprehensive transportation demand modeling work undertaken on the VISUM platform to understand future intracity and intercity travel distribution patterns in Kuwait. The scope of the models developed varied in level of detail: a strategic model update, sub-area models representing the future demand of Al Mutlaa City, and sub-area models built to estimate the demand in the residential neighborhoods of the city. This paper aims at offering a model update framework that facilitates easy integration between sub-area models and strategic national models for unified traffic forecasts. It presents the transportation demand modeling results used to inform the planning of a multi-modal transportation system for Al Mutlaa City. The paper also presents the household survey data collection efforts undertaken using GPS devices (for the first time in Kuwait) and notebook-computer-based digital survey forms for interviewing a representative sample of citizens and residents. The survey results formed the basis for estimating the trip generation rates and trip distribution coefficients used in the strategic base-year model calibration and validation process.

Keywords: innovative methods in transportation data collection, integrated public transportation system, traffic forecasts, transportation modeling, travel behavior

Procedia PDF Downloads 203
6010 Modeling of Anisotropic Hardening Based on Crystal Plasticity Theory and Virtual Experiments

Authors: Bekim Berisha, Sebastian Hirsiger, Pavel Hora

Abstract:

Advanced material models involving several sets of model parameters require a big experimental effort. As models become more and more complex, such as the so-called Homogeneous Anisotropic Hardening (HAH) model for the description of the yielding behavior in the 2D/3D stress space, the number and complexity of the required experiments also increase continuously. In the context of sheet metal forming, these requirements are even more pronounced because of the anisotropic behavior of sheet materials. In addition, some of the experiments are very difficult to perform, e.g. the plane stress biaxial compression test. Accordingly, tensile tests in at least three directions, biaxial tests and tension-compression or shear-reverse shear experiments are performed to determine the parameters of the macroscopic models. Therefore, determination of the macroscopic model parameters based on virtual experiments is a very promising strategy to overcome these difficulties. For this purpose, in the framework of multiscale material modeling, a dislocation density based crystal plasticity model in combination with an FFT-based spectral solver is applied to perform virtual experiments. Modeling the plastic behavior of metals based on crystal plasticity theory is a well-established methodology. However, in general, the computation time is very high and therefore the computations are restricted to simplified microstructures as well as simple polycrystal models. In this study, a dislocation density based crystal plasticity model (including an implementation of the backstress) is used in a spectral solver framework to generate virtual experiments for three deep drawing materials: DC05 steel, AA6111-T4 and AA4045 aluminum alloys. For this purpose, uniaxial as well as multiaxial loading cases, including various pre-strain histories, have been computed and validated with real experiments. These investigations showed that crystal plasticity modeling in the framework of Representative Volume Elements (RVEs) can be used to replace most of the expensive real experiments. Further, model parameters of advanced macroscopic models like the HAH model can be determined from virtual experiments, even for multiaxial deformation histories. It was also found that crystal plasticity modeling can be used to model anisotropic hardening more accurately by considering the backstress, similar to well-established macroscopic kinematic hardening models. It can be concluded that an efficient coupling of crystal plasticity models and the spectral solver leads to a significant reduction of the number of real experiments needed to calibrate macroscopic models. This advantage also leads to a significant reduction of the computational effort needed for the optimization of metal forming processes. Further, due to the time-efficient spectral solver used in the computation of the RVE models, detailed modeling of the microstructure is possible.

Keywords: anisotropic hardening, crystal plasticity, micro structure, spectral solver

Procedia PDF Downloads 300
6009 Prediction of Formation Pressure Using Artificial Intelligence Techniques

Authors: Abdulmalek Ahmed

Abstract:

Formation pressure is the main factor that affects the economics and efficiency of the drilling operation. Knowing the pore pressure and the parameters that affect it helps to reduce the cost of the drilling process. Many empirical models reported in the literature have been used to calculate the formation pressure based on different parameters. Some of these models use only drilling parameters to estimate pore pressure, while other models predict the formation pressure based on log data. All of these models require different trends, such as normal or abnormal, to predict the pore pressure. Few researchers have applied artificial intelligence (AI) techniques to predict the formation pressure, and then by only one method or a maximum of two AI methods. The objective of this research is to predict the pore pressure based on both drilling parameters and log data, namely weight on bit, rotary speed, rate of penetration, mud weight, bulk density, porosity and delta sonic time. Real field data are used to predict the formation pressure using five different artificial intelligence (AI) methods: artificial neural networks (ANN), radial basis function (RBF), fuzzy logic (FL), support vector machine (SVM) and functional networks (FN). All AI tools were compared with different empirical models. The AI methods estimated the formation pressure with high accuracy (high correlation coefficient and low average absolute percentage error) and outperformed all previous models. The advantage of the new technique is its simplicity: it estimates pore pressure without the need for different trends, in contrast to other models, which require two different trends (normal or abnormal pressure). Moreover, comparing the AI tools with each other indicates that SVM has the advantage in pore pressure prediction through its fast processing speed and high performance (a high correlation coefficient of 0.997 and a low average absolute percentage error of 0.14%). In the end, a new empirical correlation for formation pressure was developed using the ANN method that can estimate pore pressure with high precision (correlation coefficient of 0.998 and average absolute percentage error of 0.17%).
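
The SVM step singled out in the abstract can be sketched as below: support vector regression maps seven drilling and log parameters to pore pressure and is scored with the correlation coefficient and average absolute percentage error; the data and hyperparameters are synthetic placeholders.

```python
# Sketch of the SVM step: support vector regression from drilling/log parameters to
# pore pressure, scored with R and AAPE. Data and hyperparameters are synthetic placeholders.
import numpy as np
from sklearn.svm import SVR
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(7)
n = 500
# weight on bit, rotary speed, ROP, mud weight, bulk density, porosity, delta-t sonic
X = rng.normal(size=(n, 7))
pore_pressure = 9.0 + 0.6 * X[:, 3] + 0.4 * X[:, 6] - 0.3 * X[:, 5] + rng.normal(0, 0.1, n)

split = 400
model = make_pipeline(StandardScaler(), SVR(C=10.0, epsilon=0.01))
model.fit(X[:split], pore_pressure[:split])
pred = model.predict(X[split:])
actual = pore_pressure[split:]

r = np.corrcoef(actual, pred)[0, 1]
aape = np.mean(np.abs((actual - pred) / actual)) * 100
print(f"correlation coefficient = {r:.3f}, AAPE = {aape:.2f}%")
```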

Keywords: Artificial Intelligence (AI), Formation pressure, Artificial Neural Networks (ANN), Fuzzy Logic (FL), Support Vector Machine (SVM), Functional Networks (FN), Radial Basis Function (RBF)

Procedia PDF Downloads 136
6008 Coupling Fuzzy Analytic Hierarchy Process with Storm Water Management Model for Site Selection of Appropriate Adaptive Measures

Authors: Negin Binesh, Mohammad Hossein Niksokhan, Amin Sarang

Abstract:

Best management practices (BMPs) are considered one of the most important structural adaptive measures to climate change and urban development challenges in recent decades. However, not every location in a watershed is appropriate for applying BMPs. In this paper, location prioritization was carried out for two kinds of BMPs: porous pavement and detention ponds. The West Flood-Diversion (WFD) catchment in the northern parts of Tehran, Iran, was considered as the case study. The methodology integrates the results of the Storm Water Management Model (SWMM) into the fuzzy analytic hierarchy process (FAHP) method using a geographic information system (GIS). The results indicate that the mostly suburban areas in the northern parts of the watershed are appropriate for applying detention basins, while the downstream high-density urban areas are more suitable for permeable pavement.
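
A simplified sketch of the criteria-weighting step is given below: crisp AHP weights are derived from a pairwise comparison matrix by the geometric-mean method; the fuzzy extension (triangular fuzzy judgments) used in the paper is omitted, and the criteria and judgments are illustrative only.

```python
# Simplified sketch of the criteria-weighting step: crisp AHP weights from a pairwise
# comparison matrix via the geometric-mean method. Criteria and judgments are illustrative.
import numpy as np

criteria = ["slope", "land use", "soil permeability", "distance to drainage"]

# Saaty-style pairwise comparison matrix (row i vs column j, reciprocal below the diagonal)
A = np.array([
    [1.0, 3.0, 2.0, 5.0],
    [1/3, 1.0, 1/2, 2.0],
    [1/2, 2.0, 1.0, 3.0],
    [1/5, 1/2, 1/3, 1.0],
])

geo_means = np.prod(A, axis=1) ** (1.0 / A.shape[1])
weights = geo_means / geo_means.sum()

for name, w in zip(criteria, weights):
    print(f"{name}: {w:.3f}")
# These weights would then score candidate BMP sites together with SWMM outputs in GIS.
```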

Keywords: adaptive measures, BMPs, location prioritization, urban flooding

Procedia PDF Downloads 345
6007 Multi-Criteria Decision Support System for Modeling of Civic Facilities Using GIS Applications: A Case Study of F-11, Islamabad

Authors: Asma Shaheen Hashmi, Omer Riaz, Khalid Mahmood, Fahad Ullah, Tanveer Ahmad

Abstract:

Urban landscapes are changing with population growth and advancements in new technologies. Urban sprawl patterns and land uses are related to local socioeconomic and physical conditions. Urban policy decisions are executed mostly through spatial planning. A decision support system (DSS) is a very powerful tool that provides a flexible, knowledge-based method for urban planning. An application was developed using a geographical information system (GIS) for urban planning. A scenario-based DSS was developed to integrate hierarchical multi-criteria data on different aspects of the urban landscape. These included the physical environment, the dumping site, the spatial distribution of the road network, gas and water supply lines, urban watershed management, and selection criteria for new residential, recreational, commercial and industrial sites. The model provides a framework for incorporating sustainable future development. The data can be entered dynamically by planners according to the appropriate criteria for the management of urban landscapes.

Keywords: urban, GIS, spatial, criteria

Procedia PDF Downloads 615
6006 Conservation Challenges of Fish and Fisheries in Lake Tana, Ethiopia

Authors: Shewit Kidane, Abebe Getahun, Wassie Anteneh, Admassu Demeke, Peter Goethals

Abstract:

We have reviewed the major findings of scientific studies on Lake Tana’s fish resources and their threats. The aim was to provide summarized information for all concerned bodies and international readers to obtain a full and comprehensive picture of the lake’s fish resources and conservation problems. The Lake Tana watershed comprises 28 fish species, of which 21 are endemic. Moreover, Lake Tana is among the top 250 lake regions of global importance for biodiversity and is a globally recognized wintering site for migratory birds. Lake Tana, together with its adjacent wetlands, directly and indirectly provides a livelihood for more than 500,000 people. However, owing to anthropogenic activities, the lake ecosystem as well as the fish and the fisheries sector are severely degraded. Fish species in Lake Tana are suffering due to illegal fishing, damming, habitat/breeding ground degradation, wastewater disposal, the introduction of exotic species, and a lack of enforcement of fisheries regulations. Currently, more than 98% of fishers in Lake Tana are using the most destructive gear, monofilament nets. Indeed, dams, irrigation schemes and hydropower plants are constructed in response to emerging development needs only; mitigation techniques, such as the construction of fish ladders for the migratory fishes, are mostly forgotten. In addition, water resource developers are likely unaware of both the importance of the fisheries and the impact of dam construction on fish; as a result, the biodiversity issue is often missed. Besides, the Lake Tana wetlands, which play a vital role in sustaining biodiversity, are not wisely utilised in the sense of the Ramsar Convention’s definition. Wetlands are considered unhealthy, and hence wetland conversion for the purpose of recession agriculture is still seen as an advanced mode of development. As a result, many wetlands in the lake watershed are shrinking drastically over time, and Cyperus papyrus, one of the characteristic features of Lake Tana, has declined dramatically in its distribution, with some local extinctions. Furthermore, the recently introduced water hyacinth (Eichhornia crassipes) is creating immense problems for the lake ecosystem. Moreover, 1.56 million tons of sediment are currently deposited into the lake each year, and wastes from industries and residents are discharged directly into the lake without treatment. Recently, signs of eutrophication have been revealed in Lake Tana and, most worryingly, the incidence of the cyanobacterial genus Microcystis has been reported from the Bahir Dar Gulf of Lake Tana. The direct dependency of the communities on the lake water for drinking, as well as for washing their bodies and clothes, and on its fisheries makes the problem worse. Indeed, since the lake is home to many endemic migratory fish, such unregulated developmental activities could be detrimental to their stocks. This is best illustrated by the drastic stock reduction (>75% in biomass) of the globally unique Labeobarbus species. Unless proper management is put in place, these anthropogenic impacts can jeopardize the aquatic ecosystems. Therefore, in order to use the aquatic resources sustainably and fulfil the needs of the local people, every developmental activity and resource utilization should be carried out in adherence to the available policies.

Keywords: anthropogenic impacts, dams, endemic fish, wetland degradation

Procedia PDF Downloads 222
6005 Relation between Physical and Mechanical Properties of Concrete Paving Stones Using Neuro-Fuzzy Approach

Authors: Erion Luga, Aksel Seitllari, Kemal Pervanqe

Abstract:

This study investigates the relation between the physical and mechanical properties of concrete paving stones using a neuro-fuzzy approach. For this purpose, 200 samples of concrete paving stones were selected randomly from different sources. The first phase included the determination of physical properties of the samples, such as water absorption capacity, porosity and unit weight. After that, the indirect tensile strength test and compressive strength test of the samples were performed. In the second phase, an adaptive neuro-fuzzy approach was employed to simulate the nonlinear mapping between the above-mentioned physical properties and the mechanical properties of the paving stones. The neuro-fuzzy models use a Sugeno-type fuzzy inference system. The model parameters were adapted using a hybrid learning algorithm, and the input space was fuzzified by grid partitioning. Based on the observed data and the data estimated by the ANFIS models, it is concluded that the neuro-fuzzy system exhibits satisfactory performance.

Keywords: paving stones, physical properties, mechanical properties, ANFIS

Procedia PDF Downloads 317