Search results for: panel stochastic frontier models
6796 Real Estate Trend Prediction with Artificial Intelligence Techniques
Authors: Sophia Liang Zhou
Abstract:
For investors, businesses, consumers, and governments, an accurate assessment of future housing prices is crucial to critical decisions in resource allocation, policy formation, and investment strategies. Previous studies are contradictory about the macroeconomic determinants of housing prices and have largely focused on one or two areas using point prediction. This study aims to develop data-driven models that accurately predict future housing market trends in different markets. This work studied five metropolitan areas representing different market trends and compared three time-lag settings: no lag, 6-month lag, and 12-month lag. Linear regression (LR), random forest (RF), and artificial neural network (ANN) models were employed to model real estate prices using datasets with the S&P/Case-Shiller home price index and 12 demographic and macroeconomic features, such as gross domestic product (GDP), resident population, and personal income, in five metropolitan areas: Boston, Dallas, New York, Chicago, and San Francisco. The data from March 2005 to December 2018 were collected from the Federal Reserve Bank, FBI, and Freddie Mac. In the original data, some factors are reported monthly, some quarterly, and some yearly. Thus, two methods of imputing missing values, backfilling and interpolation, were compared. The models were evaluated by accuracy, mean absolute error, and root mean square error. The LR and ANN models outperformed the RF model due to RF’s inherent limitations. Both the ANN and LR methods generated predictive models with high accuracy (> 95%). It was found that personal income, GDP, population, and measures of debt consistently appeared as the most important factors. It was also shown that the technique used to impute missing values and the implementation of a time lag can significantly influence model performance and require further investigation.
The best performing models varied for each area, but the backfilled 12-month lag LR models and the interpolated no-lag ANN models showed the most stable performance overall, with accuracies > 95% for each city. This study reveals the influence of input variables in different markets. It also provides evidence to support future studies aimed at identifying the optimal time lag and data imputation methods for establishing accurate predictive models.
Keywords: linear regression, random forest, artificial neural network, real estate price prediction
Procedia PDF Downloads 103
6795 Multi-Criteria Evolutionary Algorithm to Develop Efficient Schedules for Complex Maintenance Problems
Authors: Sven Tackenberg, Sönke Duckwitz, Andreas Petz, Christopher M. Schlick
Abstract:
This paper introduces an extension to the well-established Resource-Constrained Project Scheduling Problem (RCPSP) that applies it to complex maintenance problems. The problem is to assign technicians to a team which has to process several tasks with multi-level skill requirements during a work shift. Here, several alternative activities for a task allow both the temporal shift of activities and the reallocation of technicians and tools. As a result, switches from one valid work process variant to another can be considered and may be selected by the developed evolutionary algorithm based on the present skill level of technicians or the available tools. An additional complication of the observed scheduling problem is that the locations of the construction sites are only temporarily accessible during a day. Due to intensive rail traffic, the available time slots for maintenance and repair works are extremely short and are often distributed throughout the day. To identify efficient working periods, a first concept of a Bayesian network is introduced and integrated into the extended RCPSP with pre-emptive and non-pre-emptive tasks. Thereby, the Bayesian network is used to calculate the probability of a maintenance task being processed during a specific period of the shift. Focusing on the maintenance of railway infrastructure in metropolitan areas, one of the least productive construction-site processes, the paper illustrates how the extended RCPSP can be applied for maintenance planning support. A multi-criteria evolutionary algorithm with a problem representation is introduced which is capable of revising technician-task allocations, whereas the duration of a task may be stochastic. The approach uses a novel activity list representation to ensure easily describable and modifiable elements which can be converted into detailed shift schedules.
Thereby, the main objective is to develop a shift plan which maximizes the utilization of each technician by minimizing the waiting times caused by rail traffic. The results of the already implemented core algorithm illustrate fast convergence towards an optimal team composition for a shift, an efficient sequence of tasks, and a high probability of subsequent implementation despite the stochastic durations of the tasks. In the paper, the algorithm for the extended RCPSP is analyzed in an experimental evaluation using real-world example problems of various sizes, resource complexities, and tightness.
Keywords: maintenance management, scheduling, resource constrained project scheduling problem, genetic algorithms
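The core idea of an activity-list evolutionary algorithm under track-access windows can be sketched as follows. The windows, durations, and plain swap-mutation GA below are illustrative stand-ins for the paper's multi-criteria algorithm, not its actual implementation:

```python
import random

# hypothetical track-access windows during one shift (start, end), in hours
WINDOWS = [(0.0, 4.0), (6.0, 9.0), (12.0, 20.0)]

def earliest_start(t, dur):
    """First time >= t at which a task fits entirely inside one access window."""
    for lo, hi in WINDOWS:
        s = max(t, lo)
        if s + dur <= hi:
            return s
    return float("inf")

def makespan(order, durations):
    """Serially schedule tasks in list order; waiting is forced by the windows."""
    t = 0.0
    for task in order:
        s = earliest_start(t, durations[task])
        t = s + durations[task]
    return t

def evolve(durations, pop=30, gens=200, seed=1):
    """Tiny permutation GA: truncation selection plus swap mutation."""
    rng = random.Random(seed)
    n = len(durations)
    population = [rng.sample(range(n), n) for _ in range(pop)]
    for _ in range(gens):
        population.sort(key=lambda o: makespan(o, durations))
        survivors = population[: pop // 2]
        children = []
        for parent in survivors:
            child = parent[:]
            i, j = rng.randrange(n), rng.randrange(n)
            child[i], child[j] = child[j], child[i]
            children.append(child)
        population = survivors + children
    return min(population, key=lambda o: makespan(o, durations))

durs = [2.0, 3.0, 1.0, 2.5, 1.5]   # task durations, hours
best = evolve(durs)
```

The activity-list representation keeps each individual a simple permutation, so it stays easy to describe and modify before being decoded into a detailed shift schedule.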
Procedia PDF Downloads 231
6794 Simulation to Detect Virtual Fractional Flow Reserve in Coronary Artery Idealized Models
Authors: Nabila Jaman, K. E. Hoque, S. Sawall, M. Ferdows
Abstract:
Coronary artery disease (CAD) is one of the most lethal cardiovascular diseases. Coronary artery stenosis and bifurcation angles closely interact in myocardial infarction. We use a computer-aided design model coupled with computational hemodynamics (CHD) simulation to detect several types of coronary artery stenosis at different locations in an idealized model and to identify the virtual fractional flow reserve (vFFR). The vFFR provides information about the severity of stenosis in the computational models. Another goal is to imitate a patient-specific computed tomography coronary artery angiography model in constructing our idealized models with different left anterior descending (LAD) and left circumflex (LCx) bifurcation angles. Further, we analyze whether the bifurcation angles have an impact on the creation of narrowness in coronary arteries. The numerical simulation provides CHD parameters such as wall shear stress (WSS), velocity magnitude, and pressure gradient (PGD) that characterize the stenosis condition in the computational domain.
Keywords: CAD, CHD, vFFR, bifurcation angles, coronary stenosis
Procedia PDF Downloads 157
6793 Fault Detection and Isolation in Attitude Control Subsystem of Spacecraft Formation Flying Using Extended Kalman Filters
Authors: S. Ghasemi, K. Khorasani
Abstract:
In this paper, the problem of fault detection and isolation in the attitude control subsystem of spacecraft formation flying is considered. In order to design the fault detection method, an extended Kalman filter, a nonlinear stochastic state estimation method, is utilized. Three fault detection architectures, namely centralized, decentralized, and semi-decentralized, are designed based on extended Kalman filters. Moreover, residual generation and threshold selection techniques are proposed for these architectures.
Keywords: component, formation flight of satellites, extended Kalman filter, fault detection and isolation, actuator fault
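The residual-generation-plus-threshold idea can be sketched on a scalar system. The dynamics, noise levels, bias magnitude, and 3-sigma threshold below are illustrative assumptions, not the attitude dynamics or tuning used in the paper:

```python
import numpy as np

# a scalar nonlinear system standing in for the attitude dynamics
def f(x):
    return 0.9 * x + 0.2 * np.sin(x)     # state transition

def F(x):
    return 0.9 + 0.2 * np.cos(x)         # its Jacobian

Q, R = 0.01, 0.04                        # process / measurement noise variances

def ekf_normalized_residuals(z):
    """Run a scalar EKF and return innovation residuals scaled by their std."""
    x, P = 0.0, 1.0
    out = []
    for zk in z:
        # predict
        x_pred = f(x)
        P_pred = F(x) ** 2 * P + Q
        # innovation and its variance (H = 1: the state is measured directly)
        S = P_pred + R
        r = zk - x_pred
        out.append(r / np.sqrt(S))
        # update
        K = P_pred / S
        x = x_pred + K * r
        P = (1.0 - K) * P_pred
    return np.array(out)

# simulate a run with an additive actuator-bias fault injected at step 120
rng = np.random.default_rng(0)
x_true, z = 0.0, []
for k in range(200):
    x_true = f(x_true) + rng.normal(0.0, np.sqrt(Q))
    bias = 2.0 if k >= 120 else 0.0
    z.append(x_true + bias + rng.normal(0.0, np.sqrt(R)))

r = ekf_normalized_residuals(np.array(z))
alarm = np.abs(r) > 3.0                  # simple 3-sigma threshold
```

In the centralized architecture one such filter sees all spacecraft states; the decentralized and semi-decentralized variants run banks of smaller filters and compare their residuals against per-filter thresholds.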
Procedia PDF Downloads 436
6792 ‘Non-Legitimate’ Voices as L2 Models: Towards Becoming a Legitimate L2 Speaker
Authors: M. Rilliard
Abstract:
Based on a Multiliteracies-inspired and sociolinguistically-informed advanced French composition class, this study employed autobiographical narratives from speakers traditionally considered non-legitimate models for L2 teaching, with the purpose of inspiring students to develop an authentic L2 voice and to see themselves as legitimate L2 speakers. Students explored their L2 identities in French through a self-inspired fictional character. Two autobiographical narratives of identity quest by non-traditional French speakers guided them through this process: the novel Le Bleu des Abeilles (2013) and the film Qu’Allah Bénisse la France (2014). Written and oral French productions in different genres, as well as metalinguistic reflections in English, were collected and analyzed. Results indicate that ideas and materials that students could relate to, namely relatable experiences and relatable language, were most useful to them in developing their L2 voices and achieving authentic and legitimate L2 speakership. These results point towards the benefits of using non-traditional speakers as pedagogical models, as they serve to legitimize students’ sense of their own L2 speakership, which ultimately leads them towards a better, more informed mastery of the language.
Keywords: foreign language classroom, L2 identity, L2 learning and teaching, L2 writing, sociolinguistics
Procedia PDF Downloads 133
6791 Statistical Time-Series and Neural Architecture of Malaria Patients Records in Lagos, Nigeria
Authors: Akinbo Razak Yinka, Adesanya Kehinde Kazeem, Oladokun Oluwagbenga Peter
Abstract:
Time series data are sequences of observations collected over a period of time. Such data can be used to predict health outcomes, such as disease progression, mortality, and hospitalization. The statistical approach is based on mathematical models that capture the patterns and trends of the data, such as autocorrelation, seasonality, and noise, while neural methods are based on artificial neural networks, computational models that mimic the structure and function of biological neurons. This paper compared parametric and non-parametric time series models of patients treated for malaria in Maternal and Child Health Centres in Lagos State, Nigeria. The forecast methods considered were linear regression, integrated moving average, ARIMA, and SARIMA modeling for the parametric approach, while a multilayer perceptron (MLP) and a long short-term memory (LSTM) network were used for the non-parametric models. The performance of each method was evaluated using the mean absolute error (MAE), R-squared (R2), and root mean square error (RMSE) as criteria to determine the accuracy of each model. The study revealed that the best performance in terms of error was found in the MLP, followed by the LSTM and ARIMA models. In addition, the bootstrap aggregating technique was used to make robust forecasts when there are uncertainties in the data.
Keywords: ARIMA, bootstrap aggregation, MLP, LSTM, SARIMA, time-series analysis
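The parametric side of such a comparison can be sketched with a plain autoregressive fit scored by MAE and RMSE against a persistence baseline. The synthetic weekly case counts and the AR(4) order below are illustrative assumptions standing in for the study's ARIMA/SARIMA fits on real clinic records:

```python
import numpy as np

def lag_matrix(x, p):
    """Design matrix of p lagged values plus intercept; target is x[t]."""
    rows = [x[t - p : t][::-1] for t in range(p, len(x))]
    X = np.column_stack([np.ones(len(rows)), np.array(rows)])
    return X, x[p:]

def fit_ar(x, p):
    X, y = lag_matrix(x, p)
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta

def one_step_forecast(x, beta, p):
    X, _ = lag_matrix(x, p)
    return X @ beta                      # forecasts aligned to x[p:]

def mae(a, b):  return float(np.mean(np.abs(a - b)))
def rmse(a, b): return float(np.sqrt(np.mean((a - b) ** 2)))

# toy weekly malaria case counts: mild trend plus a yearly cycle (synthetic)
t = np.arange(160)
cases = 50.0 + 0.1 * t + 10.0 * np.sin(2 * np.pi * t / 52)

beta = fit_ar(cases[:120], p=4)                    # train on the first 120 weeks
pred = one_step_forecast(cases, beta, p=4)[116:]   # forecasts for weeks 120..159
obs = cases[120:]
naive = cases[119:-1]                              # persistence baseline
```

The same MAE/RMSE scoring applies unchanged to the MLP and LSTM forecasts, which is what makes the parametric/non-parametric comparison fair.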
Procedia PDF Downloads 75
6790 Production and Leftovers Usage Policies to Minimize Food Waste under Uncertain and Correlated Demand
Authors: Esma Birisci, Ronald McGarvey
Abstract:
One of the common problems in the food service industry is demand uncertainty. This research presents a multi-criteria optimization approach to identify the efficient frontier of points lying between the minimum-waste and minimum-shortfall solutions in an uncertain demand environment. It also addresses correlation across demands for items (e.g., hamburgers are often demanded with french fries). Reducing overproduction food waste (and its corresponding environmental impacts) and avoiding shortfalls (leaving some customers hungry) must be considered as two contradictory objectives in an all-you-care-to-eat food service operation. We identify optimal production adjustments relative to demand forecasts, demand thresholds for utilization of leftovers, and percentages of demand to be satisfied by leftovers, considering two alternative metrics for overproduction waste: mass and greenhouse gas emissions. Demand uncertainty and demand correlations are addressed using a kernel density estimation approach. A statistical analysis of the changes in decision variable values across each of the efficient frontiers can then be performed to identify the key variables that could be modified to reduce the amount of wasted food at a minimal increase in shortfalls. We illustrate our approach with an application to empirical data from Campus Dining Services operations at the University of Missouri.
Keywords: environmental studies, food waste, production planning, uncertain and correlated demand
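The waste/shortfall trade-off can be sketched with a Monte Carlo sweep over production levels. The demand means, the covariance, and the multivariate-normal sampler (standing in for the paper's kernel density estimate) are all illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(42)

# correlated daily demand for two items (e.g. hamburgers and fries)
mean = np.array([300.0, 280.0])
cov = np.array([[900.0, 600.0],
                [600.0, 784.0]])        # correlation ~ 0.71
demand = rng.multivariate_normal(mean, cov, size=1000)

def waste_and_shortfall(production):
    """Expected leftover mass and unmet demand for a fixed production plan."""
    over = np.clip(production - demand, 0.0, None)
    under = np.clip(demand - production, 0.0, None)
    return over.sum(axis=1).mean(), under.sum(axis=1).mean()

# sweep production as a multiplier of forecast demand to trace the frontier
frontier = []
for m in np.linspace(0.9, 1.2, 13):
    w, s = waste_and_shortfall(m * mean)
    frontier.append((m, w, s))
```

Each swept point is one candidate policy; plotting waste against shortfall over the sweep traces the efficient frontier between the minimum-waste and minimum-shortfall extremes.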
Procedia PDF Downloads 372
6789 Geometric Simplification Method of Building Energy Model Based on Building Performance Simulation
Authors: Yan Lyu, Yiqun Pan, Zhizhong Huang
Abstract:
In the design stage of a new building, an energy model of the building is often required to analyze its energy efficiency performance. In practice, a certain degree of geometric simplification must be done when establishing building energy models, since the detailed geometric features of a real building are hard to describe perfectly in most energy simulation engines, such as ESP-r, eQuest, or EnergyPlus. Indeed, a detailed description is not necessary when extremely high accuracy is not demanded. Therefore, this paper analyzed the relationship between the error of the simulation results from building energy models and the geometric simplification of the models. Finally, the following two parameters were selected as indices to characterize the geometric features of a building in energy simulation: the southward projected area and the total side surface area of the building. Based on the parameterization method, the simplification from an arbitrary column building to a building of typical shape (a cuboid) can be made for energy modeling. The results of this study indicate that this simplification leads to an error of less than 7% for buildings whose ratio of southward projection length to total perimeter of the bottom is 0.25–0.35, which covers most situations.
Keywords: building energy model, simulation, geometric simplification, design, regression
Procedia PDF Downloads 180
6788 Regional Hydrological Extremes Frequency Analysis Based on Statistical and Hydrological Models
Authors: Hadush Kidane Meresa
Abstract:
Hydrological extremes frequency analysis is the foundation for hydraulic engineering design, flood protection, drought management, and water resources management and planning, enabling the organizations and sectors of a country to utilize the available water resources to meet their objectives. The spatial variation of the statistical characteristics of extreme flood and drought events is key to regional flood and drought analysis and mitigation management. For regions with different hydro-climates, where the data are short, scarce, of poor quality, or insufficient, regionalization methods are applied to transfer at-site data to a region. This study aims at regional high- and low-flow frequency analysis for Polish river basins. Frequent hydrological extremes and rapid water resources development in these basins have caused serious concerns over the flood and drought magnitudes and frequencies of the rivers in Poland. The magnitudes and frequencies of high and low flows in the basins are needed for flood and drought planning, management, and protection at present and in the future. Hydrologically homogeneous high- and low-flow regions were formed by cluster analysis of site characteristics, using hierarchical and C-means clustering and the PCA method. Statistical tests for regional homogeneity were utilized, namely the discordancy and heterogeneity measure tests. In compliance with the results of these tests, the river basins have been divided into ten homogeneous regions. In this study, frequency analysis of high and low flows, using annual maxima for high flows and the 7-day minimum low-flow series, is conducted using six statistical distributions.
The use of the L-moment and LL-moment methods showed a homogeneous region over the entire province, with the generalized logistic (GLOG), generalized extreme value (GEV), Pearson type III (P-III), generalized Pareto (GPAR), Weibull (WEI), and Power (PR) distributions as the regional drought and flood frequency distributions. The 95% percentile and flow duration curves of 1, 7, 10, and 30 days have been plotted for 10 stations. However, the cluster analysis produced two regions in the west and east of the province, where the L-moment and LL-moment methods demonstrated the homogeneity of the regions, with the GLOG and Pearson type III (P-III) distributions as the regional frequency distributions for each region, respectively. The spatial variation and regional frequency distribution of flood and drought characteristics for the 10 best catchments from the whole region were examined, and besides the main variable (streamflow: high and low), we used variables more related to physiographic and drainage characteristics to identify and delineate homogeneous pools and to derive the best regression models for ungauged sites. These are mean annual rainfall, seasonal flow, average slope, NDVI, aspect, flow length, flow direction, maximum soil moisture, elevation, and drainage order. The regional high-flow or low-flow relationship between one streamflow characteristic (annual maxima or 7-day mean annual low flows) and some basin characteristics is developed using the Generalized Linear Mixed Model (GLMM) and Generalized Least Squares (GLS) regression models, providing a simple and effective method for the estimation of floods and droughts of desired return periods for ungauged catchments.
Keywords: flood, drought, frequency, magnitude, regionalization, stochastic, ungauged, Poland
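The L-moment screening step can be sketched as follows. The probability-weighted-moment estimators are standard; the two toy annual-maximum series (one a scaled copy of the other, so their L-CV values coincide) are purely illustrative:

```python
import numpy as np

def l_moments(x):
    """First two sample L-moments and the L-CV ratio, via PWMs b0 and b1."""
    x = np.sort(np.asarray(x, dtype=float))
    n = len(x)
    j = np.arange(n)
    b0 = x.mean()
    b1 = np.sum(j / (n - 1.0) * x) / n
    l1 = b0                # L-location (the mean)
    l2 = 2.0 * b1 - b0     # L-scale
    return l1, l2, l2 / l1

# toy annual-maximum flows (m^3/s) for two hypothetical catchments
site_a = np.array([120., 95., 140., 180., 110., 160., 130., 150.])
site_b = 2.0 * site_a      # same shape of distribution, larger catchment

t_a = l_moments(site_a)[2]
t_b = l_moments(site_b)[2]
# closely matching L-CV values are consistent with a homogeneous region
```

In the full procedure, site L-moment ratios feed the discordancy and heterogeneity measures, and the regional growth curve is fitted to the pooled, rescaled data.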
Procedia PDF Downloads 602
6787 On Hyperbolic Gompertz Growth Model (HGGM)
Authors: S. O. Oyamakin, A. U. Chukwu
Abstract:
We propose a hyperbolic Gompertz growth model (HGGM), developed by introducing a stabilizing parameter θ via the hyperbolic sine function into the classical Gompertz growth equation. The resulting integral solution, obtained deterministically, was reprogrammed into a statistical model and used in modeling the height and diameter of pines (Pinus caribaea). Its predictive ability was compared with that of the classical Gompertz growth model using goodness-of-fit tests and model selection criteria; the approach mimics the natural variability of height/diameter increment with respect to age and therefore provides more realistic height/diameter predictions. The Kolmogorov-Smirnov and Shapiro-Wilk tests were used to check the compliance of the error term with normality assumptions, while the runs test was used to test the independence of the error term. The mean function of top height/Dbh over age predicted the observed values of top height/Dbh more closely under the hyperbolic Gompertz growth model than under its source model (the classical Gompertz growth model), while the results of R2, adjusted R2, MSE, and AIC confirmed the predictive power of the hyperbolic Gompertz growth model over its source model.
Keywords: height, Dbh, forest, Pinus caribaea, hyperbolic, gompertz
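The baseline of such a comparison, fitting the classical Gompertz curve and scoring it with the criteria named above, can be sketched as follows. The synthetic height-age data and the coarse grid search (real fits use nonlinear least squares) are illustrative; the HGGM's extra θ/sinh term is not reproduced here since its exact form is given in the paper:

```python
import numpy as np

def gompertz(t, a, b, c):
    """Classical Gompertz curve: asymptote a, displacement b, growth rate c."""
    return a * np.exp(-b * np.exp(-c * t))

def fit_metrics(y, yhat, k):
    """Model-selection criteria used to compare growth models."""
    sse = float(np.sum((y - yhat) ** 2))
    n = len(y)
    r2 = 1.0 - sse / float(np.sum((y - y.mean()) ** 2))
    mse = sse / n
    aic = n * np.log(sse / n) + 2 * k
    return r2, mse, aic

# toy height-age data for a pine stand (synthetic, for illustration)
age = np.arange(1, 26, dtype=float)
height = gompertz(age, a=30.0, b=4.0, c=0.25) + 0.1 * np.sin(age)

# coarse grid search over the three parameters
best = None
for a in np.linspace(25.0, 35.0, 21):
    for b in np.linspace(2.0, 6.0, 21):
        for c in np.linspace(0.1, 0.4, 31):
            _, mse, _ = fit_metrics(height, gompertz(age, a, b, c), k=3)
            if best is None or mse < best[0]:
                best = (mse, a, b, c)
```

The HGGM comparison then amounts to fitting both mean functions to the same data and comparing their R2, adjusted R2, MSE, and AIC values.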
Procedia PDF Downloads 441
6786 Modelling Volatility of Cryptocurrencies: Evidence from GARCH Family of Models with Skewed Error Innovation Distributions
Authors: Timothy Kayode Samson, Adedoyin Isola Lawal
Abstract:
The past five years have shown a sharp increase in public interest in the crypto market, with its market capitalization growing from $100 billion in June 2017 to $2158.42 billion on April 5, 2022. Despite the outrageous volatility of cryptocurrencies, the use of skewed error innovation distributions in modelling their volatility behaviour has not been given much research attention. Hence, this study models the volatility of the 5 largest cryptocurrencies by market capitalization (Bitcoin, Ethereum, Tether, Binance coin, and USD Coin) using four variants of GARCH models (GJR-GARCH, sGARCH, EGARCH, and APARCH) estimated with three skewed error innovation distributions (skewed normal, skewed Student-t, and skewed generalized error innovation distributions). Daily closing prices of these currencies were obtained from the Yahoo Finance website. Findings reveal that the Binance coin reported higher mean returns compared to the other digital currencies, while the skewness indicates that the Binance coin, Tether, and USD Coin increased more than they decreased in value within the period of study. For both Bitcoin and Ethereum, negative skewness was obtained, meaning that within the period of study, the returns of these currencies decreased more than they increased in value. Returns from these cryptocurrencies were found to be stationary but not normally distributed, with evidence of the ARCH effect. The skewness parameters in all best forecasting models were significant (p<.05), justifying the use of skewed error innovation distributions with a fatter tail than the normal, Student-t, and generalized error innovation distributions. For the Binance coin, EGARCH-sstd outperformed the other volatility models, while for Bitcoin, Ethereum, Tether, and USD Coin, the best forecasting models were EGARCH-sstd, APARCH-sstd, EGARCH-sged, and GJR-GARCH-sstd, respectively.
This suggests the superiority of the skewed Student-t distribution and the skewed generalized error distribution over the skewed normal distribution.
Keywords: skewed generalized error distribution, skewed normal distribution, skewed Student t-distribution, APARCH, EGARCH, sGARCH, GJR-GARCH
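The sGARCH(1,1) member of this family reduces to a simple conditional-variance recursion. The sketch below uses normal innovations and illustrative parameter values for brevity; in the study's setting the normal draw would be replaced by skewed Student-t or skewed GED innovations, and the parameters would be estimated by maximum likelihood:

```python
import numpy as np

def garch11_conditional_variance(r, omega, alpha, beta):
    """sGARCH(1,1): sigma2[t] = omega + alpha*r[t-1]^2 + beta*sigma2[t-1]."""
    sigma2 = np.empty(len(r))
    sigma2[0] = r.var()
    for t in range(1, len(r)):
        sigma2[t] = omega + alpha * r[t - 1] ** 2 + beta * sigma2[t - 1]
    return sigma2

# simulate a return series exhibiting GARCH volatility clustering
rng = np.random.default_rng(7)
omega, alpha, beta = 0.05, 0.10, 0.85    # alpha + beta < 1: stationary
n = 2000
r = np.empty(n)
s2 = omega / (1.0 - alpha - beta)        # unconditional variance = 1.0
for t in range(n):
    r[t] = np.sqrt(s2) * rng.normal()    # a skewed-t/GED draw would go here
    s2 = omega + alpha * r[t] ** 2 + beta * s2

sig2 = garch11_conditional_variance(r, omega, alpha, beta)
```

EGARCH, GJR-GARCH, and APARCH replace this recursion with asymmetric variants, which is why they can capture the leverage effects the study reports.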
Procedia PDF Downloads 119
6785 Self-Supervised Pretraining on Sequences of Functional Magnetic Resonance Imaging Data for Transfer Learning to Brain Decoding Tasks
Authors: Sean Paulsen, Michael Casey
Abstract:
In this work we present a self-supervised pretraining framework for transformers on functional Magnetic Resonance Imaging (fMRI) data. First, we pretrain our architecture on two self-supervised tasks simultaneously to teach the model a general understanding of the temporal and spatial dynamics of human auditory cortex during music listening. Our pretraining results are the first to suggest a synergistic effect of multitask training on fMRI data. Second, we finetune the pretrained models and train additional fresh models on a supervised fMRI classification task. We observe significantly improved accuracy on held-out runs with the finetuned models, which demonstrates the ability of our pretraining tasks to facilitate transfer learning. This work contributes to the growing body of literature on transformer architectures for pretraining and transfer learning with fMRI data, and serves as a proof of concept for our pretraining tasks and multitask pretraining on fMRI data.
Keywords: transfer learning, fMRI, self-supervised, brain decoding, transformer, multitask training
Procedia PDF Downloads 90
6784 Neural Network Models for Actual Cost and Actual Duration Estimation in Construction Projects: Findings from Greece
Authors: Panagiotis Karadimos, Leonidas Anthopoulos
Abstract:
Predicting the actual cost and duration of construction projects is a persistent problem for the construction sector. This paper addresses this problem with modern methods and data available from past public construction projects. 39 bridge projects constructed in Greece, with a similar type of available data, were examined. Considering each project’s attributes along with the actual cost and the actual duration, correlation analysis was performed and the most appropriate predictive project variables were defined. Additionally, the most efficient subgroup of variables was selected with the use of the WEKA application, through its attribute selection function. The selected variables were used as input neurons for neural network models. For constructing the neural network models, the application FANN Tool was used. The optimum neural network model for predicting the actual cost produced a mean squared error of 3.84886e-05 and was based on the budgeted cost and the quantity of deck concrete. The optimum neural network model for predicting the actual duration produced a mean squared error of 5.89463e-05 and was also based on the budgeted cost and the quantity of deck concrete.
Keywords: actual cost and duration, attribute selection, bridge construction, neural networks, predicting models, FANN TOOL, WEKA
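The correlation-based variable screening followed by a fit on the selected attributes can be sketched as follows. The synthetic attribute values and coefficients are illustrative (the Greek project data are not reproduced here), and a plain least-squares fit stands in for the FANN Tool neural network:

```python
import numpy as np

rng = np.random.default_rng(3)
n = 39   # number of bridge projects, as in the study

# hypothetical project attributes (synthetic, for illustration only)
budget = rng.uniform(1.0, 10.0, n)              # budgeted cost
deck_concrete = rng.uniform(100.0, 2000.0, n)   # quantity of deck concrete
span = rng.uniform(20.0, 200.0, n)              # an attribute assumed irrelevant
actual = 1.2 * budget + 0.002 * deck_concrete + rng.normal(0.0, 0.1, n)

features = {"budget": budget, "deck_concrete": deck_concrete, "span": span}

# step 1: correlation analysis to rank candidate predictive variables
corr = {k: abs(np.corrcoef(v, actual)[0, 1]) for k, v in features.items()}

# step 2: fit a least-squares baseline on the two attributes the study
# found optimal (budgeted cost and deck concrete quantity)
X = np.column_stack([np.ones(n), budget, deck_concrete])
beta, *_ = np.linalg.lstsq(X, actual, rcond=None)
mse = float(np.mean((X @ beta - actual) ** 2))
```

In the paper this screening is done with WEKA's attribute selection, and the final predictive model is a trained neural network rather than the linear fit shown here.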
Procedia PDF Downloads 134
6783 A Numerical Study on the Influence of CO2 Dilution on Combustion Characteristics of a Turbulent Diffusion Flame
Authors: Yasaman Tohidi, Rouzbeh Riazi, Shidvash Vakilipour, Masoud Mohammadi
Abstract:
The objective of the present study is to numerically investigate the effect of replacing N2 with CO2 in the air stream on the flame characteristics of a CH4 turbulent diffusion flame. The open-source Field Operation and Manipulation toolbox (OpenFOAM) has been used as the computational tool. In this regard, the laminar flamelet and modified k-ε models have been utilized as the combustion and turbulence models, respectively. Results reveal that the presence of CO2 in the air stream changes the flame shape and maximum flame temperature. Also, CO2 dilution causes an increment in the CO mass fraction.
Keywords: CH4 diffusion flame, CO2 dilution, OpenFOAM, turbulent flame
Procedia PDF Downloads 276
6782 Effect of Soil Corrosion in Failures of Buried Gas Pipelines
Authors: Saima Ali, Pathamanathan Rajeev, Imteaz A. Monzur
Abstract:
In this paper, a brief review of the corrosion mechanism in buried pipes and their modes of failure is provided, together with the available corrosion models. Moreover, a sensitivity analysis is performed to understand the influence of corrosion model parameters on the remaining life estimation. Further, a probabilistic analysis is performed to propagate the uncertainty in the corrosion model to the estimation of the remaining life of the pipe. Finally, a comparison among the corrosion models on the basis of the remaining life estimation is provided to improve the renewal plan.
Keywords: corrosion, pit depth, sensitivity analysis, exposure period
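A one-at-a-time sensitivity analysis on a corrosion model can be sketched with the common power-law pit-growth form d(t) = k·tⁿ. The paper does not name its specific models, so this form, the wall thickness, and the parameter values are illustrative assumptions:

```python
import numpy as np

def pit_depth(t, k, n):
    """Power-law pit-growth model: depth in mm after t years of exposure."""
    return k * t ** n

def remaining_life(wall, k, n, t_now):
    """Years until the pit perforates the wall, from the model's inverse."""
    t_fail = (wall / k) ** (1.0 / n)
    return max(t_fail - t_now, 0.0)

wall = 8.0            # wall thickness, mm (illustrative)
k, n_exp = 0.8, 0.55  # growth constant and exponent (illustrative)
t_now = 20.0          # current exposure period, years

base = remaining_life(wall, k, n_exp, t_now)

# one-at-a-time sensitivity: +10% on each model parameter separately
sens = {
    "k": remaining_life(wall, 1.1 * k, n_exp, t_now) - base,
    "n": remaining_life(wall, k, 1.1 * n_exp, t_now) - base,
}
```

The probabilistic analysis then replaces the point values of k and n with distributions and propagates them (e.g. by Monte Carlo sampling) to a distribution of remaining life.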
Procedia PDF Downloads 530
6781 Evaluation of Turbulence Prediction over Washington, D.C.: Comparison of DCNet Observations and North American Mesoscale Model Outputs
Authors: Nebila Lichiheb, LaToya Myles, William Pendergrass, Bruce Hicks, Dawson Cagle
Abstract:
Atmospheric transport of hazardous materials in urban areas is increasingly under investigation due to the potential impact on human health and the environment. In response to health and safety concerns, several dispersion models have been developed to analyze and predict the dispersion of hazardous contaminants. The models of interest usually rely on meteorological information obtained from the meteorological models of NOAA’s National Weather Service (NWS). However, due to the complexity of the urban environment, NWS forecasts provide an inadequate basis for dispersion computation in urban areas. A dense meteorological network in Washington, DC, called DCNet, has been operated by NOAA since 2003 to support the development of urban monitoring methodologies and provide the driving meteorological observations for atmospheric transport and dispersion models. This study focuses on the comparison of wind observations from the DCNet station on the U.S. Department of Commerce Herbert C. Hoover Building against the North American Mesoscale (NAM) model outputs for the period 2017-2019. The goal is to develop a simple methodology for modifying NAM outputs so that the dispersion requirements of the city and its urban area can be satisfied. This methodology will allow us to quantify the prediction errors of the NAM model and propose adjustments of key variables controlling dispersion model calculation.
Keywords: meteorological data, Washington D.C., DCNet data, NAM model
Procedia PDF Downloads 234
6780 Paper-Like and Battery Free Sensor Patches for Wound Monitoring
Authors: Xiaodi Su, Xin Ting Zheng, Laura Sutarlie, Nur Asinah binte Mohamed Salleh, Yong Yu
Abstract:
Wound healing is a dynamic process with multiple phases. Rapid profiling and quantitative characterization of inflammation and infection remain challenging. We have developed paper-like, battery-free multiplexed sensors for holistic wound assessment via quantitative detection of multiple inflammation and infection markers. In one of the designs, the sensor patch consists of a wax-printed paper panel with five colorimetric sensor channels arranged in a pattern resembling a five-petaled flower (denoted as a ‘Petal’ sensor). The five sensors are for temperature, pH, trimethylamine, uric acid, and moisture. The sensor patch is sandwiched between a top transparent silicone layer and a bottom adhesive wound contact layer. In the second design, a palm-shaped paper strip is fabricated by a paper-cutter printer (denoted as a ‘Palm’ sensor). This sensor strip carries five sensor regions connected by a stem sampling entrance that enables rapid colorimetric detection of multiple bacterial metabolites (aldehyde, lactate, moisture, trimethylamine, tryptophan) from wound exudate. For both the ‘Petal’ and ‘Palm’ sensors, color images can be captured by a mobile phone. From the color changes, one can quantify the concentrations of the biomarkers and then determine wound healing status and identify/quantify bacterial species in infected wounds. The ‘Petal’ and ‘Palm’ sensors were validated with in-situ animal and ex-situ skin wound models, respectively. These sensors have the potential for integration with wound dressings to allow early warning of adverse events without frequent removal of the plasters. Such in-situ and early detection of non-healing conditions can trigger immediate clinical intervention to facilitate wound care management.
Keywords: wound infection, colorimetric sensor, paper fluidic sensor, wound care
Procedia PDF Downloads 81
6779 Assessment of Sex Differences in Serum Urea and Creatinine Level in Response to Spinal Cord Injury Using Albino Rat Models
Authors: Waziri B. I., Elkhashab M. M.
Abstract:
Background: One of the most serious consequences of spinal cord injury (SCI) is progressive deterioration of renal function, mostly as a result of urine stasis and ascending infection of the paralyzed bladder. This necessitates investigation of early changes in serum urea and creatinine and associated sex-related differences in response to SCI. Methods: A total of 24 adult albino rats weighing over 150 g were divided equally into two groups, a control and an experimental group (n = 12), each containing an equal number of male and female rats. The experimental group animals were paralyzed by complete transection of the spinal cord below the T4 level after deep anesthesia with ketamine 75 mg/kg. Blood samples were collected from both groups five days post SCI for analysis. Mean values of serum urea (mmol/L) and creatinine (µmol/L) for both groups were compared; P < 0.05 was considered significant. Results: The results showed significantly higher levels (P < 0.05) of serum urea and creatinine in the male SCI models, with mean values of 92.12 ± 0.98 and 2573 ± 70.97, respectively, compared with their controls, where the mean values for serum urea and creatinine were 6.31 ± 1.48 and 476.95 ± 4.67, respectively. In the female SCI models, serum urea (13.11 ± 0.81) and creatinine (519.88 ± 31.13) were not significantly different from those of the female controls, whose serum urea and creatinine levels were 11.71 ± 1.43 and 493.69 ± 17.10, respectively (P > 0.05). Conclusion: Spinal cord injury caused a significant increase in serum urea and creatinine levels in the male models compared to the females. This indicates that males might have a higher risk of renal dysfunction following SCI.
Keywords: albino rats, creatinine, spinal cord injury (SCI), urea
Procedia PDF Downloads 139
6778 Physics-Based Earthquake Source Models for Seismic Engineering: Analysis and Validation for Dip-Slip Faults
Authors: Percy Galvez, Anatoly Petukhin, Paul Somerville, Ken Miyakoshi, Kojiro Irikura, Daniel Peter
Abstract:
Physics-based dynamic rupture modelling is necessary for estimating parameters such as rupture velocity and slip rate function that are important for ground motion simulation, but poorly resolved by observations, e.g. by seismic source inversion. In order to generate a large number of physically self-consistent rupture models, whose rupture process is consistent with the spatio-temporal heterogeneity of past earthquakes, we use multicycle simulations under the heterogeneous rate-and-state (RS) friction law for a 45° dip-slip fault. We performed a parametrization study by fully dynamic rupture modeling, and then a set of spontaneous source models was generated in a large magnitude range (Mw > 7.0). In order to validate the rupture models, we compare the scaling relations vs. seismic moment Mo for the modeled rupture area S, as well as for the average slip Dave and the slip asperity area Sa, with similar scaling relations from source inversions. Ground motions were also computed from our models. Their peak ground velocities (PGV) agree well with the GMPE values. We obtained good agreement of the permanent surface offset values with empirical relations. From the heterogeneous rupture models, we analyzed parameters which are critical for ground motion simulations, i.e. distributions of slip, slip rate, rupture initiation points, rupture velocities, and source time functions. We studied cross-correlations between them and with the friction weakening distance Dc value, the only initial heterogeneity parameter in our modeling. The main findings are: (1) high slip-rate areas coincide with or are located on an outer edge of the large slip areas, (2) ruptures have a tendency to initiate in small Dc areas, and (3) high slip-rate areas correlate with areas of small Dc, large rupture velocity and short rise-time.
Keywords: earthquake dynamics, strong ground motion prediction, seismic engineering, source characterization
Procedia PDF Downloads 144
6777 Investigating Knowledge Management in Financial Organisation: Proposing a New Model for Implementing Knowledge Management
Authors: Ziba R. Tehrani, Sanaz Moayer
Abstract:
In the age of the knowledge-based economy, knowledge management has become a key factor in sustainable competitive advantage. Knowledge management is discovering, acquiring, developing, sharing, maintaining, evaluating, and using the right knowledge at the right time by the right person in an organization; this is accomplished by creating the right link between human resources, information technology, and an appropriate structure to achieve organisational goals. Studying knowledge management in financial institutes shows that knowledge management in the banking system is not different from that in other industries, but because of the complexity of the bank's environment, implementation is more difficult. Bank managers have found that implementation of knowledge management brings many advantages to financial institutes, one of the most important of which is a reduced threat of losing institutional knowledge when personnel quit. Special attention to the internal conditions and environment of the financial institutes, and avoidance of simply copying other designs when designing the knowledge management system, are also critical issues. In this paper, we first define the knowledge management concept and introduce existing models of knowledge management; some of the most important models, those with more similarities to the others, are then reviewed. In the second step, according to bank requirements and with a focus on the knowledge management approach, the major objectives of knowledge management are identified; face-to-face interviews are used for gathering data at this stage. Thirdly, these specified objectives are analysed against responses to a questionnaire distributed to managers and expert staff of 'Karafarin Bank'. Finally, based on the analysed data, some features of the existing models are selected and a new conceptual model is proposed.
Keywords: knowledge management, financial institute, knowledge management model, organisational knowledge
Procedia PDF Downloads 360
6776 Cross-Dialect Sentence Transformation: A Comparative Analysis of Language Models for Adapting Sentences to British English
Authors: Shashwat Mookherjee, Shruti Dutta
Abstract:
This study explores linguistic distinctions among American, Indian, and Irish English dialects and assesses various Language Models (LLMs) in their ability to generate British English translations from these dialects. Using cosine similarity analysis, the study measures the linguistic proximity between original British English translations and those produced by LLMs for each dialect. The findings reveal that Indian and Irish English translations maintain notably high similarity scores, suggesting strong linguistic alignment with British English. In contrast, American English exhibits slightly lower similarity, reflecting its distinct linguistic traits. Additionally, the choice of LLM significantly impacts translation quality, with Llama-2-70b consistently demonstrating superior performance. The study underscores the importance of selecting the right model for dialect translation, emphasizing the role of linguistic expertise and contextual understanding in achieving accurate translations.
Keywords: cross-dialect translation, language models, linguistic similarity, multilingual NLP
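The cosine similarity analysis used above to score linguistic proximity reduces to a normalized dot product between sentence vectors. A minimal sketch; the toy vectors below merely stand in for real sentence embeddings, which the abstract does not specify:

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

# Toy vectors standing in for embeddings of a reference British English
# translation and an LLM-generated candidate.
reference = [0.2, 0.7, 0.1]
candidate = [0.25, 0.65, 0.05]
score = cosine_similarity(reference, candidate)
```

A score near 1 indicates close alignment between the reference and candidate translations; orthogonal vectors score 0.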
Procedia PDF Downloads 76
6775 Empirical Model for the Estimation of Global Solar Radiation on Horizontal Surface in Algeria
Authors: Malika Fekih, Abdenour Bourabaa, Rafika Hariti, Mohamed Saighi
Abstract:
In Algeria, global solar radiation and its components are not available for all locations, which creates a need for models that estimate global solar radiation from the climatological parameters of the locations. Empirical constants for these models have been estimated, and the results obtained have been tested statistically. The results show encouraging agreement between estimated and measured values.
Keywords: global solar radiation, empirical model, semi-arid areas, climatological parameters
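The abstract does not name its empirical model; a widely used form for estimating global solar radiation from climatological data is the Angström-Prescott regression, H/H0 = a + b·(S/S0), relating the clearness index to the relative sunshine duration, with constants a and b fitted to measured data. A sketch of the least-squares fit, purely illustrative (variable names are assumptions):

```python
def fit_angstrom(sunshine_fraction, clearness_index):
    """Least-squares fit of H/H0 = a + b * (S/S0); returns (a, b)."""
    n = len(sunshine_fraction)
    mean_x = sum(sunshine_fraction) / n
    mean_y = sum(clearness_index) / n
    # Ordinary least squares for a single regressor.
    sxy = sum((x - mean_x) * (y - mean_y)
              for x, y in zip(sunshine_fraction, clearness_index))
    sxx = sum((x - mean_x) ** 2 for x in sunshine_fraction)
    b = sxy / sxx
    return mean_y - b * mean_x, b
```

Once a and b are fitted for a location, the model estimates global radiation anywhere only sunshine-duration records are available.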
Procedia PDF Downloads 502
6774 Testing the Life Cycle Theory on the Capital Structure Dynamics of Trade-Off and Pecking Order Theories: A Case of Retail, Industrial and Mining Sectors
Authors: Freddy Munzhelele
Abstract:
Setting: empirical research has shown that the life cycle theory has an impact on firms' financing decisions, particularly dividend pay-outs. The life cycle theory posits that as a firm matures, it reaches a level and capacity where it distributes more cash as dividends; young firms, on the other hand, prioritise investment opportunity sets and their financing, and thus pay little or no dividends. Research on firms' financing decisions has also demonstrated, among other things, the adoption of the trade-off and pecking order theories in explaining the dynamics of firms' capital structure. The trade-off theory holds that firms weigh the costs and benefits of debt to arrive at a favourable debt structure, while the pecking order theory holds that firms follow a hierarchical order when choosing financing sources. The life cycle hypothesis as an explanation of financial managers' decisions regarding capital structure dynamics is therefore an interesting link, yet one that has been neglected in corporate finance research. Exploring this link empirically would enhance financial decision-making alternatives immensely, since no conclusive evidence has yet been found on the dynamics of capital structure. Aim: the aim of this study is to examine the impact of the life cycle theory on the capital structure dynamics, under the trade-off and pecking order theories, of firms listed in the retail, industrial and mining sectors of the JSE. These sectors are among the key contributors to GDP in the South African economy. Design and methodology: following the postpositivist research paradigm, the study is quantitative in nature and utilises secondary data obtained from the financial statements of sampled firms for the period 2010 – 2022. The firms' financial statements will be extracted from the IRESS database.
Since the data will be in panel form, a combination of static and dynamic panel data estimators will be used to analyse them. The overall data analyses will be done using the STATA program. Value add: this study directly investigates the link between the life cycle theory and the dynamics of capital structure decisions, particularly under the trade-off and pecking order theories.
Keywords: life cycle theory, trade-off theory, pecking order theory, capital structure, JSE listed firms
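As one illustration of the static panel estimators mentioned above, the fixed-effects (within) estimator removes firm-specific means before pooling the observations. A minimal single-regressor sketch; the study itself uses STATA, and the data layout here (one list of (x, y) observations per firm) is an assumption for illustration:

```python
def within_estimator(panels):
    """Fixed-effects slope: demean x and y within each firm, then pooled OLS.

    panels maps a firm identifier to its list of (x, y) observations over time.
    """
    xs, ys = [], []
    for observations in panels.values():
        mean_x = sum(x for x, _ in observations) / len(observations)
        mean_y = sum(y for _, y in observations) / len(observations)
        for x, y in observations:
            xs.append(x - mean_x)  # within-transformation removes the
            ys.append(y - mean_y)  # time-invariant firm effect
    sxx = sum(x * x for x in xs)
    sxy = sum(x * y for x, y in zip(xs, ys))
    return sxy / sxx
```

Because the demeaning sweeps out any time-invariant firm effect, the estimator recovers the common slope even when each firm has a different intercept.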
Procedia PDF Downloads 61
6773 Coarse-Grained Molecular Simulations to Estimate Thermophysical Properties of Phase Equilibria
Authors: Hai Hoang, Thanh Xuan Nguyen Thi, Guillaume Galliero
Abstract:
Coarse-grained (CG) molecular simulations have been shown to be an efficient way to estimate thermophysical (static and dynamic) properties of fluids. Several strategies for defining CG molecular models have been developed and reported in the literature. Among them, those based on a top-down strategy (i.e. CG molecular models related to macroscopic observables), despite being heuristic, have increasingly gained attention, probably because of their simplicity of implementation and their ability to provide reasonable results not only for simple but also for complex systems. Regarding the simple force fields associated with these CG molecular models, it has been found that the four-parameter Mie chain model is one of the best compromises for describing static thermophysical properties (e.g. phase diagram, saturation pressure). However, the parameterization procedures for these Mie chain CG molecular models given in the literature are generally insufficient to provide static and dynamic (e.g. viscosity) properties simultaneously. To deal with such situations, we have extended the corresponding states approach by using a quantity associated with the liquid viscosity. Results obtained from molecular simulations have shown that our approach is able to yield good estimates of both static and dynamic thermophysical properties for various real non-associating fluids. In addition, we will show that for simple (e.g. phase diagram, saturation pressure) and complex (e.g. thermodynamic response functions, thermodynamic energy potentials) static properties, our scheme generally provides improved results compared to existing approaches.
Keywords: coarse-grained model, Mie potential, molecular simulations, thermophysical properties, phase equilibria
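The Mie (n, m) pair potential underlying the chain models above has the standard form U(r) = C·ε·[(σ/r)^n − (σ/r)^m] with prefactor C = (n/(n−m))·(n/m)^(m/(n−m)); for n = 12, m = 6 it reduces to the familiar Lennard-Jones potential. A sketch in reduced units (this shows only the pair potential, not the paper's four chain-model parameters):

```python
def mie_potential(r, epsilon, sigma, n=12.0, m=6.0):
    """Mie (n, m) pair potential; n=12, m=6 recovers Lennard-Jones."""
    # Prefactor that fixes the well depth at exactly -epsilon.
    c = (n / (n - m)) * (n / m) ** (m / (n - m))
    return c * epsilon * ((sigma / r) ** n - (sigma / r) ** m)
```

Varying n and m tunes the softness of the repulsive core and the range of attraction, which is what gives the Mie family its extra flexibility over Lennard-Jones when fitting both static and dynamic properties.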
Procedia PDF Downloads 336
6772 Learning Traffic Anomalies from Generative Models on Real-Time Observations
Authors: Fotis I. Giasemis, Alexandros Sopasakis
Abstract:
This study focuses on detecting traffic anomalies using generative models applied to real-time observations. By integrating a Graph Neural Network with an attention-based mechanism within the Spatiotemporal Generative Adversarial Network framework, we enhance the capture of both spatial and temporal dependencies in traffic data. Leveraging minute-by-minute observations from cameras distributed across Gothenburg, our approach provides a more detailed and precise anomaly detection system, effectively capturing the complex topology and dynamics of urban traffic networks.
Keywords: traffic, anomaly detection, GNN, GAN
Procedia PDF Downloads 8
6771 Comprehensive Experimental Study to Determine Energy Dissipation of Nappe Flows on Stepped Chutes
Authors: Abdollah Ghasempour, Mohammad Reza Kavianpour, Majid Galoie
Abstract:
This study investigated the fundamental parameters that play an effective role in the energy dissipation of nappe flows on stepped chutes, in order to estimate an empirical relationship using dimensional analysis. To achieve this goal, a comprehensive experimental study of several large-scale physical models with various step geometries, slopes, discharges, etc. was carried out. For all models, hydraulic parameters such as velocity, pressure, water depth, and flow regime were measured precisely. The effective parameters could then be determined by analysis of the experimental data. Finally, a dimensional analysis was performed in order to estimate an empirical relationship for evaluating the energy dissipation of nappe flows on stepped chutes. Because large-scale physical models were used in this study, the empirical relationship is in very good agreement with the experimental results.
Keywords: nappe flow, energy dissipation, stepped chute, dimensional analysis
Procedia PDF Downloads 361
6770 Percolation Transition in an Agglomeration of Spherical Particles
Authors: Johannes J. Schneider, Mathias S. Weyland, Peter Eggenberger Hotz, William D. Jamieson, Oliver Castell, Alessia Faggian, Rudolf M. Füchslin
Abstract:
Agglomerations of polydisperse systems of spherical particles are created in computer simulations using a simplified stochastic-hydrodynamic model: Particles sink to the bottom of the cylinder, taking into account gravity reduced by the buoyant force, the Stokes friction force, the added mass effect, and random velocity changes. Two types of particles are considered, with one of them being able to create connections to neighboring particles of the same type, thus forming a network within the agglomeration at the bottom of a cylinder. Decreasing the fraction of these particles, a percolation transition occurs. The critical regime is determined by investigating the maximum cluster size and the percolation susceptibility.
Keywords: binary system, maximum cluster size, percolation, polydisperse
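The maximum cluster size used above as the order parameter can be computed from the bond network with a breadth-first search over connected components. A minimal sketch, assuming the agglomeration has already been reduced to a list of particle-particle connections:

```python
from collections import deque

def max_cluster_size(n, edges):
    """Size of the largest connected cluster among n particles.

    edges is a list of (i, j) pairs of connected particle indices.
    """
    adjacency = [[] for _ in range(n)]
    for u, v in edges:
        adjacency[u].append(v)
        adjacency[v].append(u)
    seen = [False] * n
    best = 0
    for start in range(n):
        if seen[start]:
            continue
        # Breadth-first search collects one connected component.
        seen[start] = True
        queue = deque([start])
        size = 0
        while queue:
            node = queue.popleft()
            size += 1
            for neighbor in adjacency[node]:
                if not seen[neighbor]:
                    seen[neighbor] = True
                    queue.append(neighbor)
        best = max(best, size)
    return best
```

Tracking this quantity while sweeping the fraction of connectable particles is what localizes the percolation transition.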
Procedia PDF Downloads 61
6769 Model Driven Architecture Methodologies: A Review
Authors: Arslan Murtaza
Abstract:
Model Driven Architecture (MDA) is a technique presented by the OMG (Object Management Group) for software development, in which different models are proposed and then converted into code. The main plan is to specify the task using a PIM (Platform Independent Model), transform it into a PSM (Platform Specific Model), and then convert it into code. This review paper describes some challenges and issues that are faced in MDA, the types and transformations of models (e.g. CIM, PIM and PSM), and an evaluation of MDA-based methodologies.
Keywords: OMG, model driven architecture (MDA), computation independent model (CIM), platform independent model (PIM), platform specific model (PSM), MDA-based methodologies
Procedia PDF Downloads 459
6768 Application of Transportation Models for Analysing Future Intercity and Intracity Travel Patterns in Kuwait
Authors: Srikanth Pandurangi, Basheer Mohammed, Nezar Al Sayegh
Abstract:
In order to meet the increasing demand for housing care for Kuwaiti citizens, the government authorities in Kuwait are undertaking a series of projects in the form of new large cities outside the current urban area. Al Mutlaa City, located to the north-west of the Kuwait Metropolitan Area, is one such project out of the 15 planned new cities. The city accommodates a wide variety of residential developments, employment opportunities, and commercial, recreational, health care and institutional uses. This paper examines the application of comprehensive transportation demand modeling work undertaken on the VISUM platform to understand future intracity and intercity travel distribution patterns in Kuwait. The models developed varied in level of detail: a strategic model update, sub-area models representing the future demand of Al Mutlaa City, and sub-area models built to estimate demand in the residential neighborhoods of the city. This paper aims to offer a model update framework that facilitates easy integration between sub-area models and strategic national models for unified traffic forecasts. It presents the transportation demand modeling results used to inform the planning of a multi-modal transportation system for Al Mutlaa City. It also presents the household survey data collection efforts undertaken using GPS devices (a first in Kuwait) and notebook-computer-based digital survey forms for interviewing a representative sample of citizens and residents. The survey results formed the basis for estimating the trip generation rates and trip distribution coefficients used in the strategic base year model calibration and validation process.
Keywords: innovative methods in transportation data collection, integrated public transportation system, traffic forecasts, transportation modeling, travel behavior
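Trip distribution in demand models of this kind is commonly computed with a gravity model, which allocates each zone's trip productions to attraction zones in proportion to their attractiveness and a travel-cost deterrence function. A minimal singly constrained sketch; the exponential deterrence function and the beta parameter are illustrative assumptions, not taken from the study:

```python
import math

def gravity_trip_distribution(productions, attractions, cost, beta=0.1):
    """Singly constrained gravity model: T_ij proportional to P_i * A_j * exp(-beta * c_ij)."""
    trips = []
    for i, production in enumerate(productions):
        # Deterrence-weighted attractiveness of each destination zone.
        weights = [a * math.exp(-beta * cost[i][j])
                   for j, a in enumerate(attractions)]
        total = sum(weights)
        # Normalize so each origin's row sums to its productions.
        trips.append([production * w / total for w in weights])
    return trips
```

The survey-derived trip generation rates would supply the productions and attractions, while the distribution coefficients calibrate the deterrence parameter against observed travel patterns.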
Procedia PDF Downloads 222
6767 Predicting the Uniaxial Strength Distribution of Brittle Materials Based on a Uniaxial Test
Authors: Benjamin Sonnenreich
Abstract:
Brittle fracture failure probability is best described using a stochastic approach which is based on the 'weakest link concept' and the connection between a microstructure and macroscopic fracture scale. A general theoretical and experimental framework is presented to predict the uniaxial strength distribution according to independent uniaxial test data. The framework takes as input the applied stresses, the geometry, the materials, the defect distributions and the relevant random variables from uniaxial test results and gives as output an overall failure probability that can be used to improve the reliability of practical designs. Additionally, the method facilitates comparisons of strength data from several sources, uniaxial tests, and sample geometries.
Keywords: brittle fracture, strength distribution, uniaxial, weakest link concept
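The weakest-link concept invoked above is classically formalized with the two-parameter Weibull distribution, in which failure probability grows with both the applied stress and the stressed volume. A sketch of that standard form, offered as an illustration only since the abstract does not commit to this exact framework:

```python
import math

def weibull_failure_probability(stress, sigma0, m, volume_ratio=1.0):
    """Weakest-link failure probability: 1 - exp(-(V/V0) * (stress/sigma0)^m).

    sigma0 is the characteristic strength, m the Weibull modulus, and
    volume_ratio the stressed volume relative to the reference volume V0.
    """
    return 1.0 - math.exp(-volume_ratio * (stress / sigma0) ** m)
```

At the characteristic strength (stress = sigma0 with volume_ratio = 1), the failure probability is 1 − 1/e, roughly 63%, which is how sigma0 is conventionally defined; larger specimens fail at lower stresses because they sample more defects.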
Procedia PDF Downloads 325