Search results for: melodic models
6192 Neural Network Models for Actual Cost and Actual Duration Estimation in Construction Projects: Findings from Greece
Authors: Panagiotis Karadimos, Leonidas Anthopoulos
Abstract:
Predicting the actual cost and duration of construction projects is a persistent problem for the construction sector. This paper addresses the problem with modern methods and data available from past public construction projects. Thirty-nine bridge projects constructed in Greece, with a similar type of available data, were examined. Correlation analysis between each project’s attributes and the actual cost and actual duration is performed, and the most appropriate predictive project variables are defined. Additionally, the most efficient subgroup of variables is selected with the WEKA application, through its attribute selection function. The variables selected through correlation analysis are used as input neurons for neural network models, which are constructed with the FANN Tool application. The optimum neural network model for predicting the actual cost produced a mean squared error of 3.84886e-05 and was based on the budgeted cost and the quantity of deck concrete. The optimum neural network model for predicting the actual duration produced a mean squared error of 5.89463e-05 and was also based on the budgeted cost and the quantity of deck concrete.
Keywords: actual cost and duration, attribute selection, bridge construction, neural networks, predicting models, FANN TOOL, WEKA
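As a hedged illustration of the modelling step described in this abstract, the sketch below trains a small feed-forward network on the two selected inputs (budgeted cost and deck concrete quantity) to predict actual cost. The data values, network size, and the use of scikit-learn in place of the FANN Tool are assumptions made for this example only.

```python
# Minimal sketch (not the authors' FANN Tool workflow): a two-input neural
# network that maps budgeted cost and deck concrete quantity to actual cost.
# All numbers below are hypothetical placeholders, not project data.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import MinMaxScaler

X = np.array([[2.1e6, 850.0],    # [budgeted cost (EUR), deck concrete (m^3)]
              [3.4e6, 1200.0],
              [1.8e6, 700.0],
              [4.0e6, 1500.0]])
y = np.array([2.4e6, 3.9e6, 1.9e6, 4.6e6])   # actual cost (EUR)

# Scale inputs/outputs to [0, 1] so the reported MSE is on normalized values,
# as is typical when small MSE figures such as 3.8e-05 are quoted.
xs, ys = MinMaxScaler(), MinMaxScaler()
Xn = xs.fit_transform(X)
yn = ys.fit_transform(y.reshape(-1, 1)).ravel()

model = MLPRegressor(hidden_layer_sizes=(4,), max_iter=5000, random_state=0)
model.fit(Xn, yn)
mse = np.mean((model.predict(Xn) - yn) ** 2)
print(f"normalized training MSE: {mse:.3e}")
```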
Procedia PDF Downloads 134
6191 A Numerical Study on the Influence of CO2 Dilution on Combustion Characteristics of a Turbulent Diffusion Flame
Authors: Yasaman Tohidi, Rouzbeh Riazi, Shidvash Vakilipour, Masoud Mohammadi
Abstract:
The objective of the present study is to numerically investigate the effect of replacing N2 with CO2 in the air stream on the flame characteristics of a CH4 turbulent diffusion flame. The open-source Field Operation and Manipulation (OpenFOAM) toolbox has been used as the computational tool. In this regard, laminar flamelet and modified k-ε models have been utilized as the combustion and turbulence models, respectively. Results reveal that the presence of CO2 in the air stream changes the flame shape and maximum flame temperature. Also, CO2 dilution causes an increase in the CO mass fraction.
Keywords: CH4 diffusion flame, CO2 dilution, OpenFOAM, turbulent flame
Procedia PDF Downloads 275
6190 Effect of Soil Corrosion in Failures of Buried Gas Pipelines
Authors: Saima Ali, Pathamanathan Rajeev, Imteaz A. Monzur
Abstract:
In this paper, a brief review of the corrosion mechanism in buried pipes and the modes of failure is provided, together with the available corrosion models. Moreover, a sensitivity analysis is performed to understand the influence of corrosion model parameters on the remaining life estimation. Further, a probabilistic analysis is performed to propagate the uncertainty in the corrosion model to the estimation of the remaining life of the pipe. Finally, a comparison among the corrosion models on the basis of the remaining life estimation is provided to improve the renewal plan.
Keywords: corrosion, pit depth, sensitivity analysis, exposure period
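The abstract does not state which corrosion models are compared; as a hedged sketch of the probabilistic step, the example below assumes a commonly used power-law pit-depth model d(t) = k·t^n with hypothetical parameter distributions and propagates their uncertainty to a remaining-life estimate by Monte Carlo sampling.

```python
# Illustrative sketch only: the corrosion model, the pipe dimensions and the
# parameter distributions are assumptions, not values from the paper.
import numpy as np

rng = np.random.default_rng(0)
n_samples = 100_000
wall_thickness_mm = 9.5          # hypothetical pipe wall thickness
failure_fraction = 0.5           # failure assumed when pit depth = 50% of wall

k = rng.normal(0.8, 0.15, n_samples)   # pit-growth multiplier (mm/yr^n), hypothetical
n = rng.normal(0.55, 0.05, n_samples)  # pit-growth exponent, hypothetical

# Solve d(t) = k * t^n = failure depth for t to obtain the remaining life.
d_fail = failure_fraction * wall_thickness_mm
t_fail = (d_fail / k) ** (1.0 / n)

print(f"mean remaining life: {t_fail.mean():.1f} yr")
print(f"5th-95th percentile: {np.percentile(t_fail, [5, 95])}")
```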
Procedia PDF Downloads 529
6189 Evaluation of Turbulence Prediction over Washington, D.C.: Comparison of DCNet Observations and North American Mesoscale Model Outputs
Authors: Nebila Lichiheb, LaToya Myles, William Pendergrass, Bruce Hicks, Dawson Cagle
Abstract:
Atmospheric transport of hazardous materials in urban areas is increasingly under investigation due to the potential impact on human health and the environment. In response to health and safety concerns, several dispersion models have been developed to analyze and predict the dispersion of hazardous contaminants. The models of interest usually rely on meteorological information obtained from the meteorological models of NOAA’s National Weather Service (NWS). However, due to the complexity of the urban environment, NWS forecasts provide an inadequate basis for dispersion computation in urban areas. A dense meteorological network in Washington, DC, called DCNet, has been operated by NOAA since 2003 to support the development of urban monitoring methodologies and provide the driving meteorological observations for atmospheric transport and dispersion models. This study focuses on the comparison of wind observations from the DCNet station on the U.S. Department of Commerce Herbert C. Hoover Building against the North American Mesoscale (NAM) model outputs for the period 2017-2019. The goal is to develop a simple methodology for modifying NAM outputs so that the dispersion requirements of the city and its urban area can be satisfied. This methodology will allow us to quantify the prediction errors of the NAM model and propose adjustments of key variables controlling dispersion model calculation.
Keywords: meteorological data, Washington D.C., DCNet data, NAM model
Procedia PDF Downloads 233
6188 Assessment of Sex Differences in Serum Urea and Creatinine Level in Response to Spinal Cord Injury Using Albino Rat Models
Authors: Waziri B. I., Elkhashab M. M.
Abstract:
Background: One of the most serious consequences of spinal cord injury (SCI) is progressive deterioration of renal function, mostly as a result of urine stasis and ascending infection of the paralyzed bladder. This necessitates investigation of the early changes in serum urea and creatinine and the associated sex-related differences in response to SCI. Methods: A total of 24 adult albino rats weighing above 150 g were divided equally into two groups, a control and an experimental group (n = 12), each containing an equal number of male and female rats. The experimental group animals were paralyzed by complete transection of the spinal cord below the T4 level after deep anesthesia with ketamine 75 mg/kg. Blood samples were collected from both groups five days post-SCI for analysis. Mean values of serum urea (mmol/L) and creatinine (µmol/L) for both groups were compared. P < 0.05 was considered significant. Results: The results showed significantly higher levels (P < 0.05) of serum urea and creatinine in the male SCI models, with mean values of 92.12 ± 0.98 and 2573 ± 70.97 respectively, compared with their controls, where the mean values for serum urea and creatinine were 6.31 ± 1.48 and 476.95 ± 4.67 respectively. In the female SCI models, serum urea 13.11 ± 0.81 and creatinine 519.88 ± 31.13 were not significantly different from those of the female controls, with serum urea and creatinine levels of 11.71 ± 1.43 and 493.69 ± 17.10 respectively (P > 0.05). Conclusion: Spinal cord injury caused a significant increase in serum urea and creatinine levels in the male models compared to the females. This indicated that males might have a higher risk of renal dysfunction following SCI.
Keywords: albino rats, creatinine, spinal cord injury (SCI), urea
Procedia PDF Downloads 138
6187 Physics-Based Earthquake Source Models for Seismic Engineering: Analysis and Validation for Dip-Slip Faults
Authors: Percy Galvez, Anatoly Petukhin, Paul Somerville, Ken Miyakoshi, Kojiro Irikura, Daniel Peter
Abstract:
Physics-based dynamic rupture modelling is necessary for estimating parameters such as rupture velocity and slip rate function that are important for ground motion simulation, but poorly resolved by observations, e.g. by seismic source inversion. In order to generate a large number of physically self-consistent rupture models, whose rupture process is consistent with the spatio-temporal heterogeneity of past earthquakes, we use multicycle simulations under the heterogeneous rate-and-state (RS) friction law for a 45° dip-slip fault. We performed a parametrization study by fully dynamic rupture modeling, and then, a set of spontaneous source models was generated in a large magnitude range (Mw > 7.0). In order to validate rupture models, we compare the source scaling relations vs. seismic moment Mo for the modeled rupture area S, as well as average slip Dave and the slip asperity area Sa, with similar scaling relations from the source inversions. Ground motions were also computed from our models. Their peak ground velocities (PGV) agree well with the GMPE values. We obtained good agreement of the permanent surface offset values with empirical relations. From the heterogeneous rupture models, we analyzed parameters, which are critical for ground motion simulations, i.e. distributions of slip, slip rate, rupture initiation points, rupture velocities, and source time functions. We studied cross-correlations between them and with the friction weakening distance Dc value, the only initial heterogeneity parameter in our modeling. The main findings are: (1) high slip-rate areas coincide with or are located on an outer edge of the large slip areas, (2) ruptures have a tendency to initiate in small Dc areas, and (3) high slip-rate areas correlate with areas of small Dc, large rupture velocity and short rise-time.
Keywords: earthquake dynamics, strong ground motion prediction, seismic engineering, source characterization
Procedia PDF Downloads 144
6186 Investigating Knowledge Management in Financial Organisation: Proposing a New Model for Implementing Knowledge Management
Authors: Ziba R. Tehrani, Sanaz Moayer
Abstract:
In the age of the knowledge-based economy, knowledge management has become a key factor in sustainable competitive advantage. Knowledge management is discovering, acquiring, developing, sharing, maintaining, evaluating, and using the right knowledge at the right time by the right person in an organisation; this is accomplished by creating the right link between human resources, information technology, and an appropriate structure in order to achieve organisational goals. Studying knowledge management in financial institutes shows that knowledge management in the banking system is not different from other industries, but because of the complexity of the banking environment, its implementation is more difficult. Bank managers have found that the implementation of knowledge management brings many advantages to financial institutes, one of the most important of which is reducing the threat of losing information when personnel quit their jobs. Special attention to the internal conditions and environment of the financial institutes, and avoiding simple copying of existing designs, is also a critical issue in designing knowledge management. In this paper, the knowledge management concept is first defined and existing models of knowledge management are introduced; then some of the most important models, which have the most similarities with other models, are reviewed. In the second step, the major objectives of knowledge management are identified according to bank requirements, with a focus on the knowledge management approach. Face-to-face interviews are used for gathering data at this stage. Thirdly, these specified objectives are analysed against the responses to a questionnaire distributed among managers and expert staff of ‘Karafarin Bank’. Finally, based on the analysed data, some features of existing models are selected and a new conceptual model is proposed.
Keywords: knowledge management, financial institute, knowledge management model, organisational knowledge
Procedia PDF Downloads 360
6185 Cross-Dialect Sentence Transformation: A Comparative Analysis of Language Models for Adapting Sentences to British English
Authors: Shashwat Mookherjee, Shruti Dutta
Abstract:
This study explores linguistic distinctions among American, Indian, and Irish English dialects and assesses various Large Language Models (LLMs) in their ability to generate British English translations from these dialects. Using cosine similarity analysis, the study measures the linguistic proximity between original British English translations and those produced by LLMs for each dialect. The findings reveal that Indian and Irish English translations maintain notably high similarity scores, suggesting strong linguistic alignment with British English. In contrast, American English exhibits slightly lower similarity, reflecting its distinct linguistic traits. Additionally, the choice of LLM significantly impacts translation quality, with Llama-2-70b consistently demonstrating superior performance. The study underscores the importance of selecting the right model for dialect translation, emphasizing the role of linguistic expertise and contextual understanding in achieving accurate translations.
Keywords: cross-dialect translation, language models, linguistic similarity, multilingual NLP
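The abstract does not specify how sentences are represented before the cosine similarity is computed; the hedged sketch below uses a simple TF-IDF representation and invented example sentences purely to illustrate the comparison step.

```python
# Hedged sketch: TF-IDF vectors stand in for whatever sentence representation
# the study actually uses; the two sentences are invented examples.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

reference_british = "The lorry driver parked near the lift and bought some crisps."
llm_output_british = "The lorry driver parked by the lift and bought a packet of crisps."

vec = TfidfVectorizer().fit([reference_british, llm_output_british])
tfidf = vec.transform([reference_british, llm_output_british])
score = cosine_similarity(tfidf[0], tfidf[1])[0, 0]
print(f"cosine similarity between reference and LLM translation: {score:.3f}")
```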
Procedia PDF Downloads 75
6184 Empirical Model for the Estimation of Global Solar Radiation on Horizontal Surface in Algeria
Authors: Malika Fekih, Abdenour Bourabaa, Rafika Hariti, Mohamed Saighi
Abstract:
In Algeria, the global solar radiation and its components are not available for all locations, so different models that use the climatological parameters of the locations are required for the estimation of global solar radiation. Empirical constants for these models have been estimated, and the results obtained have been tested statistically. The results show encouraging agreement between estimated and measured values.
Keywords: global solar radiation, empirical model, semi-arid areas, climatological parameters
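The abstract does not give the form of the empirical model; as a hedged illustration, the sketch below fits an Angström-Prescott-type relation H/H0 = a + b(S/S0) to invented sunshine-fraction data to show how the empirical constants could be estimated.

```python
# Illustrative sketch, not the paper's model: an Angstrom-Prescott type
# relation H/H0 = a + b * (S/S0) is a common empirical form for estimating
# global solar radiation from sunshine duration. Data below are invented.
import numpy as np

S_over_S0 = np.array([0.55, 0.62, 0.70, 0.78, 0.85, 0.90])  # sunshine fraction
H_over_H0 = np.array([0.48, 0.52, 0.58, 0.63, 0.68, 0.71])  # clearness index

# Least-squares fit of the empirical constants a and b.
b, a = np.polyfit(S_over_S0, H_over_H0, 1)
estimated = a + b * S_over_S0
rmse = np.sqrt(np.mean((estimated - H_over_H0) ** 2))
print(f"a = {a:.3f}, b = {b:.3f}, RMSE = {rmse:.4f}")
```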
Procedia PDF Downloads 502
6183 Coarse-Grained Molecular Simulations to Estimate Thermophysical Properties of Phase Equilibria
Authors: Hai Hoang, Thanh Xuan Nguyen Thi, Guillaume Galliero
Abstract:
Coarse-Grained (CG) molecular simulations have been shown to be an efficient way to estimate the thermophysical (static and dynamic) properties of fluids. Several strategies have been developed and reported in the literature for defining CG molecular models. Among them, those based on a top-down strategy (i.e. CG molecular models related to macroscopic observables), despite being heuristic, have increasingly gained attention. This is probably due to their simplicity of implementation and their ability to provide reasonable results for not only simple but also complex systems. Regarding the simple force fields associated with these CG molecular models, it has been found that the four-parameter Mie chain model is one of the best compromises for describing thermophysical static properties (e.g. phase diagram, saturation pressure). However, the parameterization procedures of these Mie-chain CG molecular models given in the literature are generally insufficient to simultaneously provide static and dynamic (e.g. viscosity) properties. To deal with such situations, we have extended the corresponding-states approach by using a quantity associated with the liquid viscosity. Results obtained from molecular simulations have shown that our approach is able to yield good estimates of both static and dynamic thermophysical properties for various real non-associating fluids. In addition, we will show that for simple (e.g. phase diagram, saturation pressure) and complex (e.g. thermodynamic response functions, thermodynamic energy potentials) static properties, our scheme generally provides improved results compared to existing approaches.
Keywords: coarse-grained model, Mie potential, molecular simulations, thermophysical properties, phase equilibria
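As a minimal sketch of the pair interaction underlying the Mie chain models mentioned above, the code below evaluates the generalized Mie (n, m) potential; the exponent and well-depth values are generic placeholders rather than any fluid's fitted parameters.

```python
# Minimal sketch of the Mie pair potential used in coarse-grained chain models;
# parameter values below are generic placeholders, not fitted force-field data.
import numpy as np

def mie_potential(r, epsilon, sigma, n=12, m=6):
    """Mie (n, m) potential U(r); reduces to Lennard-Jones for n=12, m=6."""
    c = (n / (n - m)) * (n / m) ** (m / (n - m))   # Mie prefactor
    return c * epsilon * ((sigma / r) ** n - (sigma / r) ** m)

r = np.linspace(0.9, 3.0, 200)          # separation in units of sigma
u = mie_potential(r, epsilon=1.0, sigma=1.0, n=14, m=6)
print(f"potential minimum: {u.min():.3f} epsilon at r = {r[np.argmin(u)]:.2f} sigma")
```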
Procedia PDF Downloads 336
6182 Learning Traffic Anomalies from Generative Models on Real-Time Observations
Authors: Fotis I. Giasemis, Alexandros Sopasakis
Abstract:
This study focuses on detecting traffic anomalies using generative models applied to real-time observations. By integrating a Graph Neural Network with an attention-based mechanism within the Spatiotemporal Generative Adversarial Network framework, we enhance the capture of both spatial and temporal dependencies in traffic data. Leveraging minute-by-minute observations from cameras distributed across Gothenburg, our approach provides a more detailed and precise anomaly detection system, effectively capturing the complex topology and dynamics of urban traffic networks.
Keywords: traffic, anomaly detection, GNN, GAN
Procedia PDF Downloads 6
6181 Comprehensive Experimental Study to Determine Energy Dissipation of Nappe Flows on Stepped Chutes
Authors: Abdollah Ghasempour, Mohammad Reza Kavianpour, Majid Galoie
Abstract:
This study has investigated the fundamental parameters which have an effective role in the energy dissipation of nappe flows on stepped chutes, in order to estimate an empirical relationship using dimensional analysis. To achieve this goal, a comprehensive experimental study on large-scale physical models with various step geometries, slopes, discharges, etc. was carried out. For all models, hydraulic parameters such as velocity, pressure, water depth, flow regime, etc. were measured precisely. The effective parameters could then be determined by analysis of the experimental data. Finally, a dimensional analysis was performed in order to estimate an empirical relationship for the evaluation of the energy dissipation of nappe flows on stepped chutes. Because large-scale physical models were used in this study, the empirical relationship is in very good agreement with the experimental results.
Keywords: nappe flow, energy dissipation, stepped chute, dimensional analysis
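The dimensionless groups and the fitted relationship are not given in the abstract; the sketch below is only a hedged illustration of the curve-fitting step, assuming a hypothetical power-law relation between relative energy dissipation and the critical-depth-to-step-height ratio, with invented data points.

```python
# Hedged illustration: the relation dE/E0 = a * (yc / h)^b and all data points
# are assumptions for this example, not the paper's empirical relationship.
import numpy as np

yc_over_h = np.array([0.3, 0.5, 0.7, 0.9, 1.1, 1.3])      # dimensionless discharge
rel_dissipation = np.array([0.92, 0.88, 0.83, 0.78, 0.74, 0.70])

# Fit the power law by linear regression in log-log space.
b, log_a = np.polyfit(np.log(yc_over_h), np.log(rel_dissipation), 1)
a = np.exp(log_a)
print(f"dE/E0 ~ {a:.3f} * (yc/h)^{b:.3f}")
```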
Procedia PDF Downloads 361
6180 Model Driven Architecture Methodologies: A Review
Authors: Arslan Murtaza
Abstract:
Model Driven Architecture (MDA) is a technique presented by the OMG (Object Management Group) for software development in which different models are proposed and then converted into code. The main plan is to identify the task using a PIM (Platform Independent Model), transform it into a PSM (Platform Specific Model), and then convert it into code. This review paper describes some challenges and issues that are faced in MDA, the types and transformation of models (e.g. CIM, PIM and PSM), and the evaluation of MDA-based methodologies.
Keywords: OMG, model driven architecture (MDA), computation independent model (CIM), platform independent model (PIM), platform specific model (PSM), MDA-based methodologies
Procedia PDF Downloads 458
6179 Application of Transportation Models for Analysing Future Intercity and Intracity Travel Patterns in Kuwait
Authors: Srikanth Pandurangi, Basheer Mohammed, Nezar Al Sayegh
Abstract:
In order to meet the increasing demand for housing care for Kuwaiti citizens, the government authorities in Kuwait are undertaking a series of projects in the form of new large cities outside the current urban area. Al Mutlaa City, located to the north-west of the Kuwait Metropolitan Area, is one such project out of the 15 planned new cities. The city accommodates a wide variety of residential developments, employment opportunities, and commercial, recreational, health care and institutional uses. This paper examines the application of comprehensive transportation demand modeling work undertaken on the VISUM platform to understand future intracity and intercity travel distribution patterns in Kuwait. The scope of the models developed varied in level of detail: a strategic model update, sub-area models representing the future demand of Al Mutlaa City, and sub-area models built to estimate the demand in the residential neighborhoods of the city. This paper aims at offering a model update framework that facilitates easy integration between sub-area models and strategic national models for unified traffic forecasts. This paper presents the transportation demand modeling results utilized in informing the planning of a multi-modal transportation system for Al Mutlaa City. This paper also presents the household survey data collection efforts undertaken using GPS devices (for the first time in Kuwait) and notebook-computer-based digital survey forms for interviewing a representative sample of citizens and residents. The survey results formed the basis for estimating the trip generation rates and trip distribution coefficients used in the strategic base-year model calibration and validation process.
Keywords: innovative methods in transportation data collection, integrated public transportation system, traffic forecasts, transportation modeling, travel behavior
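As a hedged sketch of the trip-distribution step that such demand models typically include, the example below applies a production-constrained gravity model to three invented zones; the productions, attractions, travel times and deterrence parameter are placeholders, not the project's VISUM setup.

```python
# Hedged sketch of a production-constrained gravity model for trip distribution;
# all zonal values below are invented for illustration.
import numpy as np

productions = np.array([1200.0, 800.0, 600.0])     # trips produced per zone
attractions = np.array([900.0, 1000.0, 700.0])     # trips attracted per zone
travel_time = np.array([[5.0, 15.0, 25.0],
                        [15.0, 5.0, 12.0],
                        [25.0, 12.0, 5.0]])         # minutes between zones

beta = 0.1                                          # assumed deterrence parameter
deterrence = np.exp(-beta * travel_time)

# Rows of the trip matrix sum to zonal productions.
weights = attractions * deterrence
trips = productions[:, None] * weights / weights.sum(axis=1, keepdims=True)
print(np.round(trips, 1))
```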
Procedia PDF Downloads 222
6178 Modeling of Anisotropic Hardening Based on Crystal Plasticity Theory and Virtual Experiments
Authors: Bekim Berisha, Sebastian Hirsiger, Pavel Hora
Abstract:
Advanced material models involving several sets of model parameters require a big experimental effort. As models become more and more complex, like e.g. the so-called “Homogeneous Anisotropic Hardening” (HAH) model for the description of the yielding behavior in the 2D/3D stress space, the number and complexity of the required experiments also increase continuously. In the context of sheet metal forming, these requirements are even more pronounced because of the anisotropic behavior of sheet materials. In addition, some of the experiments are very difficult to perform, e.g. the plane stress biaxial compression test. Accordingly, tensile tests in at least three directions, biaxial tests and tension-compression or shear-reverse shear experiments are performed to determine the parameters of the macroscopic models. Therefore, determination of the macroscopic model parameters based on virtual experiments is a very promising strategy to overcome these difficulties. For this purpose, in the framework of multiscale material modeling, a dislocation-density-based crystal plasticity model in combination with an FFT-based spectral solver is applied to perform virtual experiments. Modeling the plastic behavior of metals based on crystal plasticity theory is a well-established methodology. However, in general, the computation time is very high, and therefore the computations are restricted to simplified microstructures as well as simple polycrystal models. In this study, a dislocation-density-based crystal plasticity model – including an implementation of the backstress – is used in a spectral solver framework to generate virtual experiments for three deep drawing materials: DC05 steel and the AA6111-T4 and AA4045 aluminum alloys. For this purpose, uniaxial as well as multiaxial loading cases, including various pre-strain histories, have been computed and validated with real experiments. These investigations showed that crystal plasticity modeling in the framework of Representative Volume Elements (RVEs) can be used to replace most of the expensive real experiments. Further, model parameters of advanced macroscopic models like the HAH model can be determined from virtual experiments, even for multiaxial deformation histories. It was also found that crystal plasticity modeling can be used to model anisotropic hardening more accurately by considering the backstress, similar to well-established macroscopic kinematic hardening models. It can be concluded that an efficient coupling of crystal plasticity models and the spectral solver leads to a significant reduction of the number of real experiments needed to calibrate macroscopic models. This advantage also leads to a significant reduction of the computational effort needed for the optimization of metal forming processes. Further, due to the time-efficient spectral solver used in the computation of the RVE models, detailed modeling of the microstructure is possible.
Keywords: anisotropic hardening, crystal plasticity, microstructure, spectral solver
Procedia PDF Downloads 314
6177 Prediction of Formation Pressure Using Artificial Intelligence Techniques
Authors: Abdulmalek Ahmed
Abstract:
Formation pressure is the main factor that affects the economics and efficiency of drilling operations. Knowing the pore pressure and the parameters that affect it will help to reduce the cost of the drilling process. Many empirical models reported in the literature have been used to calculate the formation pressure based on different parameters. Some of these models used only drilling parameters to estimate pore pressure. Other models predicted the formation pressure based on log data. All of these models required different trends, such as normal or abnormal pressure trends, to predict the pore pressure. Few researchers have applied artificial intelligence (AI) techniques to predict the formation pressure, and then with only one method or a maximum of two AI methods. The objective of this research is to predict the pore pressure based on both drilling parameters and log data, namely weight on bit, rotary speed, rate of penetration, mud weight, bulk density, porosity and delta sonic time. Real field data are used to predict the formation pressure using five different artificial intelligence (AI) methods: artificial neural networks (ANN), radial basis function (RBF), fuzzy logic (FL), support vector machine (SVM) and functional networks (FN). All AI tools were compared with different empirical models. The AI methods estimated the formation pressure with high accuracy (high correlation coefficient and low average absolute percentage error) and outperformed all previous models. The advantage of the new technique is its simplicity, which is reflected in its estimation of pore pressure without the need for different trends, as compared to other models, which require two different trends (normal or abnormal pressure). Moreover, by comparing the AI tools with each other, the results indicate that SVM has the advantage in pore pressure prediction due to its fast processing speed and high performance (a high correlation coefficient of 0.997 and a low average absolute percentage error of 0.14%). In the end, a new empirical correlation for formation pressure was developed using the ANN method that can estimate pore pressure with high precision (correlation coefficient of 0.998 and average absolute percentage error of 0.17%).
Keywords: artificial intelligence (AI), formation pressure, artificial neural networks (ANN), fuzzy logic (FL), support vector machine (SVM), functional networks (FN), radial basis function (RBF)
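As a hedged sketch of the SVM branch of the comparison, the example below fits a support vector regression to the seven input parameters named in the abstract; the feature values and pressures are invented placeholders rather than the field data used in the study.

```python
# Hedged sketch: support vector regression from drilling/log parameters to
# formation pressure. Values below are invented placeholders, not field data.
import numpy as np
from sklearn.svm import SVR
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

# Columns: weight on bit, rotary speed, ROP, mud weight, bulk density,
#          porosity, delta sonic time  (units omitted in this toy example)
X = np.array([[15, 120, 25, 9.5, 2.35, 0.18, 90],
              [18, 110, 20, 10.2, 2.40, 0.15, 85],
              [12, 130, 30, 9.0, 2.30, 0.22, 95],
              [20, 100, 18, 11.0, 2.45, 0.12, 80]], dtype=float)
y = np.array([5200.0, 6100.0, 4800.0, 6800.0])   # formation pressure, psi

model = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=100.0))
model.fit(X, y)
print(model.predict(X[:1]))   # predicted pressure for the first record
```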
Procedia PDF Downloads 149
6176 Relation between Physical and Mechanical Properties of Concrete Paving Stones Using Neuro-Fuzzy Approach
Authors: Erion Luga, Aksel Seitllari, Kemal Pervanqe
Abstract:
This study investigates the relation between the physical and mechanical properties of concrete paving stones using a neuro-fuzzy approach. For this purpose, 200 samples of concrete paving stones were selected randomly from different sources. The first phase included the determination of the physical properties of the samples, such as water absorption capacity, porosity and unit weight. After that, the indirect tensile strength test and compressive strength test of the samples were performed. In the second phase, an adaptive neuro-fuzzy approach was employed to simulate the nonlinear mapping between the above-mentioned physical properties and the mechanical properties of the paving stones. The neuro-fuzzy models use a Sugeno-type fuzzy inference system. The model parameters were adapted using a hybrid learning algorithm, and the input space was fuzzified by considering grid partitioning. Based on the observed data and the data estimated through the ANFIS models, it is concluded that the neuro-fuzzy system exhibits a satisfactory performance.
Keywords: paving stones, physical properties, mechanical properties, ANFIS
Procedia PDF Downloads 341
6175 Cloud Computing: Major Issues and Solutions
Authors: S. Adhirai Subramaniyam, Paramjit Singh
Abstract:
This paper presents major issues in cloud computing. The paper describes different cloud computing deployment models and cloud service models available in the field of cloud computing. The paper then concentrates on various issues in the field. The issues such as cloud compatibility, compliance of the cloud, standardizing cloud technology, monitoring while on the cloud and cloud security are described. The paper suggests solutions for these issues and concludes that hybrid cloud infrastructure is a real boon for organizations.
Keywords: cloud, cloud computing, mobile cloud computing, private cloud, public cloud, hybrid cloud, SAAS, PAAS, IAAS, cloud security
Procedia PDF Downloads 343
6174 Models Comparison for Solar Radiation
Authors: Djelloul Benatiallah
Abstract:
Due to current high consumption and recent industrial growth, fossil and natural energy supplies like oil, gas, and uranium are being depleted. Due to pollution and climate change, there needs to be a swift switch to renewable energy sources. Research on renewable energy is being done to meet energy needs. Solar energy is one of the renewable resources that can currently meet all of the world's energy needs. In most parts of the world, solar energy is a free and unlimited resource that can be used in a variety of ways, including photovoltaic systems for the generation of electricity and thermal systems for the generation of heat, for example for the residential sector's production of hot water. In this article, we conduct a comparison. The first step entails identifying the two empirical models that will enable us to estimate the daily irradiations on a horizontal plane. We then compare them using the data obtained from measurements made at the Adrar site over the four distinct seasons. According to a comparison of the results obtained by simulating the two models, Model 2 provides a better estimate of the global solar components, with a mean absolute error of less than 7%, a correlation coefficient of more than 0.95, a relative mean bias error of less than 6% in absolute value, and a relative RMSE of less than 10%.
Keywords: solar radiation, renewable energy, fossil, photovoltaic systems
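The comparison metrics quoted above (relative mean absolute error, relative mean bias error, relative RMSE and correlation coefficient) can be computed as in the hedged sketch below; the measured and modelled irradiation values are invented for illustration and are not the Adrar measurements.

```python
# Hedged sketch of the model-comparison metrics; values below are invented.
import numpy as np

measured = np.array([5.1, 5.8, 6.4, 7.0, 6.6, 5.9])      # kWh/m^2/day
modelled = np.array([5.3, 5.6, 6.6, 6.8, 6.9, 5.7])      # kWh/m^2/day

def comparison_metrics(obs, est):
    err = est - obs
    mae = 100.0 * np.mean(np.abs(err)) / np.mean(obs)          # relative MAE (%)
    mbe = 100.0 * np.mean(err) / np.mean(obs)                  # relative MBE (%)
    rmse = 100.0 * np.sqrt(np.mean(err ** 2)) / np.mean(obs)   # relative RMSE (%)
    r = np.corrcoef(obs, est)[0, 1]                            # correlation
    return mae, mbe, rmse, r

print(comparison_metrics(measured, modelled))
```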
Procedia PDF Downloads 79
6173 An Improved Prediction Model of Ozone Concentration Time Series Based on Chaotic Approach
Authors: Nor Zila Abd Hamid, Mohd Salmi M. Noorani
Abstract:
This study is focused on the development of prediction models for ozone concentration time series. The prediction model is built based on a chaotic approach. Firstly, the chaotic nature of the time series is detected by means of a phase space plot and the Cao method. Then, the prediction model is built, and the local linear approximation method is used for forecasting purposes. A traditional autoregressive linear prediction model is also built. Moreover, an improvement of the local linear approximation method is also performed. The prediction models are applied to the hourly ozone time series observed at the benchmark station in Malaysia. Comparison of all models through the calculation of the mean absolute error, root mean squared error and correlation coefficient shows that the one with the improved prediction method is the best. Thus, the chaotic approach is a good approach to use to develop a prediction model for ozone concentration time series.
Keywords: chaotic approach, phase space, Cao method, local linear approximation method
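As a hedged sketch of the chaotic prediction idea, the example below builds a time-delay embedding of a synthetic series and makes a one-step-ahead forecast with a local linear model fitted on nearest neighbours; the embedding dimension, delay and neighbour count are illustrative choices, not the values used in the study.

```python
# Hedged sketch: time-delay embedding plus local linear prediction.
# The synthetic series and all embedding parameters are placeholders.
import numpy as np

rng = np.random.default_rng(1)
series = np.sin(0.3 * np.arange(400)) + 0.05 * rng.standard_normal(400)

def embed(x, dim, tau):
    """Return delay vectors [x(t-(dim-1)*tau), ..., x(t-tau), x(t)]."""
    n = len(x) - (dim - 1) * tau
    return np.column_stack([x[i * tau : i * tau + n] for i in range(dim)])

dim, tau, k = 3, 2, 10
full = embed(series, dim, tau)
targets = series[(dim - 1) * tau + 1:]     # one-step-ahead value for each vector
vectors, query = full[:-1], full[-1]       # last vector has no known target

dists = np.linalg.norm(vectors - query, axis=1)
idx = np.argsort(dists)[:k]                # k nearest neighbours in phase space

# Local linear model fitted on the neighbours (least squares with intercept).
A = np.column_stack([vectors[idx], np.ones(k)])
coef, *_ = np.linalg.lstsq(A, targets[idx], rcond=None)
prediction = np.append(query, 1.0) @ coef
print(f"one-step-ahead prediction: {prediction:.3f}")
```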
Procedia PDF Downloads 331
6172 Data Collection with Bounded-Sized Messages in Wireless Sensor Networks
Authors: Min Kyung An
Abstract:
In this paper, we study the data collection problem in Wireless Sensor Networks (WSNs) adopting two interference models: the graph model and the more realistic physical interference model known as the Signal-to-Interference-plus-Noise Ratio (SINR) model. The main issue of the problem is to compute schedules with the minimum number of timeslots, that is, to compute minimum-latency schedules, such that data from every node can be collected at a sink node without any collision or interference. While existing works studied the problem with unit-sized and unbounded-sized message models, we investigate the problem with the bounded-sized message model and introduce a constant-factor approximation algorithm. To the best of our knowledge, our result is the first for the data collection problem with the bounded-sized message model in both interference models.
Keywords: data collection, collision-free, interference-free, physical interference model, SINR, approximation, bounded-sized message model, wireless sensor networks
Procedia PDF Downloads 221
6171 Switched System Diagnosis Based on Intelligent State Filtering with Unknown Models
Authors: Nada Slimane, Foued Theljani, Faouzi Bouani
Abstract:
The paper addresses the problem of fault diagnosis for systems operating in several modes (normal or faulty) based on state assessment. For this purpose, we use a methodology consisting of three main processes: 1) sequential data clustering, 2) linear model regression, and 3) state filtering. The Kalman Filter (KF) is an algorithm that provides estimates of unknown states using a sequence of I/O measurements. Although it is an efficient technique for state estimation, it presents two main weaknesses. First, it merely predicts states without being able to isolate or classify them according to their different operating modes, whether normal or faulty. To deal with this dilemma, the KF is endowed with an extra clustering step based on a sequential version of the k-means algorithm. Second, to provide state estimation, the KF requires state-space models, which may be unknown. A linear regularized regression is used to identify the required models. To prove its effectiveness, the proposed approach is assessed on a simulated benchmark.
Keywords: clustering, diagnosis, Kalman filtering, k-means, regularized regression
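The sketch below is a hedged illustration of two of the three processes listed above: a regularized (ridge) regression that identifies a linear model from I/O data on an initial window assumed to be nominal, followed by a sequential k-means pass over the one-step residuals as a crude mode indicator. The simulated system, the fault signature and all parameter values are invented for this example.

```python
# Hedged sketch: ridge-regression model identification plus sequential k-means
# clustering of residuals. The simulated system and fault are hypothetical.
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)
u = rng.standard_normal(300)
y = np.zeros(300)
for t in range(1, 300):
    gain = 0.5 if t < 150 else 1.5        # a hypothetical fault changes the gain
    y[t] = 0.8 * y[t - 1] + gain * u[t - 1] + 0.02 * rng.standard_normal()

# Regularized regression identifies y[t] = a*y[t-1] + b*u[t-1] on an initial
# window that is assumed to be in the nominal mode.
X = np.column_stack([y[:-1], u[:-1]])
model = Ridge(alpha=1e-2).fit(X[:100], y[1:101])
print("identified [a, b]:", np.round(model.coef_, 3))

# Sequential (one-sample-at-a-time) k-means on absolute one-step residuals,
# used here as a crude indicator of the operating mode.
residuals = np.abs(y[1:] - model.predict(X))
centers, counts = np.array([0.05, 0.5]), np.array([1, 1])
labels = np.empty(len(residuals), dtype=int)
for i, r in enumerate(residuals):
    j = int(np.argmin(np.abs(centers - r)))
    counts[j] += 1
    centers[j] += (r - centers[j]) / counts[j]
    labels[i] = j
print("dominant cluster, first half :", np.bincount(labels[:150]).argmax())
print("dominant cluster, second half:", np.bincount(labels[150:]).argmax())
```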
Procedia PDF Downloads 182
6170 Application Methodology for the Generation of 3D Thermal Models Using UAV Photogrammetry and Dual Sensors for Mining/Industrial Facilities Inspection
Authors: Javier Sedano-Cibrián, Julio Manuel de Luis-Ruiz, Rubén Pérez-Álvarez, Raúl Pereda-García, Beatriz Malagón-Picón
Abstract:
Structural inspection activities are necessary to ensure the correct functioning of infrastructures. Unmanned Aerial Vehicle (UAV) techniques have become more popular than traditional techniques. Specifically, UAV photogrammetry allows time and cost savings. The development of this technology has permitted the use of low-cost thermal sensors in UAVs. The representation of 3D thermal models with this type of equipment is in continuous evolution. The direct processing of thermal images usually leads to errors and inaccurate results. A methodology is proposed for the generation of 3D thermal models using dual sensors, which involves the application of visible Red-Green-Blue (RGB) and thermal images in parallel. Hence, the RGB images are used as the basis for the generation of the model geometry, and the thermal images are the source of the surface temperature information that is projected onto the model. The mining/industrial facility representations that are obtained can be used for inspection activities.
Keywords: aerial thermography, data processing, drone, low-cost, point cloud
Procedia PDF Downloads 143
6169 Classifying and Predicting Efficiencies Using Interval DEA Grid Setting
Authors: Yiannis G. Smirlis
Abstract:
The classification and prediction of efficiencies in Data Envelopment Analysis (DEA) is an important issue, especially in large-scale problems or when new units frequently enter the set under assessment. In this paper, we contribute to the subject by proposing a grid structure based on interval segmentations of the range of values for the inputs and outputs. Such intervals, combined, define hyper-rectangles that partition the space of the problem. This structure, exploited by Interval DEA models and a dominance relation, acts as a DEA pre-processor, enabling the classification and prediction of efficiency scores without applying any DEA models.
Keywords: data envelopment analysis, interval DEA, efficiency classification, efficiency prediction
Procedia PDF Downloads 164
6168 Optimizing Machine Learning Through Python Based Image Processing Techniques
Authors: Srinidhi. A, Naveed Ahmed, Twinkle Hareendran, Vriksha Prakash
Abstract:
This work reviews some of the advanced image processing techniques for deep learning applications. Object detection by template matching, image denoising, edge detection, and super-resolution modelling are but a few of the tasks. The paper examines these in great detail, given that such tasks are crucial preprocessing steps that increase the quality and usability of image datasets in subsequent deep learning tasks. We review some of the methods for the assessment of image quality, more specifically sharpness, which is crucial to ensure robust model performance. Further, we discuss the development of deep learning models specific to facial emotion detection, age classification, and gender classification, covering the preprocessing techniques that are interrelated with model performance. Conclusions from this study pinpoint the best practices in the preparation of image datasets, targeting the best trade-off between computational efficiency and retaining important image features critical for effective training of deep learning models.
Keywords: image processing, machine learning applications, template matching, emotion detection
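As a hedged sketch of two of the techniques named above, the example below runs template matching on a synthetic image and scores sharpness via the variance of the Laplacian; the image, template and values are invented so the example stays self-contained.

```python
# Hedged sketch: template matching and Laplacian-variance sharpness scoring
# on a synthetic image; real images and thresholds would differ.
import numpy as np
import cv2

# Build a synthetic grayscale image with a textured patch as the "object".
image = np.full((200, 200), 40, dtype=np.uint8)
yy, xx = np.mgrid[0:40, 0:40]
patch = (120 + 2 * xx + yy).astype(np.uint8)
image[80:120, 130:170] = patch
template = patch.copy()

# Template matching: the peak of the correlation map locates the object.
result = cv2.matchTemplate(image, template, cv2.TM_CCOEFF_NORMED)
_, max_val, _, max_loc = cv2.minMaxLoc(result)
print(f"match score {max_val:.2f} at top-left corner {max_loc}")

# Sharpness assessment: variance of the Laplacian (higher = sharper image).
sharpness = cv2.Laplacian(image, cv2.CV_64F).var()
print(f"Laplacian-variance sharpness: {sharpness:.1f}")
```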
Procedia PDF Downloads 13
6167 Time and Cost Prediction Models for Language Classification Over a Large Corpus on Spark
Authors: Jairson Barbosa Rodrigues, Paulo Romero Martins Maciel, Germano Crispim Vasconcelos
Abstract:
This paper presents an investigation of the performance impacts of varying five factors (input data size, node number, cores, memory, and disks) when applying a distributed implementation of Naïve Bayes for text classification of a large corpus on the Spark big data processing framework. Problem: The algorithm's performance depends on multiple factors, and knowing beforehand the effects of each factor becomes especially critical as hardware is priced by time slice in cloud environments. Objectives: To explain the functional relationship between factors and performance and to develop linear predictor models for time and cost. Methods: The solid statistical principles of Design of Experiments (DoE), particularly the randomized two-level fractional factorial design with replications. This research involved 48 real clusters with different hardware arrangements. The metrics were analyzed using linear models for screening, ranking, and measurement of each factor's impact. Results: Our findings include prediction models and show some non-intuitive results about the small influence of cores and the neutrality of memory and disks on total execution time, and the non-significant impact of data input scale on costs, although it notably impacts the execution time.
Keywords: big data, design of experiments, distributed machine learning, natural language processing, spark
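As a hedged sketch of fitting a linear predictor to a two-level factorial design of this kind, the example below fits main effects on an invented 2^3 design over three of the five factors; the coded levels and execution times are placeholders, not the study's measurements.

```python
# Hedged sketch: main-effects linear model on a coded two-level design.
# Only three factors are shown and all responses are invented.
import numpy as np

# Columns: data size, node number, cores (coded -1 = low level, +1 = high level)
design = np.array([[-1, -1, -1],
                   [+1, -1, -1],
                   [-1, +1, -1],
                   [+1, +1, -1],
                   [-1, -1, +1],
                   [+1, -1, +1],
                   [-1, +1, +1],
                   [+1, +1, +1]], dtype=float)
exec_time = np.array([310., 520., 190., 350., 300., 505., 185., 340.])  # seconds

# Linear predictor: time = b0 + b1*size + b2*nodes + b3*cores
X = np.column_stack([np.ones(len(design)), design])
coef, *_ = np.linalg.lstsq(X, exec_time, rcond=None)
print("intercept and main effects:", np.round(coef, 1))
```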
Procedia PDF Downloads 120
6166 The Direct Deconvolution Model for the Large Eddy Simulation of Turbulence
Authors: Ning Chang, Zelong Yuan, Yunpeng Wang, Jianchun Wang
Abstract:
Large eddy simulation (LES) has been extensively used in the investigation of turbulence. LES calculates the grid-resolved large-scale motions and leaves the small scales to be modeled by subfilter-scale (SFS) models. Among the existing SFS models, the deconvolution model has been used successfully in the LES of engineering and geophysical flows. Despite the wide application of deconvolution models, the effects of subfilter-scale dynamics and filter anisotropy on the accuracy of SFS modeling have not been investigated in depth. The results of LES are highly sensitive to the selection of filters and the anisotropy of the grid, which has been overlooked in previous research. In the current study, two critical aspects of LES are investigated. Firstly, we analyze the influence of sub-filter scale (SFS) dynamics on the accuracy of direct deconvolution models (DDM) at varying filter-to-grid ratios (FGR) in isotropic turbulence. An array of invertible filters is employed, encompassing Gaussian, Helmholtz I and II, Butterworth, Chebyshev I and II, Cauchy, Pao, and rapidly decaying filters. The significance of FGR becomes evident, as it acts as a pivotal factor in error control for precise SFS stress prediction. When FGR is set to 1, the DDM models cannot accurately reconstruct the SFS stress due to the insufficient resolution of the SFS dynamics. Notably, prediction capabilities are enhanced at an FGR of 2, resulting in accurate SFS stress reconstruction, except for cases involving the Helmholtz I and II filters. A remarkable precision close to 100% is achieved at an FGR of 4 for all DDM models. Additionally, the exploration is extended to filter anisotropy to address its impact on the SFS dynamics and LES accuracy. Employing the dynamic Smagorinsky model (DSM), the dynamic mixed model (DMM), and the direct deconvolution model (DDM) with anisotropic filters, aspect ratios (AR) ranging from 1 to 16 in the LES filters are evaluated. The findings highlight the DDM's proficiency in accurately predicting SFS stresses under highly anisotropic filtering conditions. High correlation coefficients exceeding 90% are observed in the a priori study for the DDM's reconstructed SFS stresses, surpassing those of the DSM and DMM models. However, these correlations tend to decrease as filter anisotropy increases. In the a posteriori studies, the DDM consistently outperforms the DSM and DMM models across various turbulence statistics, encompassing velocity spectra, probability density functions related to vorticity, SFS energy flux, velocity increments, strain-rate tensors, and SFS stress. It is observed that as filter anisotropy intensifies, the results of the DSM and DMM become worse, while the DDM continues to deliver satisfactory results across all filter-anisotropy scenarios. The findings emphasize the DDM framework's potential as a valuable tool for advancing the development of sophisticated SFS models for the LES of turbulence.
Keywords: deconvolution model, large eddy simulation, subfilter scale modeling, turbulence
Procedia PDF Downloads 75
6165 Bayesian Flexibility Modelling of the Conditional Autoregressive Prior in a Disease Mapping Model
Authors: Davies Obaromi, Qin Yongsong, James Ndege, Azeez Adeboye, Akinwumi Odeyemi
Abstract:
The basic model usually used in disease mapping is the Besag, York and Mollie (BYM) model, which combines the spatially structured and spatially unstructured priors as random effects. The Bayesian Conditional Autoregressive (CAR) model is a disease mapping method that is commonly used for smoothing the relative risk of a disease, as used in the BYM model. This model (CAR), which is also usually assigned as a prior to one of the spatial random effects in the BYM model, successfully uses information from adjacent sites to improve estimates for individual sites. To our knowledge, there are some unrealistic or counter-intuitive consequences on the posterior covariance matrix of the CAR prior for the spatial random effects. In the conventional BYM model, the spatially structured and the unstructured random components cannot be seen independently, which challenges the prior definitions for the hyperparameters of the two random effects. Therefore, the main objective of this study is to construct and utilize an extended Bayesian spatial CAR model for studying tuberculosis patterns in the Eastern Cape Province of South Africa, and then compare its flexibility with some existing CAR models. The results of the study revealed the flexibility and robustness of this alternative extended CAR model in comparison with the commonly used CAR models, using the deviance information criterion. The extended Bayesian spatial CAR model is proved to be a useful and robust tool for disease modeling and as a prior for the structured spatial random effects because of the inclusion of an extra hyperparameter.
Keywords: Besag2, CAR models, disease mapping, INLA, spatial models
Procedia PDF Downloads 279
6164 3D Simulation of Orthodontic Tooth Movement in the Presence of Horizontal Bone Loss
Authors: Azin Zargham, Gholamreza Rouhi, Allahyar Geramy
Abstract:
One of the most prevalent types of alveolar bone loss is horizontal bone loss (HBL), in which the bone height around teeth is reduced homogeneously. In the presence of HBL, the magnitudes of forces during orthodontic treatment should be altered according to the degree of HBL so that the desired tooth movement can be obtained without further bone loss. In order to investigate the appropriate orthodontic force system in the presence of HBL, a three-dimensional numerical model capable of simulating orthodontic tooth movement was developed. The main goal of this research was to evaluate the effect of different degrees of HBL on long-term orthodontic tooth movement. Moreover, the effect of different force magnitudes on orthodontic tooth movement in the presence of HBL was studied. Five three-dimensional finite element models of a maxillary lateral incisor with 0 mm, 1.5 mm, 3 mm, 4.5 mm and 6 mm of HBL were constructed. The long-term orthodontic tooth tipping movements were attained over a 4-week period in an iterative process through the external remodeling of the alveolar bone, with strains in the periodontal ligament as the bone remodeling mechanical stimulus. To obtain long-term orthodontic tooth movement, in each iteration the strains in the periodontal ligament under a 1-N tipping force were first calculated using finite element analysis. Then, bone remodeling and the subsequent tooth movement were computed in post-processing software using a custom-written program. Incisal edge, cervical, and apical area displacements in the models with different alveolar bone heights (0, 1.5, 3, 4.5, 6 mm bone loss) in response to a 1-N tipping force were calculated. The maximum tooth displacement was found to be 2.65 mm at the top of the crown of the model with 6 mm bone loss. The minimum tooth displacement was 0.45 mm at the cervical level of the model with normal bone support. Tooth tipping degrees in response to different tipping force magnitudes were also calculated for models with different degrees of HBL. The degree of tipping tooth movement increased as the force level was increased. This increase was more prominent in the models with smaller degrees of HBL. Using the finite element method and bone remodeling theories, this study indicated that in the presence of HBL, under the same load, long-term orthodontic tooth movement will increase. The simulation also revealed that even though tooth movement increases with increasing force, this increase was only prominent in the models with smaller degrees of HBL, and tooth models with greater degrees of HBL will be less affected by the magnitude of an orthodontic force. Based on our results, the applied force magnitude must be reduced in proportion to the degree of HBL.
Keywords: bone remodeling, finite element method, horizontal bone loss, orthodontic tooth movement
Procedia PDF Downloads 342
6163 Testing for Endogeneity of Foreign Direct Investment: Implications for Economic Policy
Authors: Liwiusz Wojciechowski
Abstract:
Research background: The current knowledge does not give a clear answer to the question of the impact of FDI on productivity. The results of empirical studies are still inconclusive, no matter how extensive and diverse they are in terms of research approaches or the groups of countries analyzed. One should also take into account the possibility that FDI and productivity are linked and that there is a bidirectional relationship between them. This issue is particularly important because, on the one hand, FDI can contribute to changes in productivity in the host country, but on the other hand, its level and dynamics may imply that FDI should be undertaken in a given country. As already mentioned, a two-way relationship between the presence of foreign capital and productivity in the host country should be assumed, taking into consideration the endogenous nature of FDI. Purpose of the article: The overall objective of this study is to determine the causality between foreign direct investment and total factor productivity in the host country in terms of the different relative absorptive capacities across countries. In the classic sense, causality among variables is not always obvious and requires testing, which would facilitate the proper specification of FDI models. The aim of this article is to study the endogeneity of selected macroeconomic variables commonly used in FDI models in the case of the Visegrad countries, the main recipients of FDI in CEE. The findings may be helpful in determining the structure of the actual relationship between variables, in appropriate model estimation, and in forecasting as well as economic policymaking. Methodology/methods: Panel and time-series data techniques, including the GMM estimator, VEC models and causality tests, were utilized in this study. Findings & Value added: The obtained results allow us to confirm the hypothesis of bi-directional causality between FDI and total factor productivity. Although results differ among countries and levels of data aggregation, the implications may be useful for policymakers in designing policies to attract foreign capital.
Keywords: endogeneity, foreign direct investment, multi-equation models, total factor productivity
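As a hedged sketch of the causality-testing step, the example below runs Granger causality tests in both directions between simulated FDI and TFP series using statsmodels; the data, differencing choice and lag order are placeholders, not the Visegrad panel used in the study.

```python
# Hedged sketch: bidirectional Granger-causality check on simulated series.
# In statsmodels, the second column is tested as the candidate cause of the first.
import numpy as np
import pandas as pd
from statsmodels.tsa.stattools import grangercausalitytests

rng = np.random.default_rng(0)
n = 80
fdi = np.cumsum(rng.standard_normal(n))
tfp = 0.4 * np.roll(fdi, 1) + rng.standard_normal(n)   # TFP lags FDI by construction

# First-difference to work with (approximately) stationary series.
data = pd.DataFrame({"tfp": tfp, "fdi": fdi}).diff().dropna()

# Does FDI Granger-cause TFP?
res_fdi_to_tfp = grangercausalitytests(data[["tfp", "fdi"]], maxlag=2)
# Does TFP Granger-cause FDI? (reverse direction)
res_tfp_to_fdi = grangercausalitytests(data[["fdi", "tfp"]], maxlag=2)
```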
Procedia PDF Downloads 197