Search results for: model data

33446 A Human Activity Recognition System Based on Sensory Data Related to Object Usage

Authors: M. Abdullah, Al-Wadud

Abstract:

Sensor-based activity recognition systems usually account only for the sensors that have been activated while an activity is performed. The system combines the conditional probabilities of those sensors to represent different activities and makes its decision on that basis. However, information about the sensors that are not activated can also be of great help in deciding which activity has been performed. This paper proposes an approach in which sensory data related to both the usage and non-usage of objects are utilized to classify activities. Experimental results show the promising performance of the proposed method.
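
As an illustration of the idea, a Bernoulli naïve Bayes classifier naturally exploits both activated and non-activated sensors, since each binary feature contributes evidence whether it is on or off. The sketch below is a minimal, hypothetical example (the sensor layout, activity labels, and data are invented for illustration), not the authors' implementation.

```python
import numpy as np
from sklearn.naive_bayes import BernoulliNB

# Hypothetical binary object-usage data: rows = activity instances,
# columns = sensors (1 = object used, 0 = object not used).
X = np.array([
    [1, 1, 0, 0],   # e.g. kettle + cup     -> "make tea"
    [1, 0, 0, 0],
    [0, 0, 1, 1],   # e.g. toothbrush + tap -> "brush teeth"
    [0, 1, 1, 1],
])
y = np.array(["make tea", "make tea", "brush teeth", "brush teeth"])

# BernoulliNB multiplies P(sensor on | activity) for activated sensors and
# P(sensor off | activity) for non-activated ones, so the absence of usage
# also contributes to the classification decision.
clf = BernoulliNB(alpha=1.0).fit(X, y)
print(clf.predict([[1, 1, 0, 1]]))
print(clf.predict_proba([[1, 1, 0, 1]]))
```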

Keywords: Naïve Bayesian-based classification, activity recognition, sensor data, object-usage model

Procedia PDF Downloads 309
33445 New Analytical Current-Voltage Model for GaN-based Resonant Tunneling Diodes

Authors: Zhuang Guo

Abstract:

In simulations of GaN-based resonant tunneling diodes (RTDs), the traditional Tsu-Esaki formalism fails to predict the peak currents and peak voltages of the simulated current-voltage (J-V) characteristics. The main reason is that, due to the strong internal polarization fields, a two-dimensional electron gas (2DEG) accumulates at the emitter, producing 2D-2D resonant tunneling currents that dominate the total J-V characteristics, whereas the traditional Tsu-Esaki formalism is built on a 3D-2D resonant tunneling mechanism and therefore cannot predict the J-V characteristics correctly. To overcome this shortcoming, we develop a new analytical model for the 2D-2D resonant tunneling currents generated in GaN-based RTDs. Compared with the Tsu-Esaki formalism, the new model makes the following modifications. First, considering the Heisenberg uncertainty principle, it corrects the expression for the density of states around the 2DEG eigenenergy levels at the emitter so that it can predict the half width at half maximum (HWHM) of the resonant tunneling currents. Second, taking into account the effect of bias on the wave vectors at the collector, it modifies the expression for the transmission coefficients, which brings the predicted peak currents closer to the experimental data than the Tsu-Esaki formalism. The new analytical model successfully predicts the J-V characteristics of GaN-based RTDs and reveals in more detail the resonant tunneling mechanisms at work in these devices, which helps in the design and fabrication of high-performance GaN RTDs.
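
For reference, the standard Tsu-Esaki expression integrates a transmission coefficient against the supply function, J = (q m* k_B T)/(2 π² ħ³) ∫ T(E_z) ln[(1 + e^{(E_F − E_z)/k_B T})/(1 + e^{(E_F − E_z − qV)/k_B T})] dE_z. The sketch below evaluates it numerically with an illustrative Lorentzian transmission peak; the material parameters are placeholders, and it does not implement the authors' 2D-2D correction.

```python
import numpy as np

q, kB, hbar = 1.602e-19, 1.381e-23, 1.055e-34
m_star = 0.2 * 9.109e-31        # placeholder GaN effective mass (kg)
T_K = 300.0
E_F = 0.1 * q                   # placeholder emitter Fermi level (J)

def transmission(E_z, E_res=0.3 * q, gamma=5e-3 * q):
    """Illustrative Lorentzian transmission peak around one resonance."""
    return gamma**2 / ((E_z - E_res)**2 + gamma**2)

def tsu_esaki_current(V, n_pts=4000):
    """Tsu-Esaki current density (A/m^2) for a bias V (volts)."""
    E_z = np.linspace(0.0, 1.0 * q, n_pts)
    kT = kB * T_K
    supply = np.log((1 + np.exp((E_F - E_z) / kT)) /
                    (1 + np.exp((E_F - E_z - q * V) / kT)))
    prefactor = q * m_star * kT / (2 * np.pi**2 * hbar**3)
    return prefactor * np.trapz(transmission(E_z) * supply, E_z)

for V in (0.1, 0.3, 0.5):
    print(V, tsu_esaki_current(V))
```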

Keywords: GaN-based resonant tunneling diodes, Tsu-Esaki formalism, 2D-2D resonant tunneling, Heisenberg uncertainty

Procedia PDF Downloads 63
33444 Bank Competition: On the Relationship with Revenue Diversification and Funding Strategy from Selected ASEAN Countries

Authors: Oktofa Y. Sudrajad, Didier V. Caillie

Abstract:

The Association of Southeast Asian Nations (ASEAN) is moving toward the next level of regional integration with the launch of the ASEAN Economic Community (AEC), which started in 2015, eight years after the 2007 declaration calling for its creation. Under this commitment, financial integration in the region is one of the main agenda items to be achieved by 2025. This financial and banking integration will bring a new landscape to competition and business models in the region. This study investigates the effect of competition on bank business models using a sample of 324 banks from seven ASEAN member countries (Cambodia, Indonesia, Malaysia, the Philippines, Singapore, Thailand, and Vietnam). We use a market power approach and the Boone indicator as competition measures, while income diversification and bank funding strategies are employed to represent the bank business model. We also evaluate bank business models by grouping the banks according to their main banking characteristics. We use unbalanced bank-specific annual panel data over the period 2003-2015. Our empirical analysis shows that the banking industries in ASEAN countries adapt their business model by increasing the proportion of non-interest income as the level of competition in the sector increases.
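
One common way to operationalize the Boone indicator is the profit-elasticity regression ln(π) = α + β ln(mc), where a more negative β indicates fiercer competition. The sketch below shows this regression on synthetic panel data with statsmodels; the data and variable names are illustrative, not the study's dataset or exact specification.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(0)
n_banks, n_years = 50, 13                                          # illustrative panel size
mc = rng.lognormal(mean=0.0, sigma=0.3, size=n_banks * n_years)    # marginal-cost proxy
profit = np.exp(2.0 - 1.5 * np.log(mc) + rng.normal(0, 0.2, mc.size))  # synthetic profits

df = pd.DataFrame({"ln_profit": np.log(profit), "ln_mc": np.log(mc)})

# Boone indicator = slope of ln(profit) on ln(marginal cost);
# more negative values indicate stronger competition.
X = sm.add_constant(df["ln_mc"])
boone = sm.OLS(df["ln_profit"], X).fit()
print(boone.params["ln_mc"])
```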

Keywords: bank business model, banking competition, Boone indicator, market power

Procedia PDF Downloads 210
33443 Thermodynamic Properties of Binary Mixtures of 1,2-Dichloroethane with Some Polyethers: DISQUAC Calculations Compared with Dortmund UNIFAC Results

Authors: F. Amireche, I. Mokbel, J. Jose, B. F. Belaribi

Abstract:

Experimental vapour-liquid equilibria (VLE) under isothermal conditions and excess molar Gibbs energies GE are reported for three binary mixtures: 1,2-dichloroethane + ethylene glycol dimethyl ether, + diethylene glycol dimethyl ether, or + diethylene glycol diethyl ether, at ten temperatures ranging from 273 to 353.15 K. A static device was employed for these measurements. The VLE data were reduced using the Redlich-Kister equation, taking into account vapour-phase non-ideality in terms of the second molar virial coefficient. The experimental data were compared with the results predicted by the DISQUAC and Dortmund UNIFAC group contribution models for the total pressures P, the excess molar Gibbs energies GE, and the excess molar enthalpies HE.
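
The Redlich-Kister reduction referred to here expands the excess Gibbs energy as GE/(RT) = x1 x2 Σk Ak (x1 − x2)^k and fits the Ak coefficients to the data. A minimal sketch of such a fit on synthetic GE data is given below; the coefficients, temperature, and data points are invented for illustration.

```python
import numpy as np

R, T = 8.314, 298.15                     # J/(mol K), illustrative temperature

# Hypothetical data: mole fraction x1 and excess molar Gibbs energy (J/mol)
x1 = np.array([0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9])
GE = np.array([120., 215., 285., 330., 345., 330., 285., 210., 115.])

x2 = 1.0 - x1
n_terms = 3
# Design matrix: columns are x1*x2*(x1 - x2)^k for k = 0..n_terms-1
basis = np.column_stack([x1 * x2 * (x1 - x2)**k for k in range(n_terms)])

# Least-squares fit of GE/(RT) gives the Redlich-Kister coefficients A_k
A, *_ = np.linalg.lstsq(basis, GE / (R * T), rcond=None)
print("Redlich-Kister coefficients:", A)
print("fitted GE (J/mol):", basis @ A * R * T)
```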

Keywords: DISQUAC model, Dortmund UNIFAC model, 1,2-dichloroethane, excess molar Gibbs energies GE, polyethers, VLE

Procedia PDF Downloads 258
33442 A Simplified Distribution for Nonlinear Seas

Authors: M. A. Tayfun, M. A. Alkhalidi

Abstract:

The exact theoretical expression describing the probability distribution of nonlinear sea-surface elevations derived from the second-order narrowband model has a cumbersome form that requires numerical computations and is not well suited to theoretical or practical applications. Here, the same narrowband model is re-examined to develop a simpler closed-form approximation suitable for theoretical and practical applications. The salient features of the approximate form are explored, and its relative validity is verified by comparisons with other readily available approximations and with oceanic data.

Keywords: ocean waves, probability distributions, second-order nonlinearities, skewness coefficient, wave steepness

Procedia PDF Downloads 420
33441 Federated Knowledge Distillation with Collaborative Model Compression for Privacy-Preserving Distributed Learning

Authors: Shayan Mohajer Hamidi

Abstract:

Federated learning has emerged as a promising approach for distributed model training while preserving data privacy. However, the challenges of communication overhead, limited network resources, and slow convergence hinder its widespread adoption. On the other hand, knowledge distillation has shown great potential in compressing large models into smaller ones without significant loss in performance. In this paper, we propose an innovative framework that combines federated learning and knowledge distillation to address these challenges and enhance the efficiency of distributed learning. Our approach, called Federated Knowledge Distillation (FKD), enables multiple clients in a federated learning setting to collaboratively distill knowledge from a teacher model. By leveraging the collaborative nature of federated learning, FKD aims to improve model compression while maintaining privacy. The proposed framework utilizes a coded teacher model that acts as a reference for distilling knowledge to the client models. To demonstrate the effectiveness of FKD, we conduct extensive experiments on various datasets and models. We compare FKD with baseline federated learning methods and standalone knowledge distillation techniques. The results show that FKD achieves superior model compression, faster convergence, and improved performance compared to traditional federated learning approaches. Furthermore, FKD effectively preserves privacy by ensuring that sensitive data remains on the client devices and only distilled knowledge is shared during the training process. In our experiments, we explore different knowledge transfer methods within the FKD framework, including Fine-Tuning (FT), FitNet, Correlation Congruence (CC), Similarity-Preserving (SP), and Relational Knowledge Distillation (RKD). We analyze the impact of these methods on model compression and convergence speed, shedding light on the trade-offs between size reduction and performance. Moreover, we address the challenges of communication efficiency and network resource utilization in federated learning by leveraging the knowledge distillation process. FKD reduces the amount of data transmitted across the network, minimizing communication overhead and improving resource utilization. This makes FKD particularly suitable for resource-constrained environments such as edge computing and IoT devices. The proposed FKD framework opens up new avenues for collaborative and privacy-preserving distributed learning. By combining the strengths of federated learning and knowledge distillation, it offers an efficient solution for model compression and convergence speed enhancement. Future research can explore further extensions and optimizations of FKD, as well as its applications in domains such as healthcare, finance, and smart cities, where privacy and distributed learning are of paramount importance.
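
The core ingredient that the framework builds on, knowledge distillation, trains a compact student against the softened outputs of a larger teacher. The sketch below shows a generic distillation loss in PyTorch that a federated client could apply locally; the shapes and hyperparameters are invented, and it is a minimal illustration of distillation, not the FKD algorithm itself.

```python
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels,
                      temperature=4.0, alpha=0.7):
    """Soft-target KL term (scaled by T^2) plus hard-label cross-entropy."""
    soft_student = F.log_softmax(student_logits / temperature, dim=1)
    soft_teacher = F.softmax(teacher_logits / temperature, dim=1)
    kd = F.kl_div(soft_student, soft_teacher, reduction="batchmean") * temperature**2
    ce = F.cross_entropy(student_logits, labels)
    return alpha * kd + (1.0 - alpha) * ce

# Toy usage: each client would compute this loss on its private batch,
# update its compact student model, and share only the distilled model
# (or its updates) with the server, never the raw data.
student_logits = torch.randn(8, 10, requires_grad=True)
teacher_logits = torch.randn(8, 10)
labels = torch.randint(0, 10, (8,))
loss = distillation_loss(student_logits, teacher_logits, labels)
loss.backward()
print(loss.item())
```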

Keywords: federated learning, knowledge distillation, knowledge transfer, deep learning

Procedia PDF Downloads 54
33440 Discovering the Dimension of Abstractness: Structure-Based Model that Learns New Categories and Categorizes on Different Levels of Abstraction

Authors: Georgi I. Petkov, Ivan I. Vankov, Yolina A. Petrova

Abstract:

A structure-based model of category learning and categorization at different levels of abstraction is presented. The model compares different structures and expresses their similarity implicitly in the form of mappings. Based on this similarity, the model either categorizes new targets as members of categories it already has or creates new categories. The model is novel in its use of two threshold parameters to evaluate the structural correspondence. If the similarity between two structures exceeds the higher threshold, a new sub-ordinate category is created. Conversely, if the similarity does not exceed the higher threshold but does exceed the lower one, the model creates a new category at a higher level of abstraction.
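
The two-threshold decision rule described above can be summarized in a few lines; the similarity function and the threshold values in the sketch below are placeholders for illustration, not the model's actual parameters.

```python
def categorize(target, categories, similarity, high=0.8, low=0.4):
    """Two-threshold categorization rule sketched from the abstract.

    `similarity` maps (target, category) to a score in [0, 1];
    the thresholds 0.8 and 0.4 are illustrative placeholders.
    """
    best = max(categories, key=lambda c: similarity(target, c))
    score = similarity(target, best)
    if score > high:
        # Strong structural correspondence: refine the hierarchy with a
        # new sub-ordinate category under the best-matching one.
        return {"action": "new_subordinate", "parent": best}
    elif score > low:
        # Partial correspondence: generalize by creating a category at a
        # higher level of abstraction covering both structures.
        return {"action": "new_superordinate", "sibling": best}
    # Otherwise the target starts a brand-new category.
    return {"action": "new_category"}
```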

Keywords: analogy-making, categorization, learning of categories, abstraction, hierarchical structure

Procedia PDF Downloads 171
33439 Optimization Method of the Number of Berths at Bus Rapid Transit Stations Based on Passenger Flow Demand

Authors: Wei Kunkun, Cao Wanyang, Xu Yujie, Qiao Yuzhi, Liu Yingning

Abstract:

A reasonable design of bus parking spaces can improve the traffic capacity of a station and reduce traffic congestion. In order to reasonably determine the number of berths at BRT (Bus Rapid Transit) stops, this study uses actual BRT station observation data, scheduling data, and passenger flow data, and optimizes the number of station berths from the perspective of balancing supply and demand at the site. Combined with the classical capacity calculation model, the paper first analyzes the important factors affecting the traffic capacity of BRT stops, namely the distribution of BRT stops and the distribution of BRT stop times, using SPSS PRO and MATLAB. Secondly, the berth-number calculation of the classical Highway Capacity Manual (HCM) model is optimized based on the actual passenger demand of the station, and a method suited to determining the actual number of station berths is proposed. Taking Gangding Station on the Zhongshan Avenue BRT corridor in Guangzhou as an example, the calculation method proposed in this paper gives 2 berths for each of sub-stations 1, 2, and 3, which reduces the road space of the station by 33.3% compared with the previous 3 berths per sub-station and returns that space to general (social) vehicles. Therefore, while the passenger flow demand of the BRT station is still met, the road space of the station is reduced, the road is returned to social vehicles, the traffic capacity for social vehicles is improved, and the capacity and efficiency of the BRT corridor system are improved as a whole.
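
As a rough illustration of the supply-demand balance involved, the sketch below estimates the required number of berths from a per-berth bus capacity (dwell plus clearance time) and a design bus flow derived from passenger demand. The numbers and the simplified formula are placeholders, not the HCM procedure or the study's calibrated method.

```python
import math

def berths_required(peak_passengers_per_h, pax_per_bus,
                    dwell_s=30.0, clearance_s=10.0, efficiency=0.8):
    """Very simplified berth estimate: demand in buses/h divided by the
    effective capacity of one berth (buses/h). All values are illustrative."""
    design_bus_flow = peak_passengers_per_h / pax_per_bus    # buses/h needed
    berth_capacity = 3600.0 / (dwell_s + clearance_s)        # buses/h per berth
    return math.ceil(design_bus_flow / (berth_capacity * efficiency))

# Example: 9600 passengers/h at 80 passengers per bus -> 120 buses/h
print(berths_required(9600, 80))   # -> 2 berths under these assumptions
```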

Keywords: urban transportation, bus rapid transit station, HCM model, capacity, number of berths

Procedia PDF Downloads 84
33438 An Optimal Control Method for Reconstruction of Topography in Dam-Break Flows

Authors: Alia Alghosoun, Nabil El Moçayd, Mohammed Seaid

Abstract:

Modeling dam-break flows over non-flat beds requires an accurate representation of the topography, which is the main source of uncertainty in the model. Therefore, developing robust and accurate techniques for reconstructing topography in this class of problems would reduce the uncertainty in the flow system. In many hydraulic applications, experimental techniques have been widely used to measure the bed topography. In practice, experimental work in hydraulics may be very demanding in both time and cost, and computational hydraulics has served as an alternative to laboratory and field experiments. Unlike the forward problem, the inverse problem is used to identify the bed parameters from given experimental data. In this case, the shallow water equations used for modeling the hydraulics need to be rearranged so that the model parameters can be evaluated from measured data; however, this is not always possible, and the approach suffers from stability restrictions. In the present work, we propose an adaptive optimal control technique to numerically identify the underlying bed topography from a given set of free-surface observation data. In this approach, a minimization function is defined to iteratively determine the model parameters. The proposed technique can be interpreted as a fractional-stage scheme. In the first stage, the forward problem is solved to determine the measurable parameters from known data. In the second stage, an adaptive-control Ensemble Kalman Filter is implemented to assimilate the observation data and obtain an accurate estimate of the topography. The main features of this method are, on the one hand, the ability to handle different complex geometries with no need to rearrange the original model into an explicit form and, on the other hand, strong stability for simulations of flows in different regimes containing shocks or discontinuities over any geometry. Numerical results are presented for a dam-break flow problem over a non-flat bed using different solvers for the shallow water equations. The robustness of the proposed method is investigated using different numbers of loops, sensitivity parameters, initial samples, and locations of observations. The obtained results demonstrate the high reliability and accuracy of the proposed techniques.
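
The Ensemble Kalman Filter step mentioned above updates an ensemble of parameter estimates using the sample covariance between parameters and predicted observations. A generic numpy sketch of that analysis step is shown below; the ensemble size, operators, and noise levels are placeholders, not the paper's configuration.

```python
import numpy as np

def enkf_update(ensemble, predicted_obs, obs, obs_cov, rng):
    """Stochastic EnKF analysis step.

    ensemble      : (n_params, N) ensemble of topography parameters
    predicted_obs : (n_obs, N)   forward-model output for each member
    obs           : (n_obs,)     measured free-surface data
    obs_cov       : (n_obs, n_obs) observation-error covariance
    """
    N = ensemble.shape[1]
    A = ensemble - ensemble.mean(axis=1, keepdims=True)         # parameter anomalies
    Y = predicted_obs - predicted_obs.mean(axis=1, keepdims=True)
    P_xy = A @ Y.T / (N - 1)                                     # cross-covariance
    P_yy = Y @ Y.T / (N - 1) + obs_cov
    K = P_xy @ np.linalg.inv(P_yy)                               # Kalman gain
    perturbed = obs[:, None] + rng.multivariate_normal(
        np.zeros(obs.size), obs_cov, size=N).T                   # perturbed observations
    return ensemble + K @ (perturbed - predicted_obs)

# Usage: run the shallow-water solver for each member to get predicted_obs,
# then call enkf_update (with rng = np.random.default_rng(0)) repeatedly
# until the ensemble mean of the bed parameters converges.
```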

Keywords: erodible beds, finite element method, finite volume method, nonlinear elasticity, shallow water equations, stresses in soil

Procedia PDF Downloads 116
33437 A Mean–Variance–Skewness Portfolio Optimization Model

Authors: Kostas Metaxiotis

Abstract:

Portfolio optimization is one of the most important topics in finance. This paper proposes a mean–variance–skewness (MVS) portfolio optimization model. Traditionally, the portfolio optimization problem is solved using the mean–variance (MV) framework. In this study, we formulate the proposed model as a three-objective optimization problem, where the portfolio's expected return and skewness are maximized whereas the portfolio risk is minimized. For solving the proposed three-objective portfolio optimization model, we apply an adapted version of the non-dominated sorting genetic algorithm (NSGA-II). Finally, we use a real dataset from the FTSE-100 to validate the proposed model.
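
The three objectives can be evaluated directly from a return sample: expected return w·μ, variance wᵀΣw, and a skewness term from the third central co-moment tensor. The numpy sketch below computes these objective values (the quantities a genetic algorithm such as NSGA-II would score) using synthetic returns and an example weight vector, purely for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)
returns = rng.normal(0.0005, 0.01, size=(500, 5))    # synthetic daily returns, 5 assets

mu = returns.mean(axis=0)
sigma = np.cov(returns, rowvar=False)
centered = returns - mu
# Third central co-moment tensor M[i, j, k] = E[(r_i-mu_i)(r_j-mu_j)(r_k-mu_k)]
M3 = np.einsum("ti,tj,tk->ijk", centered, centered, centered) / len(returns)

def mvs_objectives(w):
    """Return (expected return, variance, skewness term) for weights w."""
    ret = w @ mu
    var = w @ sigma @ w
    skew = np.einsum("i,j,k,ijk->", w, w, w, M3)      # un-normalized portfolio skewness
    return ret, var, skew

w = np.full(5, 0.2)                                    # equal-weight example portfolio
print(mvs_objectives(w))
```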

Keywords: evolutionary algorithms, portfolio optimization, skewness, stock selection

Procedia PDF Downloads 183
33436 dynr.mi: An R Program for Multiple Imputation in Dynamic Modeling

Authors: Yanling Li, Linying Ji, Zita Oravecz, Timothy R. Brick, Michael D. Hunter, Sy-Miin Chow

Abstract:

Assessing several individuals intensively over time yields intensive longitudinal data (ILD). Even though ILD provide rich information, they also bring other data analytic challenges. One of these is the increased occurrence of missingness with increased study length, possibly under non-ignorable missingness scenarios. Multiple imputation (MI) handles missing data by creating several imputed data sets, and pooling the estimation results across imputed data sets to yield final estimates for inferential purposes. In this article, we introduce dynr.mi(), a function in the R package, Dynamic Modeling in R (dynr). The package dynr provides a suite of fast and accessible functions for estimating and visualizing the results from fitting linear and nonlinear dynamic systems models in discrete as well as continuous time. By integrating the estimation functions in dynr and the MI procedures available from the R package, Multivariate Imputation by Chained Equations (MICE), the dynr.mi() routine is designed to handle possibly non-ignorable missingness in the dependent variables and/or covariates in a user-specified dynamic systems model via MI, with convergence diagnostic check. We utilized dynr.mi() to examine, in the context of a vector autoregressive model, the relationships among individuals’ ambulatory physiological measures, and self-report affect valence and arousal. The results from MI were compared to those from listwise deletion of entries with missingness in the covariates. When we determined the number of iterations based on the convergence diagnostics available from dynr.mi(), differences in the statistical significance of the covariate parameters were observed between the listwise deletion and MI approaches. These results underscore the importance of considering diagnostic information in the implementation of MI procedures.
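
The pooling step that MI relies on combines the per-imputation estimates with Rubin's rules (within- and between-imputation variance). Since dynr.mi() is an R routine, the sketch below only illustrates that pooling arithmetic in Python with made-up estimates; it is not the dynr.mi() implementation.

```python
import numpy as np

# Hypothetical point estimates and squared standard errors of one parameter
# obtained from m = 5 imputed data sets.
estimates = np.array([0.42, 0.45, 0.40, 0.47, 0.44])
variances = np.array([0.010, 0.012, 0.011, 0.009, 0.010])
m = len(estimates)

pooled = estimates.mean()                  # pooled point estimate
W = variances.mean()                       # within-imputation variance
B = estimates.var(ddof=1)                  # between-imputation variance
T = W + (1 + 1 / m) * B                    # total variance (Rubin's rules)
r = (1 + 1 / m) * B / W
df = (m - 1) * (1 + 1 / r) ** 2            # approximate degrees of freedom

print(pooled, np.sqrt(T), df)
```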

Keywords: dynamic modeling, missing data, mobility, multiple imputation

Procedia PDF Downloads 154
33435 Developing a Total Quality Management Model Using Structural Equation Modeling for Indonesian Healthcare Industry

Authors: Jonny, T. Yuri M. Zagloel

Abstract:

This paper presents a Total Quality Management (TQM) model for the Indonesian healthcare industry. Currently, there are nine TQM practices in the healthcare industry, but these practices have not yet been integrated. Therefore, this paper aims to integrate these practices into a single model using Structural Equation Modeling (SEM). After administering about 210 questionnaires to various stakeholders of this industry, a LISREL program was used to evaluate the model's fit. The result confirmed that the model fits, because the p-value was about 0.45, above the required 0.05. This signifies that the nine TQM practices mentioned above can be integrated into an Indonesian healthcare model.

Keywords: healthcare, total quality management (TQM), structural equation modeling (SEM), linear structural relations (LISREL)

Procedia PDF Downloads 276
33434 Managing Incomplete PSA Observations in Prostate Cancer Data: Key Strategies and Best Practices for Handling Loss to Follow-Up and Missing Data

Authors: Madiha Liaqat, Rehan Ahmed Khan, Shahid Kamal

Abstract:

Multiple imputation with delta adjustment is a versatile and transparent technique for addressing univariate missing data in the presence of various missing mechanisms. This approach allows for the exploration of sensitivity to the missing-at-random (MAR) assumption. In this review, we outline the delta-adjustment procedure and illustrate its application for assessing the sensitivity to deviations from the MAR assumption. By examining diverse missingness scenarios and conducting sensitivity analyses, we gain valuable insights into the implications of missing data on our analyses, enhancing the reliability of our study's conclusions. In our study, we focused on assessing logPSA, a continuous biomarker in incomplete prostate cancer data, to examine the robustness of conclusions against plausible departures from the MAR assumption. We introduced several approaches for conducting sensitivity analyses, illustrating their application within the pattern mixture model (PMM) under the delta adjustment framework. This proposed approach effectively handles missing data, particularly loss to follow-up.
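
In its simplest form, the delta adjustment shifts the MAR-imputed values of the incomplete variable by a fixed offset δ and re-fits the analysis over a grid of δ values to see how conclusions move. The sketch below illustrates that loop with a regression-based imputation on synthetic data; the variable names (logPSA and a single age covariate), the imputation model, and the δ grid are illustrative only, not the study's procedure.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(42)
n = 300
age = rng.normal(65, 8, n)                                   # hypothetical covariate
log_psa = 0.03 * age + rng.normal(0, 0.5, n)                 # hypothetical outcome
missing = rng.random(n) < 0.25                               # 25% missing outcomes
df = pd.DataFrame({"age": age, "log_psa": np.where(missing, np.nan, log_psa)})

def pooled_slope(delta, n_imputations=20):
    """Impute under MAR, shift imputed values by delta (MNAR scenario), pool slopes."""
    obs = df.dropna()
    imp_model = sm.OLS(obs["log_psa"], sm.add_constant(obs["age"])).fit()
    slopes = []
    for _ in range(n_imputations):
        filled = df["log_psa"].copy()
        pred = imp_model.predict(sm.add_constant(df.loc[missing, "age"]))
        noise = rng.normal(0, np.sqrt(imp_model.scale), missing.sum())
        filled[missing] = pred + noise + delta                # delta adjustment
        fit = sm.OLS(filled, sm.add_constant(df["age"])).fit()
        slopes.append(fit.params["age"])
    return np.mean(slopes)

for delta in (-0.5, -0.25, 0.0, 0.25, 0.5):                   # sensitivity grid
    print(delta, round(pooled_slope(delta), 4))
```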

Keywords: loss to follow-up, incomplete response, multiple imputation, sensitivity analysis, prostate cancer

Procedia PDF Downloads 71
33433 A Study for Area-level Mosquito Abundance Prediction by Using Supervised Machine Learning Point-level Predictor

Authors: Theoktisti Makridou, Konstantinos Tsaprailis, George Arvanitakis, Charalampos Kontoes

Abstract:

In the literature, data-driven approaches to mosquito abundance prediction rely on supervised machine learning models trained with historical in-situ measurements. The drawback of this approach is that once the model is trained on point-level (specific x, y coordinate) measurements, its predictions again refer to the point level. These point-level predictions reduce the applicability of such solutions, since many early-warning and mitigation applications need predictions for an area, such as a municipality or village. In this study, we apply a data-driven predictive model that relies on public, open satellite Earth Observation and geospatial data and is trained with historical point-level in-situ measurements of mosquito abundance. We then propose a methodology to extend a point-level predictive model to a broader area-level prediction. Our methodology relies on randomly sampling the area of interest in space (similar to a Poisson hard-core process), obtaining the EO and geomorphological information for each sample, making a point-wise prediction for each sample, and aggregating the predictions to represent the average mosquito abundance of the area. We quantify the performance of the transformation from point-level to area-level predictions and analyze which parameters have a positive or negative impact on it. The goal of this study is to propose a methodology that predicts the mosquito abundance of a given area by relying on point-level predictions and to provide qualitative insights into the expected performance of the area-level prediction. We applied our methodology to historical data (of Culex pipiens) from two areas of interest (the Veneto region of Italy and Central Macedonia in Greece). In both cases, the results were consistent: the mean mosquito abundance of a given area can be estimated with accuracy similar to that of the point-level predictor, sometimes even better. The density of the samples used to represent an area has a positive effect on performance, in contrast to the raw number of sampling points, which is not informative about performance unless the size of the area is taken into account. Additionally, the distance between the sampling points and the real in-situ measurements used for training did not strongly affect performance.
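
The aggregation step amounts to scattering sample points across the polygon of interest, predicting abundance at each point, and averaging. A minimal sketch with shapely is shown below; the simple rejection sampling, the placeholder predict_point function, and the toy polygon stand in for the study's EO-driven model and its Poisson hard-core sampling.

```python
import numpy as np
from shapely.geometry import Point, Polygon

rng = np.random.default_rng(7)
area = Polygon([(0, 0), (4, 0), (4, 3), (0, 3)])           # toy area of interest

def predict_point(x, y):
    """Placeholder for the trained point-level abundance model
    (in the study this would use EO and geomorphological features)."""
    return 10 + 2 * np.sin(x) + 1.5 * np.cos(y)

def area_level_prediction(polygon, density_per_unit_area=5.0):
    """Sample points uniformly inside the polygon and average the predictions."""
    n_samples = max(1, int(density_per_unit_area * polygon.area))
    minx, miny, maxx, maxy = polygon.bounds
    preds = []
    while len(preds) < n_samples:                            # rejection sampling
        x, y = rng.uniform(minx, maxx), rng.uniform(miny, maxy)
        if polygon.contains(Point(x, y)):
            preds.append(predict_point(x, y))
    return float(np.mean(preds))

print(area_level_prediction(area))
```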

Keywords: mosquito abundance, supervised machine learning, culex pipiens, spatial sampling, west nile virus, earth observation data

Procedia PDF Downloads 127
33432 Biodiversity and Climate Change: Consequences for Norway Spruce Mountain Forests in Slovakia

Authors: Jozef Mindas, Jaroslav Skvarenina, Jana Skvareninova

Abstract:

Studies of the effects of climate change on Norway spruce (Picea abies) forests have mainly focused on changes in tree species diversity resulting from the ability of species to tolerate temperature and moisture changes, as well as on some effects of changes in the disturbance regime. Here, the tree species diversity changes in spruce forests due to climate change were analyzed via a gap model. A forest gap model is a dynamic model for calculating the basic characteristics of individual forest trees. The input ecological data for the model calculations were taken from permanent research plots located in primeval forests in mountainous regions of Slovakia. The results of regional climate change scenarios for the territory of Slovakia were used, with values taken from the CGCM3.1 (global) model and the KNMI and MPI (regional) models. The model results for the climate change scenarios suggest a shift of the upper forest limit into the region of the present subalpine zone, in the supramontane zone. Norway spruce representation will decrease at the expense of beech and precious broadleaved species (Acer sp., Sorbus sp., Fraxinus sp.). The most significant tree species diversity changes were identified for the upper tree line and the current belt of dwarf pine (Pinus mugo) occurrence. The results are also discussed in relation to the most important disturbances (wind storms, snow and ice storms) and to phenological changes whose consequences are little known. Special attention is given to changes in biomass production in relation to carbon storage in different carbon pools.

Keywords: biodiversity, climate change, Norway spruce forests, gap model

Procedia PDF Downloads 269
33431 The Influences of Nurses’ Satisfaction on the Patient Satisfaction with and Loyalty to Korean University Hospitals

Authors: Sung Hee Ahn, Ju Rang Han

Abstract:

Background: With the increasing importance that healthcare organizations place on patient satisfaction and nurses' job satisfaction, many studies have been conducted. However, no research has examined how nurses' satisfaction with the healthcare organization influences patient satisfaction and loyalty. Purpose: This study aims to conceptualize nurses' satisfaction, patient satisfaction with, and patient loyalty to hospitals using a hypothetical linear structural equation model, and to identify the significance of the path coefficients and the goodness-of-fit indices of the structural equation model. Method: A total of 2,079 nurses and 6,776 patients recruited from 5 university hospitals in South Korea participated in this study. The data on nurses, including ward nurses and outpatient nurses, were collected from June 24th to July 12th at the 204 departments of the 5 hospitals through an online survey. The data on the patients, including both inpatients and outpatients, were collected from September 30th to October 24th, 2013 at the 5 hospitals using a structured questionnaire. Nurses' satisfaction was measured using a scale evaluating internal client satisfaction, which is used in the SSM Health Care System in the US. Patient satisfaction with the hospital and nurses and patient loyalty were measured by assessing the patient's intention to revisit and to recommend the hospital to others using a visual analogue scale. The data were analyzed using SPSS version 21.0 and AMOS version 21.0. Result: The hypothetical model showed a fairly good fit (χ2 = 64.897 (df = 24, p < .001), GFI = .906, AGFI = .823, CFI = .921, NFI = .951, NNFI = .952, RMSEA = .114). The significant path coefficients include the following: 1) nurses' satisfaction has a significant influence on patient satisfaction with nurses; 2) patient satisfaction with nurses has a significant influence on patient satisfaction with the hospital; 3) patient satisfaction with the hospital has a significant influence on patients' revisit intention; and 4) patient satisfaction with the hospital has a significant influence on patients' intention to recommend the hospital. Conclusion: These results provide several practical implications for hospital administrators, who should incorporate ways of improving nurses' and patients' satisfaction with the hospital into their healthcare marketing strategies.

Keywords: linear structural equation model, loyalty, nurse, patient satisfaction

Procedia PDF Downloads 429
33430 Analytical Model to Predict the Shear Capacity of Reinforced Concrete Beams Externally Strengthened with CFRP Composites Conditions

Authors: Rajai Al-Rousan

Abstract:

This paper presents a proposed analytical model for predicting the shear strength of reinforced concrete beams strengthened with CFRP composites as external reinforcement. The proposed analytical model can predict the shear contribution of the CFRP composites of RC beams with an acceptable coefficient of correlation with the tested results. In a comparison of the proposed model with well-known published models (the ACI, Triantafillou, and Colotti models), the ACI model had a wide range of 0.16 to 10.08 for the ratio between tested and predicted ultimate shears at failure, while the Triantafillou model gave an acceptable range of 0.27 to 2.78 for the same ratio. The best prediction of the ultimate shear capacity, in terms of the ratio between tested and predicted values, is observed with the Colotti model, with a range of 0.20 to 1.78. Thus, the contribution of the CFRP composites as external reinforcement can be predicted with high accuracy by using the proposed analytical model.

Keywords: predicting, shear capacity, reinforced concrete, beams, strengthened, externally, CFRP composites

Procedia PDF Downloads 216
33429 Wind Power Density and Energy Conversion in Al-Adwas Ras-Huwirah Area, Hadhramout, Yemen

Authors: Bawadi M. A., Abbad J. A., Baras E. A.

Abstract:

This study was conducted to assess wind energy resources in the Al-Adwas Ras-Huwirah area of Hadhramout Governorate, Yemen. Statistical calculations, the Weibull model, and the SPSS program were used on monthly and annual bases to analyze the wind energy resource, the wind energy conversion, and the turbine efficiency in the selected area. Wind speed data were obtained from NASA over a period of ten years (2010-2019) at a height of 50 m above ground level. Probability distributions were derived from the wind data and their distribution parameters determined, and the probability density function was fitted to the measured probability distributions on an annual basis. This study also involves locating preliminary sites for wind farms using Geographic Information System (GIS) technology, which further helps to maximize the output energy from the most suitable wind turbines at the proposed site.
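
A Weibull analysis of this kind typically fits the shape k and scale c to the wind-speed record and then computes the mean wind power density as P/A = ½ ρ c³ Γ(1 + 3/k). A short scipy sketch on synthetic wind speeds is shown below; the air density and the synthetic data are placeholders, not the NASA record used in the study.

```python
import numpy as np
from scipy import stats
from scipy.special import gamma

rng = np.random.default_rng(3)
# Synthetic hourly wind speeds (m/s) standing in for the measured record
wind_speed = stats.weibull_min.rvs(c=2.0, scale=6.5, size=8760, random_state=rng)

# Fit the Weibull shape k and scale c with the location fixed at zero
k, loc, c = stats.weibull_min.fit(wind_speed, floc=0)

rho = 1.225                                            # air density (kg/m^3), placeholder
power_density = 0.5 * rho * c**3 * gamma(1 + 3 / k)    # mean wind power density (W/m^2)

print(f"k = {k:.2f}, c = {c:.2f} m/s, P/A = {power_density:.1f} W/m^2")
```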

Keywords: wind speed analysis, Yemen wind energy, wind power density, Weibull distribution model

Procedia PDF Downloads 67
33428 Study of a Crude Oil Desalting Plant of the National Iranian South Oil Company in Gachsaran by Using Artificial Neural Networks

Authors: H. Kiani, S. Moradi, B. Soltani Soulgani, S. Mousavian

Abstract:

Desalting/dehydration plants (DDP) are often installed in crude oil production units in order to remove water-soluble salts from the oil stream. In order to optimize this process, the desalting unit should be modeled. In this research, an artificial neural network is used to model the efficiency of the desalting unit as a function of the input parameters. The results of this research show that the model is in good agreement with the experimental data.
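
As an illustration of the modeling approach, the sketch below trains a small feed-forward network to map operating parameters to salt-removal efficiency. The feature names, synthetic data, and network size are hypothetical assumptions; they are not the plant's variables or the study's fitted model.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(11)
n = 200
# Hypothetical operating parameters: temperature (C), wash-water ratio (%),
# demulsifier dose (ppm), mixing-valve pressure drop (bar)
X = np.column_stack([
    rng.uniform(50, 90, n),
    rng.uniform(3, 10, n),
    rng.uniform(5, 30, n),
    rng.uniform(0.5, 2.0, n),
])
# Synthetic desalting efficiency (%) with noise, for illustration only
y = 70 + 0.2 * X[:, 0] + 1.0 * X[:, 1] + 0.1 * X[:, 2] - 2.0 * X[:, 3] + rng.normal(0, 1, n)

model = make_pipeline(StandardScaler(),
                      MLPRegressor(hidden_layer_sizes=(16, 8), max_iter=5000, random_state=0))
model.fit(X, y)
print(model.predict([[75, 6, 15, 1.2]]))
```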

Keywords: desalting unit, crude oil, neural networks, simulation, recovery, separation

Procedia PDF Downloads 423
33427 Effect of Drying on the Concrete Structures

Authors: A. Brahma

Abstract:

The drying of hydraulic materials is unavoidable and leads to significant spontaneous deformations. In this study, we show that it is possible to describe the drying shrinkage of high-performance concrete with a simple expression. A multiple regression model was developed for the prediction of the drying shrinkage of high-performance concrete, and the proposed model was assessed with a set of statistical tests. The model takes into consideration the main mix-design and curing parameters. There was very good agreement between the drying shrinkage predicted by the multiple regression model and the experimental results. The developed model adjusts easily to all types of hydraulic concrete.
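
A multiple regression of this kind can be set up in a few lines; the sketch below regresses shrinkage on a handful of hypothetical mix and curing variables. The predictors, data, and coefficients are invented for illustration and do not reproduce the study's model.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(5)
n = 120
# Hypothetical predictors: water/cement ratio, cement content (kg/m^3),
# relative humidity (%), drying time (days)
X = np.column_stack([
    rng.uniform(0.25, 0.45, n),
    rng.uniform(350, 550, n),
    rng.uniform(40, 80, n),
    rng.uniform(7, 365, n),
])
# Synthetic drying shrinkage (microstrain), illustration only
y = 800 * X[:, 0] + 0.3 * X[:, 1] - 3.0 * X[:, 2] + 0.5 * X[:, 3] + rng.normal(0, 15, n)

reg = LinearRegression().fit(X, y)
print(reg.coef_, reg.intercept_)
print(reg.score(X, y))           # coefficient of determination R^2
```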

Keywords: hydraulic concretes, drying, shrinkage, prediction, modeling

Procedia PDF Downloads 350
33426 Design of Visual Repository, Constraint and Process Modeling Tool Based on Eclipse Plug-Ins

Authors: Rushiraj Heshi, Smriti Bhandari

Abstract:

Master Data Management requires the creation of a central repository, the application of constraints on that repository, and the design of processes to manage the data. Designing the repository, its constraints, and the business processes is a very tedious and time-consuming task for a large enterprise. Hence, visual modeling of repositories, constraints, and processes (workflows) is a critical step in Master Data Management. In this paper, we realize a visual modeling tool for implementing repositories, constraints, and processes as Eclipse plug-ins based on GMF/EMF, following the principles of Model Driven Engineering (MDE).

Keywords: EMF, GMF, GEF, repository, constraint, process

Procedia PDF Downloads 477
33425 The Effects of Different Parameters of Wood Floating Debris on Scour Rate Around Bridge Piers

Authors: Muhanad Al-Jubouri

Abstract:

Local scour is the most important of the several types of scour affecting bridge performance and safety. Even though scour is widespread at bridges, especially during flood seasons, experimental tests cannot be applied to many standard highway bridges. A computational fluid dynamics numerical model was therefore used to calculate local scour and deposition for non-cohesive silt and clear-water conditions near single and double cylindrical piers with the effect of floating debris. FLOW-3D is employed with the RNG turbulence model, the Nilsson bed-load transport equation, and a fine mesh size. The numerical findings for single cylindrical piers correspond quite well with the physical model results. Furthermore, after a parameter-effectiveness investigation of the range of outcomes for the user inputs (bed-load equation, mesh cell size, and turbulence model), the final numerical predictions are compared to the experimental data. When the findings are compared, the error at the deepest point of the scour is about 3.8% for the single-pier example.

Keywords: local scouring, non-cohesive, clear water, computational fluid dynamics, turbulence model, bed-load equation, debris

Procedia PDF Downloads 55
33424 Off-Topic Text Detection System Using a Hybrid Model

Authors: Usama Shahid

Abstract:

Be it written documents, news columns, or students' essays, verifying the content can be a time-consuming task. Apart from spelling and grammar mistakes, the proofreader is also supposed to verify whether the content included in the essay or document is relevant. Irrelevant content in a document or essay is referred to as off-topic text, and in this paper, we address the problem of detecting off-topic text in a document using machine learning techniques. Our study aims to identify off-topic content in a document using an Echo State Network model, and we also compare its results with other models. A previous study used Convolutional Neural Networks and TF-IDF to detect off-topic text. We rearrange the existing datasets, take new classifiers along with new word embeddings, and implement them on existing and new datasets in order to compare the results with the previously existing CNN model.
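
An Echo State Network keeps a fixed random recurrent reservoir and trains only a linear readout on the reservoir states. The minimal numpy sketch below runs sequences of feature vectors through such a reservoir and fits a ridge readout; the feature dimensionality, reservoir size, and toy data are placeholders, not the paper's text-embedding pipeline.

```python
import numpy as np

rng = np.random.default_rng(0)
n_inputs, n_reservoir = 20, 200          # e.g. word-embedding size, reservoir size

# Fixed random weights; the reservoir is rescaled to a spectral radius < 1
W_in = rng.uniform(-0.5, 0.5, (n_reservoir, n_inputs))
W = rng.uniform(-0.5, 0.5, (n_reservoir, n_reservoir))
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))

def reservoir_state(sequence):
    """Run a sequence of input vectors through the reservoir, return final state."""
    x = np.zeros(n_reservoir)
    for u in sequence:
        x = np.tanh(W_in @ u + W @ x)
    return x

# Toy "documents": random embedding sequences with binary on/off-topic labels
docs = [rng.normal(size=(rng.integers(5, 15), n_inputs)) for _ in range(40)]
labels = rng.integers(0, 2, size=40)

states = np.array([reservoir_state(d) for d in docs])
# Ridge-regression readout trained on the reservoir states (the only trained part)
ridge = 1e-2
W_out = np.linalg.solve(states.T @ states + ridge * np.eye(n_reservoir),
                        states.T @ labels)
print((states @ W_out > 0.5).astype(int)[:10])
```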

Keywords: off topic, text detection, echo state network, machine learning

Procedia PDF Downloads 66
33423 2D and 3D Unsteady Simulation of the Heat Transfer in the Sample during Heat Treatment by Moving Heat Source

Authors: Zdeněk Veselý, Milan Honner, Jiří Mach

Abstract:

The aim of this work is to establish 2D and 3D models of the direct unsteady problem of sample heat treatment by a moving heat source, using a computer model based on the finite element method. The complex boundary condition on the heat-loaded sample surface is the essential feature of the task. The computer model describes the heat treatment of the sample as the heat source moves over the sample surface. The work starts from the 2D problem of the sample cross-section as a basic model, and the possibilities of extending it from 2D to 3D are discussed. The effect of adding the third model dimension on the temperature distribution in the sample is shown, and the influence of various model parameters on the sample temperatures is compared. The influence of the heat source motion on the depth of heat treatment is shown for several velocities of the movement. The presented computer model is intended for use in the laser treatment of machine parts.
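
To illustrate the kind of calculation involved, the sketch below solves a 2D transient heat equation with an explicit finite-difference scheme (rather than the finite element method used in the paper) and a Gaussian surface source translating across the domain. The material properties, source parameters, and simple insulated boundaries are placeholders.

```python
import numpy as np

# Placeholder material and grid parameters (not the paper's values)
alpha = 1.0e-5            # thermal diffusivity (m^2/s)
nx, ny, dx = 100, 60, 1e-3
dt = 0.2 * dx**2 / alpha  # satisfies the explicit stability limit dt <= dx^2/(4*alpha)
T = np.full((ny, nx), 20.0)                  # initial temperature (C)

v = 0.002                 # source velocity in x (m/s)
q, r = 2.0e3, 3e-3        # source strength (K/s peak) and radius (m)
x = np.arange(nx) * dx

for step in range(2000):
    # Explicit diffusion update (5-point Laplacian) on interior nodes
    lap = np.zeros_like(T)
    lap[1:-1, 1:-1] = (T[1:-1, 2:] + T[1:-1, :-2] + T[2:, 1:-1] + T[:-2, 1:-1]
                       - 4 * T[1:-1, 1:-1]) / dx**2
    T += dt * alpha * lap
    # Gaussian heat source moving along the top (surface) row of the domain
    xc = v * step * dt
    T[0, :] += dt * q * np.exp(-((x - xc)**2) / (2 * r**2))
    # Insulated (zero-gradient) boundaries as a simple placeholder condition
    T[:, 0], T[:, -1], T[-1, :] = T[:, 1], T[:, -2], T[-2, :]

print(T.max(), T[0].argmax())   # peak temperature and its surface position
```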

Keywords: computer simulation, unsteady model, heat treatment, complex boundary condition, moving heat source

Procedia PDF Downloads 375
33422 Use of Transportation Networks to Optimize The Profit Dynamics of the Product Distribution

Authors: S. Jayasinghe, R. B. N. Dissanayake

Abstract:

Optimization modeling, together with network models and linear programming techniques, is a powerful tool for problem solving and decision making in real-world applications. This study developed a mathematical model to optimize net profit by minimizing transportation cost. The model focuses on the transportation from decentralized production plants to a centralized distribution center and then the distribution to island-wide agencies, with customer satisfaction as a requirement. The company produces 9 types of food items with 82 varieties and 4 types of non-food items with 34 varieties. Among the 6 production plants, 4 are located near the city of Mawanella and the other 2 are located in Galewala and Anuradhapura, which are 80 km and 150 km away from Mawanella, respectively. The warehouse located in Mawanella is the main production plant and also the only distribution plant; it distributes manufactured products to 39 agencies island-wide. The average values and amounts of goods for 6 consecutive months, from May 2013 to October 2013, were collected, and average demand values were calculated. The following constraints are used as the necessary requirements for the optimum condition of the model: there is one source, there are 39 destinations, and supply and demand for all agencies are equal. Using the transport cost per kilometer, the total transport cost was calculated, and the model was formulated using the distances and flows of the distribution. Network optimization and linear programming techniques were used to formulate the model, while Excel Solver was used to solve it. Results showed that the company requires a total transport cost of Rs. 146,943,034.50 to fulfil customer requirements for a month, which is much less than the cost without the model. The model also showed that the company can reduce its transportation cost by 6% when distributing to island-wide customers. The company currently satisfies customer requirements by 85%; this satisfaction can be increased to 97% by using the model. Therefore, this model can be used by other similar companies to reduce their transportation costs.
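
A transportation problem of this kind can be written as a linear program minimizing Σ c_ij x_ij subject to supply and demand constraints. The small scipy sketch below solves a toy two-plant, three-agency instance; the costs, supplies, and demands are invented and far smaller than the 39-agency problem in the study.

```python
import numpy as np
from scipy.optimize import linprog

# Toy instance: 2 plants supplying 3 agencies (unit costs are invented)
cost = np.array([[4.0, 6.0, 9.0],
                 [5.0, 4.0, 7.0]])
supply = np.array([60.0, 40.0])
demand = np.array([30.0, 45.0, 25.0])

n_s, n_d = cost.shape
c = cost.ravel()                                   # decision vars x[i, j] flattened row-wise

# Equality constraints: each plant ships out its supply, each agency receives its demand
A_eq = np.zeros((n_s + n_d, n_s * n_d))
for i in range(n_s):
    A_eq[i, i * n_d:(i + 1) * n_d] = 1.0           # row sums = supply
for j in range(n_d):
    A_eq[n_s + j, j::n_d] = 1.0                    # column sums = demand
b_eq = np.concatenate([supply, demand])

res = linprog(c, A_eq=A_eq, b_eq=b_eq, bounds=(0, None), method="highs")
print(res.x.reshape(n_s, n_d))                     # optimal shipment plan
print(res.fun)                                     # minimum total transport cost
```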

Keywords: mathematical model, network optimization, linear programming

Procedia PDF Downloads 329
33421 Performance and Limitations of Likelihood Based Information Criteria and Leave-One-Out Cross-Validation Approximation Methods

Authors: M. A. C. S. Sampath Fernando, James M. Curran, Renate Meyer

Abstract:

Model assessment, in the Bayesian context, involves evaluation of the goodness-of-fit and the comparison of several alternative candidate models for predictive accuracy and improvements. In posterior predictive checks, the data simulated under the fitted model are compared with the actual data. Predictive model accuracy is estimated using information criteria such as the Akaike information criterion (AIC), the Bayesian information criterion (BIC), the deviance information criterion (DIC), and the Watanabe-Akaike information criterion (WAIC). The goal of an information criterion is to obtain an unbiased measure of out-of-sample prediction error. Since posterior checks use the data twice, once for model estimation and once for testing, a bias correction that penalises model complexity is incorporated in these criteria. Cross-validation (CV) is another method used for examining out-of-sample prediction accuracy. Leave-one-out cross-validation (LOO-CV) is the most computationally expensive variant among the CV methods, as it fits as many models as there are observations. Importance sampling (IS), truncated importance sampling (TIS), and Pareto-smoothed importance sampling (PSIS) are generally used as approximations to the exact LOO-CV; they utilise the existing MCMC results and avoid the expensive re-computation. The reciprocals of the predictive densities calculated over posterior draws for each observation are treated as the raw importance weights. These are in turn used to calculate the approximate LOO-CV of the observation as a weighted average of posterior densities. In IS-LOO, the raw weights are used directly; in contrast, the larger weights are replaced by their modified truncated weights in calculating TIS-LOO and PSIS-LOO. Although information criteria and LOO-CV are unable to reflect the goodness-of-fit in an absolute sense, their differences can be used to measure the relative performance of the models of interest. However, the use of these measures is only valid under specific circumstances. This study has developed 11 models using normal, log-normal, gamma, and Student's t distributions to improve PCR stutter prediction with forensic data. These models comprise four with profile-wide variances, four with locus-specific variances, and three two-component mixture models. The mean stutter ratio in each model is modeled as a locus-specific simple linear regression against a feature of the alleles under study known as the longest uninterrupted sequence (LUS). The use of AIC, BIC, DIC, and WAIC in model comparison has some practical limitations. Even though IS-LOO, TIS-LOO, and PSIS-LOO are considered to be approximations of the exact LOO-CV, the study observed some drastic deviations in the results. However, there are some interesting relationships among the logarithms of pointwise predictive densities (lppd) calculated under WAIC and the LOO approximation methods. The estimated overall lppd is a relative measure that reflects the overall goodness-of-fit of the model. Parallel log-likelihood profiles for the models, conditional on equal posterior variances in lppds, were observed. This study illustrates the limitations of the information criteria in practical model comparison problems. In addition, the relationships among the LOO-CV approximation methods and WAIC, along with their limitations, are discussed. Finally, useful recommendations that may help in practical model comparisons with these methods are provided.
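
Given a matrix of pointwise log-likelihoods over posterior draws, WAIC and the importance-sampling LOO approximation reduce to a few array operations. The numpy sketch below computes lppd, p_WAIC, WAIC, and the (untruncated) IS-LOO estimate from synthetic draws; real applications would replace the raw weights with truncated or Pareto-smoothed ones, and the synthetic log-likelihoods are placeholders.

```python
import numpy as np
from scipy.special import logsumexp

rng = np.random.default_rng(9)
S, n = 2000, 50                      # posterior draws, observations
# Synthetic pointwise log-likelihoods log p(y_i | theta_s), shape (S, n)
log_lik = rng.normal(-1.0, 0.3, size=(S, n))

# WAIC: lppd minus the effective number of parameters p_waic
lppd = np.sum(logsumexp(log_lik, axis=0) - np.log(S))
p_waic = np.sum(np.var(log_lik, axis=0, ddof=1))
waic = -2 * (lppd - p_waic)

# IS-LOO: raw importance weights are the reciprocal predictive densities,
# so elpd_loo_i = -log( mean_s exp(-log_lik[s, i]) )
elpd_loo = np.sum(-(logsumexp(-log_lik, axis=0) - np.log(S)))
loo = -2 * elpd_loo

print(f"lppd = {lppd:.1f}, p_waic = {p_waic:.1f}, WAIC = {waic:.1f}, LOO = {loo:.1f}")
```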

Keywords: cross-validation, importance sampling, information criteria, predictive accuracy

Procedia PDF Downloads 378
33420 Assessing Carbon Stock and Sequestration of Reforestation Species on Old Mining Sites in Morocco Using the DNDC Model

Authors: Nabil Elkhatri, Mohamed Louay Metougui, Ngonidzashe Chirinda

Abstract:

Mining activities have left a legacy of degraded landscapes, prompting urgent efforts for ecological restoration. Reforestation holds promise as a potent tool to rehabilitate these old mining sites, with the potential to sequester carbon and contribute to climate change mitigation. This study focuses on evaluating the carbon stock and sequestration potential of reforestation species in the context of Morocco's mining areas, employing the DeNitrification-DeComposition (DNDC) model. The research is grounded in recognizing the need to connect theoretical models with practical implementation, ensuring that reforestation efforts are informed by accurate and context-specific data. Field data collection encompasses growth patterns, biomass accumulation, and carbon sequestration rates, establishing an empirical foundation for the study's analyses. By integrating the collected data with the DNDC model, the study aims to provide a comprehensive understanding of carbon dynamics within reforested ecosystems on old mining sites. The major findings reveal varying sequestration rates among different reforestation species, indicating the potential for species-specific optimization of reforestation strategies to enhance carbon capture. This research's significance lies in its potential to contribute to sustainable land management practices and climate change mitigation strategies. By quantifying the carbon stock and sequestration potential of reforestation species, the study serves as a valuable resource for policymakers, land managers, and practitioners involved in ecological restoration and carbon management. Ultimately, the study aligns with global objectives to rejuvenate degraded landscapes while addressing pressing climate challenges.

Keywords: carbon stock, carbon sequestration, DNDC model, ecological restoration, mining sites, Morocco, reforestation, sustainable land management.

Procedia PDF Downloads 59
33419 Adaptive Neuro Fuzzy Inference System Model Based on Support Vector Regression for Stock Time Series Forecasting

Authors: Anita Setianingrum, Oki S. Jaya, Zuherman Rustam

Abstract:

Forecasting stock prices is a challenging task due to the complexity of the time series, which arises from the many variables that affect the stock market. Many time series models have been proposed, but they still have some problems: 1) they involve subjectivity in the choice of technical indicators, and 2) they rely on assumptions about the variables, which limits their applicability across datasets. Therefore, this paper studies a novel Adaptive Neuro-Fuzzy Inference System (ANFIS) time series model based on Support Vector Regression (SVR) for forecasting the stock market. To evaluate the performance of the proposed models, stock market transaction data for the TAIEX and HSI from January to December 2015 were collected as the experimental datasets. The results show that the method outperformed its counterparts in terms of accuracy.
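
The SVR component can be illustrated with a simple lag-based regression in which previous closing prices become the features for predicting the next value. The sklearn sketch below does this on a synthetic price series; the lag length, kernel, and data are placeholders, not the paper's ANFIS-SVR hybrid or its index data.

```python
import numpy as np
from sklearn.svm import SVR
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(2)
prices = 100 + np.cumsum(rng.normal(0, 1, 300))        # synthetic daily closing prices

lags = 5
X = np.array([prices[i:i + lags] for i in range(len(prices) - lags)])
y = prices[lags:]
split = 250                                            # simple train/test split in time

model = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=10.0, epsilon=0.1))
model.fit(X[:split], y[:split])

pred = model.predict(X[split:])
rmse = np.sqrt(np.mean((pred - y[split:])**2))
print(f"test RMSE: {rmse:.2f}")
```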

Keywords: ANFIS, fuzzy time series, stock forecasting, SVR

Procedia PDF Downloads 229
33418 An Interpretable Data-Driven Approach for the Stratification of the Cardiorespiratory Fitness

Authors: D.Mendes, J. Henriques, P. Carvalho, T. Rocha, S. Paredes, R. Cabiddu, R. Trimer, R. Mendes, A. Borghi-Silva, L. Kaminsky, E. Ashley, R. Arena, J. Myers

Abstract:

The exploration of clinically relevant predictive models continues to be an important pursuit. Cardiorespiratory fitness (CRF) carries vital clinical information, and as such its accurate prediction is of high importance. Therefore, the aim of the current study was to develop a data-driven model, based on computational intelligence techniques and, in particular, clustering approaches, to predict CRF. Two prediction models were implemented and compared: 1) the traditional Wasserman/Hansen equations; and 2) an interpretable clustering approach. Data used for this analysis were from the 'FRIEND - Fitness Registry and the Importance of Exercise: The National Data Base'; in the present study, a subset of 10,690 apparently healthy individuals was utilized. The accuracy of the models was assessed through the computation of sensitivity, specificity, and geometric mean values. The results show the superiority of the clustering approach for the accurate estimation of CRF (i.e., maximal oxygen consumption).
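
A simple interpretable version of the clustering idea is to group individuals by their characteristics and predict CRF from the mean of each cluster. The sketch below does this with k-means on synthetic data; the features (age, BMI, resting heart rate), the cluster count, and the synthetic CRF values are illustrative assumptions, not the FRIEND variables or the study's algorithm.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(4)
n = 1000
# Hypothetical predictors: age (years), BMI (kg/m^2), resting heart rate (bpm)
X = np.column_stack([rng.uniform(20, 75, n), rng.uniform(19, 35, n), rng.uniform(50, 90, n)])
crf = 55 - 0.3 * X[:, 0] - 0.5 * X[:, 1] - 0.1 * X[:, 2] + rng.normal(0, 3, n)  # synthetic VO2max

scaler = StandardScaler().fit(X)
km = KMeans(n_clusters=8, n_init=10, random_state=0).fit(scaler.transform(X))
cluster_means = np.array([crf[km.labels_ == k].mean() for k in range(8)])

def predict_crf(x_new):
    """Interpretable rule: a new subject inherits the mean CRF of the closest cluster."""
    label = km.predict(scaler.transform(np.atleast_2d(x_new)))[0]
    return cluster_means[label]

print(predict_crf([45, 27, 70]))
```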

Keywords: cardiorespiratory fitness, data-driven models, knowledge extraction, machine learning

Procedia PDF Downloads 270
33417 Identification and Control of the Yaw Motion Dynamics of an Open Frame Underwater Vehicle

Authors: Mirza Mohibulla Baig, Imil Hamda Imran, Tri Bagus Susilo, Sami El Ferik

Abstract:

The paper deals with system identification and control of a nonlinear model of a semi-autonomous unmanned underwater vehicle (UUV). The input-output data are first generated using the experimental values of the model parameters, and these data are then used to compute the estimated parameter values. In this study, we use the semi-autonomous UUV LAURS model, which was developed by the Sensors and Actuators Laboratory at the University of São Paulo. We applied three methods to identify the parameters: the integral method, which is a classical least-squares method, recursive least squares, and weighted recursive least squares. We also apply three different inputs (a step input, a sine-wave input, and a random input) with each identification method. After the identification stage, we investigate the yaw-motion control performance of the nonlinear semi-autonomous UUV using a feedback linearization-based controller. In addition, we compare the performance of the controller with and without an integral part, along with state feedback. Finally, the disturbance rejection and resilience of the controller are tested. The results demonstrate the ability of the system to recover from such faults.
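
As a reference for the identification step, the sketch below implements standard recursive least squares with a forgetting factor on a synthetic linear-in-the-parameters regression; the regressor construction for the actual yaw dynamics and the LAURS parameters are not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(6)
n_params, n_steps = 3, 500
true_theta = np.array([0.8, -0.4, 1.5])                 # parameters to recover (illustrative)

theta = np.zeros(n_params)                              # initial estimate
P = np.eye(n_params) * 1e3                              # initial covariance
lam = 0.99                                              # forgetting factor (weighted RLS)

for _ in range(n_steps):
    phi = rng.normal(size=n_params)                     # regressor vector (e.g. past outputs/inputs)
    y = phi @ true_theta + rng.normal(0, 0.05)          # noisy measurement
    # Recursive least-squares update
    K = P @ phi / (lam + phi @ P @ phi)                 # gain vector
    theta = theta + K * (y - phi @ theta)               # parameter update
    P = (P - np.outer(K, phi) @ P) / lam                # covariance update

print(theta)                                            # should approach true_theta
```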

Keywords: system identification, underwater vehicle, integral method, recursive least square, weighted recursive least square, feedback linearization, integral error

Procedia PDF Downloads 518