Search results for: computational accuracy
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 5482

4162 A Two-Stage Bayesian Variable Selection Method with the Extension of Lasso for Geo-Referenced Data

Authors: Georgiana Onicescu, Yuqian Shen

Abstract:

Due to the complex nature of geo-referenced data, multicollinearity of the risk factors in public health spatial studies is a commonly encountered issue, which leads to low parameter estimation accuracy because it inflates the variance in the regression analysis. To address this issue, we proposed a two-stage variable selection method by extending the least absolute shrinkage and selection operator (Lasso) to the Bayesian spatial setting, investigating the impact of risk factors on health outcomes. Specifically, in stage I, we performed the variable selection using Bayesian Lasso and several other variable selection approaches. Then, in stage II, we performed the model selection with only the variables selected in stage I and compared the methods again. To evaluate the performance of the two-stage variable selection methods, we conducted a simulation study with different distributions for the risk factors, using geo-referenced count data as the outcome and Michigan as the research region. We considered the cases when all candidate risk factors are independently normally distributed, or follow a multivariate normal distribution with different correlation levels. Two other Bayesian variable selection methods, the binary indicator and the combination of binary indicator and Lasso, were considered and compared as alternative methods. The simulation results indicated that the proposed two-stage Bayesian Lasso variable selection method has the best performance for both the independent and dependent cases considered. When compared with the one-stage approach and the other two alternative methods, the two-stage Bayesian Lasso approach provides the highest estimation accuracy in all scenarios considered.
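
A minimal frequentist analogue of the two-stage idea can be sketched in a few lines (the paper's method is Bayesian and spatial, which the sketch below does not reproduce): stage I screens candidate risk factors with a cross-validated Lasso, and stage II refits a count model on the survivors. The synthetic data, the log1p surrogate, and the library choices are all illustrative assumptions.

```python
# Stage I: Lasso screening of candidate variables; Stage II: refit a
# count model on the selected subset. Illustrative only, not the paper's
# Bayesian spatial implementation.
import numpy as np
from sklearn.linear_model import LassoCV, PoissonRegressor

rng = np.random.default_rng(0)
n, p = 200, 10
X = rng.normal(size=(n, p))                     # candidate risk factors
beta = np.array([1.0, -0.8, 0.5] + [0.0] * (p - 3))
y = rng.poisson(np.exp(0.2 + X @ beta))         # geo-referenced-style counts

# Stage I: Lasso screening (log1p as a rough Gaussian surrogate for counts).
stage1 = LassoCV(cv=5).fit(X, np.log1p(y))
selected = np.flatnonzero(np.abs(stage1.coef_) > 1e-6)
print("selected columns:", selected)

# Stage II: refit a Poisson model using only the selected variables.
stage2 = PoissonRegressor(alpha=0.0).fit(X[:, selected], y)
print("stage II coefficients:", stage2.coef_.round(3))
```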

Keywords: Lasso, Bayesian analysis, spatial analysis, variable selection

Procedia PDF Downloads 146
4161 Creep Analysis and Rupture Evaluation of High Temperature Materials

Authors: Yuexi Xiong, Jingwu He

Abstract:

The structural components in an energy facility such as steam turbine machines are operated under high stress and elevated temperature for extended periods of time, and thus creep deformation and creep rupture failure are important issues that need to be addressed in the design of such components. There are numerous creep models used for creep analysis that have both advantages and disadvantages in terms of accuracy and efficiency. The Isochronous Creep Analysis is one of the simplified approaches in which a fully time-dependent creep analysis is avoided and instead an elastic-plastic analysis is conducted at each time point. This approach has been established based on the rupture-dependent creep equations using the well-known Larson-Miller parameter. In this paper, some fundamental aspects of creep deformation and the rupture-dependent creep models are reviewed, and the analysis procedures using isochronous creep curves are discussed. Four rupture failure criteria are examined from creep fundamentals: the Stress Damage, Strain Damage, Strain Rate Damage, and Strain Capability criteria. The accuracy of these criteria in predicting creep life is discussed, and applications of the creep analysis procedures and failure predictions to simple models are presented. In addition, a new failure criterion is proposed to improve the accuracy and effectiveness of the existing criteria. Comparisons are made between the existing criteria and the new one using several example materials. Both strain increase and stress relaxation form a full picture of the creep behaviour of a material under high temperature over an extended period of time, and it is important to bear this in mind when dealing with creep problems. Accordingly, there are two sets of rupture-dependent creep equations. While the rupture strength vs. LMP equation shows how the rupture time depends on the stress level under load-controlled conditions, the strain rate vs. rupture time equation reflects how the rupture time behaves under strain-controlled conditions. Among the four existing failure criteria for rupture life predictions, the Stress Damage and Strain Damage Criteria provide the most conservative and non-conservative predictions, respectively. The Strain Rate and Strain Capability Criteria provide predictions in between that are believed to be more accurate, because the strain rate and strain capability are more determinate quantities than stress for reflecting creep rupture behaviour. A modified Strain Capability Criterion is proposed that makes use of the two sets of creep equations and is therefore considered to be more accurate than the original Strain Capability Criterion.
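
A worked example of the Larson-Miller relation that underlies the isochronous approach: in its common textbook form, LMP = T(C + log10 t_r), with T in kelvin, t_r the rupture time in hours, and C a material constant often taken as 20. The LMP value below is a made-up number standing in for one read off a rupture-strength-vs-LMP curve at the design stress, not material data from the paper.

```python
# Invert LMP = T * (C + log10(t_r)) for the rupture time at several
# temperatures; illustrates how steeply rupture life falls with T.
def rupture_time_hours(lmp, temperature_k, c=20.0):
    """Invert LMP = T * (C + log10(t_r)) for the rupture time t_r."""
    return 10.0 ** (lmp / temperature_k - c)

lmp = 18000.0                      # hypothetical value at the design stress
for t_kelvin in (800.0, 850.0, 900.0):
    print(f"{t_kelvin:.0f} K -> {rupture_time_hours(lmp, t_kelvin):.1f} h")
```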

Keywords: creep analysis, high temperature materials, rupture evaluation, steam turbine machines

Procedia PDF Downloads 291
4160 Numerical Simulation of a Point Absorber Wave Energy Converter Using OpenFOAM in Indian Scenario

Authors: Pooja Verma, Sumana Ghosh

Abstract:

There is a growing need for alternative ways of power generation worldwide. The reasons can be attributed to the limited resources of fossil fuels, environmental pollution, the increasing cost of conventional fuels, and the low conversion efficiency of existing systems. In this context, one of the potential alternatives for power generation is wave energy. However, it is difficult to estimate the amount of electrical energy generated in irregular sea conditions by experimental or analytical methods. Therefore, in this work, a numerical wave tank is developed using the computational fluid dynamics software OpenFOAM, in which the waves2Foam utility is used to carry out the simulation work. The computational domain is a tank of dimensions 5 m x 1.5 m x 1 m with a floating object of dimensions 0.5 m x 0.2 m x 0.2 m. Regular waves are generated at the inlet of the wave tank according to Stokes second-order theory. The main objective of the present study is to validate the numerical model against existing experimental data; it shows a good match with the existing experimental data for floater displacement. The model is then exploited to estimate the energy extracted by the movement of such a point absorber in real sea conditions. Scaled-down wave properties such as wave height and wavelength are used as input parameters, and seasonal variations are also considered.
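
The inlet condition can be illustrated directly: the sketch below evaluates the free-surface elevation of a finite-depth, second-order Stokes wave, solving the linear dispersion relation numerically for the wavenumber. The wave height, period, and depth are assumed values chosen for illustration; the actual simulations run in OpenFOAM/waves2Foam, not Python.

```python
# Second-order Stokes free-surface elevation for an assumed wave.
import numpy as np
from scipy.optimize import brentq

g = 9.81
H, T, h = 0.1, 1.5, 1.0        # wave height (m), period (s), depth (m)
omega = 2.0 * np.pi / T

# Linear dispersion relation: omega^2 = g * k * tanh(k * h).
k = brentq(lambda k: omega**2 - g * k * np.tanh(k * h), 1e-6, 100.0)

def eta(x, t):
    """Second-order Stokes free-surface elevation."""
    theta = k * x - omega * t
    first = (H / 2.0) * np.cos(theta)
    second = (H**2 * k / 16.0) * (np.cosh(k * h) / np.sinh(k * h) ** 3) \
             * (2.0 + np.cosh(2.0 * k * h)) * np.cos(2.0 * theta)
    return first + second

print(eta(x=0.0, t=np.linspace(0.0, T, 5)).round(4))
```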

Keywords: OpenFOAM, numerical wave tank, regular waves, floating object, point absorber

Procedia PDF Downloads 353
4159 A Mixing Matrix Estimation Algorithm for Speech Signals under the Under-Determined Blind Source Separation Model

Authors: Jing Wu, Wei Lv, Yibing Li, Yuanfan You

Abstract:

The separation of speech signals has become a research hotspot in the field of signal processing in recent years, with many applications in teleconferencing, hearing aids, machine speech recognition, and so on. The sounds received are usually noisy, and identifying the sounds of interest and obtaining clear sounds in such an environment is a problem worth exploring, namely the problem of blind source separation. This paper focuses on under-determined blind source separation (UBSS). Sparse component analysis is generally used for the under-determined blind source separation problem. The method is divided into two parts: first, a clustering algorithm is used to estimate the mixing matrix from the observed signals; then the signals are separated based on the estimated mixing matrix. In this paper, the problem of mixing matrix estimation is studied, and an improved algorithm to estimate the mixing matrix for speech signals in the UBSS model is proposed. The traditional potential-function algorithm is not accurate for mixing matrix estimation, especially at low signal-to-noise ratio (SNR). In response to this problem, this paper develops an improved potential-function method to estimate the mixing matrix. The algorithm not only avoids the influence of insufficient prior information in traditional clustering algorithms, but also improves the estimation accuracy of the mixing matrix. This paper takes the mixing of four speech signals into two channels as an example. The simulation results show that the approach in this paper not only improves the accuracy of estimation, but also applies to any mixing matrix.
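
The geometric core of mixing-matrix estimation in underdetermined BSS is easy to sketch: with sparse sources, two-channel observations cluster along the directions of the mixing matrix columns. The sketch below clusters those directions with k-means; the paper's improved potential-function method is not reproduced here, and the data are synthetic.

```python
# Estimate mixing-matrix column directions from sparse two-channel mixtures.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(1)
n_sources, n_samples = 4, 5000
# Sparse sources: mostly zero, occasionally active.
S = rng.laplace(size=(n_sources, n_samples)) * (rng.random((n_sources, n_samples)) < 0.1)
A = rng.normal(size=(2, n_sources))
A /= np.linalg.norm(A, axis=0)             # unit-norm mixing columns
X = A @ S                                  # two observed mixtures

# Keep energetic samples and fold them onto half-plane unit vectors.
keep = np.linalg.norm(X, axis=0) > 0.5
U = X[:, keep] / np.linalg.norm(X[:, keep], axis=0)
U *= np.sign(U[0])

A_hat = KMeans(n_clusters=n_sources, n_init=10).fit(U.T).cluster_centers_.T
A_hat /= np.linalg.norm(A_hat, axis=0)
print("estimated column directions:\n", A_hat.round(3))
```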

Keywords: DBSCAN, potential function, speech signal, the UBSS model

Procedia PDF Downloads 135
4158 Computational Fluid Dynamics Simulation of Reservoir for Dwell Time Prediction

Authors: Nitin Dewangan, Nitin Kattula, Megha Anawat

Abstract:

The hydraulic reservoir is a key component in mobile construction vehicles; most off-road earth-moving construction machinery requires large side-mounted hydraulic reservoirs. Reservoir geometry is often highly non-uniform, because designers use such shapes to utilize the space available under the vehicle. Apart from virtual simulation, there is no way to assess the reservoir's oil space utilization or the validity of the design. Computational fluid dynamics (CFD) helps to predict reservoir space utilization through vortex mapping, path-line plots, and dwell time prediction, to make sure the design is valid and efficient for the vehicle. The dwell time acceptance criterion for effective reservoir design is 15 seconds. This paper describes a hydraulic reservoir simulation carried out using the CFD tool AcuSolve with an automated meshing strategy. Free-surface flow and a moving reference mesh are used to define the oil level inside the reservoir. The first baseline design was not able to meet the acceptance criterion, i.e., its dwell time was below 15 seconds, because the oil entry and exit ports were very close together. CFD was used to redefine the port locations so that the oil dwell time in the reservoir increases. The CFD analysis also suggested a baffle design for effective space utilization. The final design proposed through the CFD analysis was used for physical validation on the machine.

Keywords: reservoir, turbulence model, transient model, level set, free-surface flow, moving frame of reference

Procedia PDF Downloads 153
4157 Computational Fluid Dynamic Modeling of Mixing Enhancement by Stimulation of Ferrofluid under Magnetic Field

Authors: Neda Azimi, Masoud Rahimi, Faezeh Mohammadi

Abstract:

Computational fluid dynamics (CFD) simulation was performed to investigate the effect of ferrofluid stimulation on the hydrodynamic and mass transfer characteristics of two immiscible liquid phases in a Y-micromixer. The main purpose of this work was to develop a numerical model that is able to simulate the hydrodynamics of ferrofluid flow under a magnetic field and determine its effect on mass transfer characteristics. A uniform external magnetic field was applied perpendicular to the flow direction. The volume of fluid (VOF) approach was used for simulating the multiphase flow of the ferrofluid and the two immiscible liquids. The geometric reconstruction scheme (Geo-Reconstruct), based on piecewise linear interpolation (PLIC), was used for reconstruction of the interface in the VOF approach. The mass transfer rate was defined via an equation as a function of the mass concentration gradient of the transported species and added into the phase interaction panel using a user-defined function (UDF). The magnetic field was solved numerically by the Fluent MHD module, based on solving the magnetic induction equation. The CFD results were validated against experimental data, and good agreement was achieved; the maximum relative error for extraction efficiency was about 7.52%. It was shown that ferrofluid actuation by a magnetic field can be considered an efficient mixing agent for liquid-liquid two-phase mass transfer in microdevices.

Keywords: CFD modeling, hydrodynamic, micromixer, ferrofluid, mixing

Procedia PDF Downloads 197
4156 Influence of Alcohol Consumption on Attention in Wistar Albino Rats

Authors: Adekunle Adesina, Dorcas Adesina

Abstract:

This research investigated the influence of alcohol consumption on attention in Wistar albino rats. It was designed to test whether or not alcohol consumption affected visual and auditory attention. The sample comprised 3 male and 3 female albino rats, which were randomly assigned to three groups (one male and one female each): 1, 2, and 3. Experimental group 1 received 4 ml of alcohol by cannula twice daily (morning and evening). Experimental group 2 received 2 ml of alcohol by cannula twice daily (morning and evening). The third group, the control group, received only water (placebo). Treatment lasted 2 days. Three hypotheses were advanced and tested in the study. Hypothesis 1 stated that there would be no significant difference between the response speed of albino rats that consumed alcohol and those that consumed water on visual attention using the 5-CSRTT. This was confirmed (F(2, 9) = 0.72, p < .05). Hypothesis 2 stated that albino rats that consumed alcohol would perform better than those that consumed water on auditory accuracy using the 5-CSRTT. This was tested but not confirmed (F(2, 9) = 2.10, p < .05). The third hypothesis, which stated that female albino rats that consumed alcohol would not perform better than male albino rats that consumed alcohol on auditory accuracy using the 5-CSRTT, was tested and not confirmed (t(4) = 0.17, p < .05). Data were analyzed using one-way ANOVA and the t-test for independent measures. It is therefore recommended that government policies and programs be directed at reducing to the barest minimum the rate of alcohol consumption, especially among males, as it is detrimental to auditory attention.
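
The named tests are straightforward to reproduce with SciPy; the sketch below runs a one-way ANOVA across three groups (matching the F(2, 9) degrees of freedom reported above) and an independent-samples t-test. The attention scores are invented placeholders, not the study's data.

```python
# One-way ANOVA and independent t-test on hypothetical attention scores.
from scipy import stats

group_4ml = [12, 10, 9, 11]    # hypothetical scores, 4 ml alcohol group
group_2ml = [13, 12, 14, 12]   # 2 ml alcohol group
control   = [15, 14, 13, 16]   # water (placebo) group

f, p = stats.f_oneway(group_4ml, group_2ml, control)
print(f"one-way ANOVA: F(2, 9) = {f:.2f}, p = {p:.3f}")

t, p_t = stats.ttest_ind(group_4ml, control)
print(f"independent t-test (4 ml vs control): t = {t:.2f}, p = {p_t:.3f}")
```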

Keywords: alcohol, attention, influence, rats, Wistar

Procedia PDF Downloads 267
4155 An Enhanced Approach in Validating Analytical Methods Using Tolerance-Based Design of Experiments (DoE)

Authors: Gule Teri

Abstract:

The effective validation of analytical methods forms a crucial component of pharmaceutical manufacturing. However, traditional validation techniques can occasionally fail to fully account for inherent variation within datasets, which may result in inconsistent outcomes. This deficiency in validation accuracy is particularly noticeable when quantifying low concentrations of active pharmaceutical ingredients (APIs), excipients, or impurities, introducing a risk to the reliability of the results and, subsequently, to the safety and effectiveness of the pharmaceutical products. In response to this challenge, we introduce an enhanced, tolerance-based Design of Experiments (DoE) approach for the validation of analytical methods. This approach measures variability with reference to tolerance or design margins, enhancing the precision and trustworthiness of the results. It provides a systematic, statistically grounded validation technique and an essential tool for industry professionals aiming to guarantee the accuracy of their measurements, particularly for low-concentration components. By incorporating this method, pharmaceutical manufacturers can substantially advance their validation processes and thereby improve the overall quality and safety of their products. This paper details the development, application, and advantages of the tolerance-based DoE approach and demonstrates its effectiveness using High-Performance Liquid Chromatography (HPLC) data for verification. It also discusses the potential implications and future applications of the method in enhancing pharmaceutical manufacturing practices and outcomes.

Keywords: tolerance-based design, design of experiments, analytical method validation, quality control, biopharmaceutical manufacturing

Procedia PDF Downloads 81
4154 Data Augmentation for Early-Stage Lung Nodules Using Deep Image Prior and Pix2pix

Authors: Qasim Munye, Juned Islam, Haseeb Qureshi, Syed Jung

Abstract:

Lung nodules are commonly identified in computed tomography (CT) scans by experienced radiologists at a relatively late stage. Early diagnosis can greatly increase survival. We propose using a pix2pix conditional generative adversarial network to generate realistic images simulating early-stage lung nodule growth. We applied deep image prior to 2341 slices from 895 computed tomography (CT) scans from the Lung Image Database Consortium (LIDC) dataset to generate pseudo-healthy medical images. From these images, 819 were chosen to train a pix2pix network. We observed that for most of the images, the pix2pix network was able to generate images in which the nodule increased in size and intensity across epochs. To evaluate the images, 400 generated images were chosen at random and shown to a medical student alongside their corresponding original images. Of these 400 generated images, 384 were judged satisfactory, meaning they resembled a nodule and were visually similar to the corresponding image. We believe that this generated dataset could be used as training data for neural networks that detect lung nodules at an early stage, or to improve the accuracy of such networks. This is particularly significant as datasets capturing the growth of early-stage nodules are scarce. This project shows that the combination of deep image prior and generative models could potentially open the door to creating larger datasets than currently possible and has the potential to increase the accuracy of medical classification tasks.

Keywords: medical technology, artificial intelligence, radiology, lung cancer

Procedia PDF Downloads 72
4153 Effect of Knowledge of Bubble Point Pressure on Estimating PVT Properties from Correlations

Authors: Ahmed El-Banbi, Ahmed El-Maraghi

Abstract:

PVT properties are needed as input data in all reservoir, production, and surface facilities engineering calculations. In the absence of PVT reports on valid reservoir fluid samples, engineers rely on PVT correlations to generate the required PVT data. The accuracy of PVT correlations varies, and no correlation group has been found to provide accurate results for all oil types. The effect of inaccurate PVT data can be significant in engineering calculations and is well documented in the literature. Bubble point pressure can sometimes be obtained from external sources. In this paper, we show how to utilize the known bubble point pressure to improve the accuracy of calculated PVT properties from correlations. We conducted a systematic study using around 250 reservoir oil samples to quantify the effect of pre-knowledge of bubble point pressure. The samples spanned a wide range of oils, from very volatile oils to black oils and all the way to low-GOR oils. A method for shifting both undersaturated and saturated sections of the PVT properties curves to the correct bubble point is explained. Seven PVT correlation families were used in this study. All PVT properties (e.g., solution gas-oil ratio, formation volume factor, density, viscosity, and compressibility) were calculated using the correct bubble point pressure and the correlation-estimated bubble point pressure. Comparisons between the calculated PVT properties and actual laboratory-measured values were made. It was found that pre-knowledge of bubble point pressure and use of the shifting technique presented in the paper improved the correlation-estimated values by 10% to more than 30%. The greatest improvement was seen in the solution gas-oil ratio and formation volume factor.
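
An illustrative sketch of the shifting idea, under the simplifying assumption that rescaling the saturated branch along the pressure axis moves the break from the correlation's bubble point to the known one; the paper's actual procedure for the two sections may differ in detail, and all numbers below are hypothetical.

```python
# Shift a correlation-generated PVT curve so its break lands at the
# known bubble point; toy data only.
import numpy as np

def shift_saturated_branch(p, prop, pb_corr, pb_true):
    """Rescale saturated-branch pressures (p <= pb_corr) by pb_true/pb_corr."""
    p = np.asarray(p, dtype=float)
    shifted = p.copy()
    sat = p <= pb_corr
    shifted[sat] *= pb_true / pb_corr
    return shifted, np.asarray(prop)

p = np.linspace(500.0, 5000.0, 10)                      # psia
pb_corr, pb_true = 3000.0, 3400.0                       # psia
rs = np.where(p <= pb_corr, 0.25 * p, 0.25 * pb_corr)   # toy solution GOR, scf/STB
p_shifted, rs_shifted = shift_saturated_branch(p, rs, pb_corr, pb_true)
print(np.column_stack([p, p_shifted, rs_shifted]).round(1))
```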

Keywords: PVT data, PVT properties, PVT correlations, bubble point pressure

Procedia PDF Downloads 63
4152 Automatic Staging and Subtype Determination for Non-Small Cell Lung Carcinoma Using PET Image Texture Analysis

Authors: Seyhan Karaçavuş, Bülent Yılmaz, Ömer Kayaaltı, Semra İçer, Arzu Taşdemir, Oğuzhan Ayyıldız, Kübra Eset, Eser Kaya

Abstract:

In this study, our goal was to perform tumor staging and subtype determination automatically using different texture analysis approaches for a very common cancer type, i.e., non-small cell lung carcinoma (NSCLC). In particular, we introduce a texture analysis approach, Laws' texture filters, to be used in this context for the first time. The 18F-FDG PET images of 42 patients with NSCLC were evaluated. The number of patients for each tumor stage, i.e., I-II, III, or IV, was 14. The patients had ~45% adenocarcinoma (ADC) and ~55% squamous cell carcinoma (SqCC). The MATLAB technical computing language was employed to extract 51 features using first-order statistics (FOS), the gray-level co-occurrence matrix (GLCM), the gray-level run-length matrix (GLRLM), and Laws' texture filters. The feature selection method employed was sequential forward selection (SFS). The selected textural features were used in automatic classification by k-nearest neighbors (k-NN) and support vector machines (SVM). In the automatic classification of tumor stage, the accuracy was approximately 59.5% with the k-NN classifier (k=3) and 69% with SVM (one vs. one paradigm), using 5 features. In the automatic classification of tumor subtype, the accuracy was around 92.7% with SVM (one vs. one). Texture analysis of FDG-PET images might be used, in addition to metabolic parameters, as an objective tool to assess tumor histopathological characteristics and for automatic classification of tumor stage and subtype.
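
A compact Python analogue of the texture pipeline (the study itself used MATLAB): GLCM features extracted from image patches feed an SVM with the one-vs-one scheme. Random arrays stand in for the PET tumor ROIs, and Laws' filters, GLRLM features, and sequential forward selection are omitted for brevity.

```python
# GLCM texture features + SVM (one-vs-one) on stand-in image patches.
import numpy as np
from skimage.feature import graycomatrix, graycoprops
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

rng = np.random.default_rng(2)

def glcm_features(patch):
    glcm = graycomatrix(patch, distances=[1], angles=[0], levels=64,
                        symmetric=True, normed=True)
    return [graycoprops(glcm, prop)[0, 0]
            for prop in ("contrast", "homogeneity", "energy", "correlation")]

# Toy dataset: 42 "patients", 3 stage labels, 32x32 ROIs with 64 gray levels.
X = []
y = rng.integers(0, 3, size=42)
for label in y:
    patch = (rng.random((32, 32)) * (label + 1) * 20).astype(np.uint8) % 64
    X.append(glcm_features(patch))

clf = SVC(kernel="rbf", decision_function_shape="ovo")
print("CV accuracy:", cross_val_score(clf, np.array(X), y, cv=5).mean())
```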

Keywords: cancer stage, cancer cell type, non-small cell lung carcinoma, PET, texture analysis

Procedia PDF Downloads 326
4151 Evaluation of Machine Learning Algorithms and Ensemble Methods for Prediction of Students’ Graduation

Authors: Soha A. Bahanshal, Vaibhav Verdhan, Bayong Kim

Abstract:

Six-year graduation rates at colleges are becoming an increasingly essential indicator for incoming students and for university rankings. Predicting student graduation is extremely beneficial to schools and has huge potential for targeted intervention. It is important for educational institutions since it enables the development of strategic plans that will assist or improve students' performance in achieving their degrees on time (GOT). Machine learning techniques offer a first step and a helping hand in extracting useful information from these data and gaining insights into the prediction of students' progress and performance. Data analysis and visualization techniques are applied to understand and interpret the data. The data used for the analysis contain science-major students who graduated within 6 years as of the academic year 2017-2018. This analysis can be used to predict the graduation of students in the next academic year. Different predictive models, such as logistic regression, decision trees, support vector machines, random forest, naïve Bayes, and KNeighborsClassifier, are applied to predict whether a student will graduate. These classifiers were evaluated with 5-fold cross-validation, and their performance was compared based on accuracy. The results indicated that an ensemble classifier achieves better accuracy, about 91.12%. This GOT prediction model would hopefully be useful to university administration and academics in developing measures for assisting and boosting students' academic performance and ensuring they graduate on time.
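
A sketch of the comparison protocol: the same classifier families evaluated with 5-fold cross-validation on accuracy, plus a simple soft-voting ensemble. The synthetic dataset stands in for the (non-public) enrollment data, so the numbers it prints will not match the study's.

```python
# Compare several classifiers with 5-fold CV, then a voting ensemble.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier, VotingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.naive_bayes import GaussianNB
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier

# Stand-in for the graduated-on-time dataset.
X, y = make_classification(n_samples=1000, n_features=12, random_state=0)

models = {
    "logreg": LogisticRegression(max_iter=1000),
    "tree": DecisionTreeClassifier(),
    "svm": SVC(probability=True),
    "rf": RandomForestClassifier(),
    "nb": GaussianNB(),
    "knn": KNeighborsClassifier(),
}
for name, model in models.items():
    print(name, cross_val_score(model, X, y, cv=5).mean().round(4))

ensemble = VotingClassifier(list(models.items()), voting="soft")
print("ensemble", cross_val_score(ensemble, X, y, cv=5).mean().round(4))
```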

Keywords: prediction, decision trees, machine learning, support vector machine, ensemble model, student graduation, GOT graduate on time

Procedia PDF Downloads 73
4150 Path-Tracking Controller for Tracked Mobile Robot on Rough Terrain

Authors: Toshifumi Hiramatsu, Satoshi Morita, Manuel Pencelli, Marta Niccolini, Matteo Ragaglia, Alfredo Argiolas

Abstract:

Automation technologies for agricultural fields are needed to reduce labor. One of the most relevant problems in automated agriculture is controlling the robot along a predetermined path in the presence of rough terrain or inclined ground. Unfortunately, disturbances originating from interaction with the ground, such as slipping, make it quite difficult to achieve the required accuracy: in general, the robot is required to stay within 5-10 cm of the predetermined path. Moreover, the lateral velocity caused by gravity on an inclined field also contributes to slipping. In this paper, a path-tracking controller is presented for tracked mobile robots moving on the rough terrain of inclined fields such as vineyards. The controller is composed of a disturbance observer and an adaptive controller based on the kinematic model of the robot. The disturbance observer measures the difference between the measured and the reference yaw rate and linear velocity in order to estimate slip. The adaptive controller then adapts the 'virtual' parameters of the kinematic model, the Instantaneous Centers of Rotation (ICRs), and the target angular velocity reference is computed according to the adapted parameters. This solution allows the effects of slip to be estimated without making the model too complex. Finally, the effectiveness of the proposed solution is tested in a simulation environment.
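
A toy illustration of the adaptation idea: treat the effective track width implied by the ICRs as a "virtual" parameter and nudge it so the kinematic model's predicted yaw rate matches the measured one. The gradient-style update, the gains, and the signals below are invented for illustration and are not the paper's observer or controller.

```python
# Adapt a "virtual" kinematic parameter (effective track width) so the
# model's predicted yaw rate matches the measured yaw rate under slip.
def adapt_effective_width(b_eff, v_left, v_right, yaw_rate_meas,
                          gain=5.0, dt=0.02):
    """One update step: reconcile predicted and measured yaw rate."""
    yaw_rate_pred = (v_right - v_left) / b_eff     # skid-steer kinematics
    error = yaw_rate_meas - yaw_rate_pred          # slip shows up here
    # Gradient step on 0.5*error**2; d(error)/d(b_eff) = (vr - vl)/b_eff**2.
    b_eff -= gain * dt * error * (v_right - v_left) / b_eff**2
    return max(b_eff, 0.1)                         # keep the parameter sane

b = 0.6                                            # nominal track width (m)
for _ in range(1000):                              # constant turn with slip
    b = adapt_effective_width(b, v_left=0.4, v_right=0.6, yaw_rate_meas=0.25)
print("adapted effective width (m):", round(b, 3)) # drifts toward 0.2/0.25 = 0.8
```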

Keywords: the agricultural robot, autonomous control, path-tracking control, tracked mobile robot

Procedia PDF Downloads 172
4149 Efficient Wind Fragility Analysis of Concrete Chimney under Stochastic Extreme Wind Incorporating Temperature Effects

Authors: Soumya Bhattacharjya, Avinandan Sahoo, Gaurav Datta

Abstract:

Wind fragility analysis of chimneys is often carried out disregarding temperature effects. However, the combined effect of wind and temperature is the most critical limit state for chimney design. Hence, in the present paper, an efficient fragility analysis for concrete chimneys is explored under combined wind and temperature effects. Wind time histories are generated from Davenport's power spectral density function using the weighted amplitude wave superposition technique. Fragility analysis is often carried out in a full Monte Carlo simulation framework, which requires extensive computational time. Thus, in the present paper, an efficient adaptive metamodelling technique is adopted to judiciously approximate the limit state function, which is subsequently used in the simulation framework. This saves substantial computational time and makes the approach computationally efficient. Uncertainty in wind speed, wind-load-related parameters, and resistance-related parameters is considered. The results of the full simulation approach, the conventional metamodelling approach, and the proposed adaptive metamodelling approach are compared, and the effect of disregarding temperature in wind fragility analysis is highlighted.
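
The wind time-history generation can be sketched with the standard spectral-representation recipe: superpose cosines with amplitudes sqrt(2 S(f) Δf) and random phases, where S(f) is the Davenport spectrum. The mean wind speed and surface drag coefficient below are assumed, illustrative values.

```python
# Synthesize a fluctuating wind speed record from the Davenport spectrum.
import numpy as np

def davenport_psd(f, v10=30.0, kappa=0.005):
    """Davenport along-wind spectrum S(f); f in Hz, v10 = mean speed at 10 m."""
    x = 1200.0 * f / v10
    return 4.0 * kappa * v10**2 * x**2 / (f * (1.0 + x**2) ** (4.0 / 3.0))

rng = np.random.default_rng(3)
f = np.linspace(0.001, 2.0, 2000)           # frequency grid (Hz)
df = f[1] - f[0]
amp = np.sqrt(2.0 * davenport_psd(f) * df)  # component amplitudes
phases = rng.uniform(0.0, 2.0 * np.pi, f.size)

t = np.linspace(0.0, 600.0, 2001)           # 10-minute record, 0.3 s step
u = amp @ np.cos(2.0 * np.pi * np.outer(f, t) + phases[:, None])
print("std of simulated fluctuation (m/s):", u.std().round(2))
```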

Keywords: adaptive metamodelling technique, concrete chimney, fragility analysis, stochastic extreme wind load, temperature effect

Procedia PDF Downloads 215
4148 On the Added Value of Probabilistic Forecasts Applied to the Optimal Scheduling of a PV Power Plant with Batteries in French Guiana

Authors: Rafael Alvarenga, Hubert Herbaux, Laurent Linguet

Abstract:

The uncertainty concerning the power production of intermittent renewable energy is one of the main barriers to the integration of such assets into the power grid. Efforts have thus been made to develop methods to quantify this uncertainty, allowing producers to ensure more reliable and profitable engagements related to their future power delivery. Even though a diversity of probabilistic approaches has been proposed in the literature with promising results, the added value of adopting such methods for scheduling intermittent power plants is still unclear. In this study, the profits obtained by a decision-making model used to optimally schedule an existing PV power plant connected to batteries are compared when the model is fed with deterministic and probabilistic forecasts generated with two of the most recent methods proposed in the literature. Moreover, deterministic forecasts with different accuracy levels were used in the experiments, testing the utility of probabilistic methods and their capability to model the progressively increasing uncertainty. Even though probabilistic approaches are well developed in the recent literature, the results of the case study show that deterministic forecasts still provide the best performance if accurate, ensuring a gain of 14% in final profits compared to the average performance of probabilistic models conditioned on the same forecasts. When the accuracy of the deterministic forecasts progressively decreases, probabilistic approaches become competitive options, until they completely outperform deterministic forecasts when the latter are very inaccurate, generating 73% more profit in the case considered.

Keywords: PV power forecasting, uncertainty quantification, optimal scheduling, power systems

Procedia PDF Downloads 87
4147 Thermal Energy Storage Based on Molten Salts Containing Nano-Particles: Dispersion Stability and Thermal Conductivity Using Multi-Scale Computational Modelling

Authors: Bashar Mahmoud, Lee Mortimer, Michael Fairweather

Abstract:

New methods have recently been introduced to improve the thermal properties of molten nitrate salts (a binary mixture of NaNO3:KNO3 in 60:40 wt.%) by doping them with minute concentrations of nanoparticles, in the range of 0.5 to 1.5 wt.%, to form a so-called nano-heat-transfer-fluid apt for thermal energy transfer and storage applications. The present study aims to assess the stability of these nanofluids using an advanced computational modelling technique, Lagrangian particle tracking. A multi-phase solid-liquid model is used, where the motion of the embedded nanoparticles in the suspending fluid is treated by an Euler-Lagrange hybrid scheme with fixed time stepping. This technique enables measurement of various multi-scale forces whose characteristic length and time scales are quite different. Two systems are considered, both consisting of 50 nm Al2O3 ceramic nanoparticles suspended in fluids of different density ratios: water (5 to 95 °C) and molten nitrate salt (220 to 500 °C), at volume fractions ranging between 1% and 5%. The dynamic properties of both phases are coupled to the ambient temperature of the fluid suspension. The three-dimensional computational region consists of a 1 μm cube with particles homogeneously distributed across the domain, and periodic boundary conditions are enforced. The particle equations of motion are integrated using the fourth-order Runge-Kutta algorithm with a very small time step, Δt, set at 10⁻¹¹ s. The implemented technique captures the key dynamics of aggregating nanoparticles: Brownian motion, soft-sphere particle-particle collisions, and Derjaguin-Landau-Verwey-Overbeek (DLVO) forces. These mechanisms underpin the predictive model of aggregation in nano-suspensions. An energy-transport-based method of predicting the thermal conductivity of the nanofluids is also used to determine the thermal properties of the suspension. The simulation results confirm the effectiveness of the technique; the values are in excellent agreement with theoretical and experimental data obtained from similar studies. The predictions indicate the role of Brownian motion and the DLVO force (comprising the repulsive electric double layer and the attractive van der Waals force) and their influence on the level of nanoparticle agglomeration, with the nano-aggregates formed found to play a key role in governing the thermal behavior of the nanofluids at various particle concentrations. The presentation will include a quantitative assessment of these forces and mechanisms, leading to conclusions about nanofluid heat transfer performance and thermal characteristics and their potential application in solar thermal energy plants.
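
A minimal sketch of the Lagrangian tracking step, assuming a single nanoparticle under Stokes drag plus a Brownian force integrated with fixed-step fourth-order Runge-Kutta (the random force is frozen over each step, a common simplification). DLVO and collision forces are omitted, and the physical values are rough illustrative magnitudes rather than the paper's parameters.

```python
# RK4 integration of one nanoparticle's velocity under drag + Brownian force.
import numpy as np

kB = 1.380649e-23            # Boltzmann constant (J/K)
T = 300.0                    # temperature (K)
d = 50e-9                    # particle diameter (m)
rho_p = 3970.0               # alumina density (kg/m^3)
mu = 1.0e-3                  # fluid dynamic viscosity (Pa s)
m = rho_p * np.pi * d**3 / 6.0
zeta = 3.0 * np.pi * mu * d  # Stokes drag coefficient
dt = 1e-11                   # same order as the paper's time step (s)

rng = np.random.default_rng(5)
f_std = np.sqrt(2.0 * kB * T * zeta / dt)  # Brownian force std per component

def accel(v, f_brownian):
    return (-zeta * v + f_brownian) / m

v = np.zeros(3)
for _ in range(10_000):
    fb = rng.normal(0.0, f_std, 3)
    k1 = accel(v, fb)                      # classic RK4 stages
    k2 = accel(v + 0.5 * dt * k1, fb)
    k3 = accel(v + 0.5 * dt * k2, fb)
    k4 = accel(v + dt * k3, fb)
    v = v + dt / 6.0 * (k1 + 2.0 * k2 + 2.0 * k3 + k4)
print("rms velocity (m/s):", np.sqrt((v**2).mean()).round(3))
```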

Keywords: thermal energy storage, molten salt, nano-fluids, multi-scale computational modelling

Procedia PDF Downloads 191
4146 Experimental and CFD Simulation of the Jet Pump for Air Bubbles Formation

Authors: L. Grinis, N. Lubashevsky, Y. Ostrovski

Abstract:

A jet pump is a type of pump that accelerates the flow of a secondary (driven) fluid by introducing a motive fluid at high velocity into a converging-diverging nozzle. Jet pumps are also known as eductors or ejectors depending on the motive phase: the ejector's motive fluid is gaseous, usually steam or air, while the eductor's motive fluid is a liquid, usually water. Jet pumps are devices that use air bubbles and are widely used in wastewater treatment processes. In this work, we discuss the characteristics of the jet pump and the computational simulation of this device. To find the optimal angle and depth for the air pipe, so as to achieve the maximum air volumetric flow rate, an experimental apparatus was constructed to ascertain the best geometrical configuration for this new type of jet pump. Using 3D printing technology, a series of jet pumps was printed and tested with the aim of maximizing the air flow rate as a function of the angle and insertion depth of the air pipe. The experimental results show a major difference of up to 300% in performance (ratio of air flow rate to supplied power) between the different pumps, where the optimal geometric model has an insertion angle of 60° and an air pipe insertion depth ending at the center of the mixing chamber. The differences between the pumps were further explained using CFD to better understand the factors that affect the air flow rate. The validity of the computational simulation and the corresponding assumptions was proved experimentally, with the simulations showing a high degree of congruence with the results of the laboratory tests. This study demonstrates the potential of using the jet pump in many practical applications.

Keywords: air bubbles, CFD simulation, jet pump, applications

Procedia PDF Downloads 243
4145 Multifluid Computational Fluid Dynamics Simulation for Sawdust Gasification inside an Industrial Scale Fluidized Bed Gasifier

Authors: Vasujeet Singh, Pruthiviraj Nemalipuri, Vivek Vitankar, Harish Chandra Das

Abstract:

For the correct prediction of thermal and hydraulic performance (bed voidage, suspension density, pressure drop, heat transfer, and combustion kinetics), one should incorporate the correct parameters in the computational fluid dynamics simulation of a fluidized bed gasifier. Given the scarcity of fossil fuels and the energy demand of an increasing population, researchers need to shift their attention to alternatives to fossil fuels. The current research work focuses on the hydrodynamic behavior and gasification of sawdust inside a 2D industrial-scale fluidized bed gasifier (FBG) using the Eulerian-Eulerian multifluid model. The present numerical model is validated with experimental data. The model is then extended to predict the gasification characteristics of sawdust by incorporating eight heterogeneous reactions (moisture release, volatile cracking, tar cracking, tar oxidation, char combustion, CO₂ gasification, steam gasification, and methanation) and five homogeneous reactions (oxidation of CO, CH₄, and H₂, and the forward and backward water-gas shift (WGS) reactions). In the results section, the composition of the gasification products is analyzed, along with the hydrodynamics of the sawdust and sand phases, the heat transfer between the gas, sand, and sawdust, and the rates of the different homogeneous and heterogeneous reactions along the height of the domain.

Keywords: devolatilization, Eulerian-Eulerian, fluidized bed gasifier, mathematical modelling, sawdust gasification

Procedia PDF Downloads 107
4144 A Robust and Efficient Segmentation Method Applied for Cardiac Left Ventricle with Abnormal Shapes

Authors: Peifei Zhu, Zisheng Li, Yasuki Kakishita, Mayumi Suzuki, Tomoaki Chono

Abstract:

Segmentation of the left ventricle (LV) from cardiac ultrasound images provides a quantitative functional analysis of the heart for diagnosing disease. The Active Shape Model (ASM) is a widely used approach for LV segmentation but suffers from the drawback that the initialization of the shape model may not be sufficiently close to the target, especially when dealing with the abnormal shapes found in disease. In this work, a two-step framework is proposed to improve the accuracy and speed of model-based segmentation. First, a robust and efficient detector based on Hough forests is proposed to localize cardiac feature points, and these points are used to predict the initial fit of the LV shape model. Second, to achieve more accurate and detailed segmentation, ASM is applied to further fit the LV shape model to the cardiac ultrasound image. The performance of the proposed method is evaluated on a dataset of 800 cardiac ultrasound images, most of which show abnormal shapes. The proposed method is compared to several combinations of ASM and existing initialization methods. The experimental results demonstrate that the accuracy of feature point detection for initialization was improved by 40% compared to the existing methods. Moreover, the proposed method significantly reduces the number of necessary ASM fitting loops, thus speeding up the whole segmentation process. The proposed method is therefore able to achieve more accurate and efficient segmentation results and is applicable to unusual heart shapes associated with cardiac diseases, such as left atrial enlargement.

Keywords: hough forest, active shape model, segmentation, cardiac left ventricle

Procedia PDF Downloads 340
4143 Prediction Modeling of Alzheimer’s Disease and Its Prodromal Stages from Multimodal Data with Missing Values

Authors: M. Aghili, S. Tabarestani, C. Freytes, M. Shojaie, M. Cabrerizo, A. Barreto, N. Rishe, R. E. Curiel, D. Loewenstein, R. Duara, M. Adjouadi

Abstract:

A major challenge in medical studies, especially longitudinal ones, is the problem of missing measurements, which hinders the effective application of many machine learning algorithms. Furthermore, recent Alzheimer's disease studies have focused on the delineation of early mild cognitive impairment (EMCI) and late mild cognitive impairment (LMCI) from cognitively normal controls (CN), which is essential for developing effective and early treatment methods. To address these challenges, this paper explores the potential of the eXtreme Gradient Boosting (XGBoost) algorithm for handling missing values in multiclass classification. We seek a generalized classification scheme where all prodromal stages of the disease are considered simultaneously in the classification and decision-making processes. Given the large number of subjects (1631) included in this study, and in the presence of almost 28% missing values, we investigated the performance of XGBoost on the classification of the four classes AD, CN, EMCI, and LMCI. Using a 10-fold cross-validation technique, XGBoost is shown to outperform other state-of-the-art classification algorithms by 3% in terms of accuracy and F-score. Our model achieved an accuracy of 80.52%, a precision of 80.62%, and a recall of 80.51%, supporting the more natural and promising multiclass classification.
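
The appeal of XGBoost in this setting is that it consumes feature matrices containing NaNs directly (missing values follow a learned default branch at each split), so no imputation step is needed. The sketch below mimics the setup on synthetic four-class data with roughly 28% of entries removed; the real ADNI features are not reproduced, so the printed score will not match the study's.

```python
# 4-class XGBoost classification with ~28% missing values, 10-fold CV.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from xgboost import XGBClassifier

X, y = make_classification(n_samples=1631, n_features=20, n_informative=8,
                           n_classes=4, random_state=0)
rng = np.random.default_rng(0)
X[rng.random(X.shape) < 0.28] = np.nan     # mimic the study's missingness

clf = XGBClassifier(n_estimators=200, eval_metric="mlogloss")
print("10-fold accuracy:", cross_val_score(clf, X, y, cv=10).mean().round(4))
```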

Keywords: eXtreme gradient boosting, missing data, Alzheimer disease, early mild cognitive impairment, late mild cognitive impairment, multiclass classification, ADNI, support vector machine, random forest

Procedia PDF Downloads 188
4142 Automatic Detection of Traffic Stop Locations Using GPS Data

Authors: Areej Salaymeh, Loren Schwiebert, Stephen Remias, Jonathan Waddell

Abstract:

Extracting information from new data sources has emerged as a crucial task in many traffic planning processes, such as identifying traffic patterns, route planning, traffic forecasting, and locating infrastructure improvements. Given the advanced technologies used to collect Global Positioning System (GPS) data from dedicated GPS devices, GPS-equipped phones, and navigation tools, intelligent data analysis methodologies are necessary to mine this raw data. In this research, an automatic detection framework is proposed to identify and classify the locations of stopped GPS waypoints into two main categories: signalized intersections and highway congestion. The Delaunay triangulation is used to perform this assessment in the clustering phase. While most existing clustering algorithms need assumptions about the data distribution, the effectiveness of the Delaunay triangulation relies on triangulating geographical data points without such assumptions. Our proposed method starts by cleaning noise from the data and normalizing it. Next, the framework identifies stoppage points by calculating the traveled distance. The last step is to use clustering to form groups of waypoints for signalized traffic and highway congestion, after which a binary classifier, based on the length of the cluster, is applied to distinguish highway congestion from signalized stop points. The proposed framework identifies the stop positions and congestion points correctly in around 99.2% of trials, showing that it is possible to distinguish the two categories with high accuracy using limited GPS data.
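
The Delaunay-based clustering step can be sketched as: triangulate the stopped waypoints, drop edges longer than a threshold, and take the connected components as clusters. The coordinates and threshold below are illustrative, and the stop-detection and congestion-classification stages are omitted.

```python
# Delaunay triangulation -> prune long edges -> connected components.
import numpy as np
from scipy.sparse import coo_matrix
from scipy.sparse.csgraph import connected_components
from scipy.spatial import Delaunay

rng = np.random.default_rng(4)
# Two synthetic stop clusters, e.g. around two signalized intersections.
pts = np.vstack([rng.normal([0.0, 0.0], 5.0, (40, 2)),
                 rng.normal([200.0, 50.0], 5.0, (40, 2))])

tri = Delaunay(pts)
edges = set()
for simplex in tri.simplices:          # collect the edges of each triangle
    for i in range(3):
        a, b = sorted((simplex[i], simplex[(i + 1) % 3]))
        edges.add((a, b))

max_len = 30.0                         # drop edges longer than this
short = [e for e in edges if np.linalg.norm(pts[e[0]] - pts[e[1]]) <= max_len]
rows, cols = zip(*short)
graph = coo_matrix((np.ones(len(short)), (rows, cols)), shape=(len(pts),) * 2)
n_clusters, labels = connected_components(graph, directed=False)
print("clusters found:", n_clusters)
```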

Keywords: Delaunay triangulation, clustering, intelligent transportation systems, GPS data

Procedia PDF Downloads 276
4141 A Human Factors Approach to Workload Optimization for On-Screen Review Tasks

Authors: Christina Kirsch, Adam Hatzigiannis

Abstract:

Rail operators and maintainers worldwide are increasingly replacing walking patrols in the rail corridor with mechanized track patrols (essentially data capture on trains) and on-screen reviews of track infrastructure in centralized review facilities. The benefit is that infrastructure workers are less exposed to the dangers of the rail corridor. The impact is a significant change in work design, from walking track sections and direct observation in the real world to sedentary jobs reviewing captured data on screens in a review facility. Defects in rail infrastructure can have catastrophic consequences, so reviewer performance regarding the accuracy and efficiency of reviews within the available time frame is essential to ensure safety and operational performance. Rail operators must optimize workload and resource loading to transition to on-screen reviews successfully, and therefore need to know which workload assessment methodologies provide reliable and valid data for resourcing on-screen reviews. This paper compares objective workload measures, including track difficulty ratings and review distance covered per hour, with subjective workload assessments (NASA TLX), and analyses the link between workload and reviewer performance, including sensitivity, precision, and overall accuracy. An experimental study was completed with eight on-screen reviewers, including infrastructure workers and engineers, reviewing track sections with different levels of track difficulty over nine days. Each day the reviewers completed four 90-minute sessions of on-screen inspection of the track infrastructure. Data regarding the speed of review (km/hour), detected defects, false negatives, and false positives were collected. Additionally, all reviewers completed a subjective workload assessment (NASA TLX) after each 90-minute session, and a short employee engagement survey at the end of the study period captured impacts on job satisfaction and motivation. The results showed that objective measures of track difficulty align with subjective mental demand, temporal demand, effort, and frustration in the NASA TLX. Interestingly, review speed correlated with subjective assessments of physical and temporal demand, but not with mental demand. Subjective performance ratings correlated with all accuracy measures and with review speed. The results showed that subjective NASA TLX workload assessments accurately reflect objective workload. The analysis of the impact of workload on performance showed that subjective mental demand correlated with high precision (accurately detected defects, not false positives). Conversely, high temporal demand was negatively correlated with sensitivity, the percentage of existing defects detected. Review speed was significantly correlated with false negatives: as review speed increased, accuracy declined. On the other hand, review speed correlated with subjective performance assessments; reviewers thought their performance was higher when they reviewed the track sections faster, despite the decline in accuracy. The study results were used to optimize resourcing and ensure that reviewers had enough time to review the allocated track sections, improving defect detection rates in accordance with the efficiency-thoroughness trade-off.
Overall, the study showed the importance of a multi-method approach to workload assessment and optimization, combining subjective workload assessments with objective workload and performance measures to ensure that recommendations for work system optimization are evidence-based and reliable.

Keywords: automation, efficiency-thoroughness trade-off, human factors, job design, NASA TLX, performance optimization, subjective workload assessment, workload analysis

Procedia PDF Downloads 121
4140 Hourly Solar Radiations Predictions for Anticipatory Control of Electrically Heated Floor: Use of Online Weather Conditions Forecast

Authors: Helene Thieblemont, Fariborz Haghighat

Abstract:

Energy storage systems play a crucial role in decreasing building energy consumption during peak periods and in expanding the use of renewable energies in buildings. To provide high building thermal performance, the energy storage system has to be properly controlled to ensure good energy performance while maintaining satisfactory thermal comfort for the building's occupants. In the case of passive discharge storage, the required amount of energy must be defined in advance to avoid overheating the building. Consequently, anticipatory supervisory control strategies have been developed that forecast future energy demand and production in order to coordinate systems. Such strategies rely on predictions, mainly weather forecasts. However, while the forecasted hourly outdoor temperature may be found online with high accuracy, solar radiation predictions are usually not available online. To estimate them, this paper proposes an approach based on forecasted weather conditions. Several methods of correlating the hourly weather conditions forecast to the real hourly solar radiation are compared. The results show that using the weather conditions forecast allows the next day's solar radiation to be estimated with acceptable accuracy. Moreover, this technique yields hourly data that may be used in building models. As a result, this solar radiation prediction model may help to implement model-based controllers such as Model Predictive Control.

Keywords: anticipatory control, model predictive control, solar radiation forecast, thermal storage

Procedia PDF Downloads 271
4139 Creation of a Realistic Railway Simulator Developed on a 3D Graphic Game Engine Using a Numerical Computing Programming Environment

Authors: Kshitij Ansingkar, Yohei Hoshino, Liangliang Yang

Abstract:

Advances in algorithms related to autonomous systems have made it possible to research improving the accuracy of a train's location, which has the capability of increasing the throughput of a railway network without the need for additional infrastructure. To develop such a system, the railway industry requires data to test sensor fusion theories or implement simultaneous localization and mapping (SLAM) algorithms. Though such simulation data and ground-truth datasets are available for testing vehicle automation algorithms, due to regulations and economic considerations there is a dearth of such datasets in the railway industry. Thus, there is a need for a simulation environment that can generate realistic synthetic datasets. This paper proposes (1) to leverage the capabilities of open-source 3D graphic rendering software to create a visualization of the environment, (2) to utilize open-source 3D geospatial data for accurate visualization, and (3) to integrate the graphic rendering software with a programming language and numerical computing platform. To develop such an integrated platform, this paper utilizes the computing platform's advanced sensor models, such as LIDAR, camera, IMU, or GPS, and merges them with the 3D rendering of the game engine to generate high-quality synthetic data. These datasets can then be used to train railway models and improve the accuracy of a train's location.

Keywords: 3D game engine, 3D geospatial data, dataset generation, railway simulator, sensor fusion, SLAM

Procedia PDF Downloads 8
4138 Characterizing Aquifer Layers of Karstic Springs in Nahavand Plain Using Geoelectrical and Electromagnetic Methods

Authors: A. Taheri Tizro, Rojin Fasihi

Abstract:

The geoelectrical method is one of the most effective tools for determining subsurface lithological layers. The electromagnetic method is a newer method that can also play an important role in determining and separating subsurface layers with acceptable accuracy. In the present research, 10 electromagnetic soundings were collected upstream of 5 karstic springs, Famaseb, Faresban, Ghale Baroodab, Gian, and Gonbad kabood, in the Nahavand plain of Hamadan province. Using the resulting data, electromagnetic logs were prepared at different depths and compared with 5 logs from the geoelectric method. The comparison showed that the NRMSE values of the geoelectric method for the 5 springs Famaseb, Faresban, Ghale Baroodab, Gian, and Gonbad kabood were 7.11, 7.50, 44.93, 3.99, and 2.99, respectively, while in the electromagnetic method the values for the investigated springs were about 1.4, 1.1, 1.2, 1.5, and 1.3, respectively. In addition to the similarity of the results of the two methods, it was found that the accuracy of the electromagnetic method, based on the NRMSE values, is higher than that of the geoelectric method. The advantages of the electromagnetic method compared to the geoelectric method are that it is less time-consuming and less costly. The depth to the water table is the final result of this research work: at the springs of Famaseb, Faresban, Ghale Baroodab, Gian, and Gonbad kabood it is about 6, 20, 10, 2, and 36 meters, respectively. The maximum thickness of the aquifer layer was estimated at Gonbad kabood spring (36 meters) and the lowest at Gian spring (2 meters). These results can be used to identify the water potential of the region in order to better manage water resources.

Keywords: karst spring, geoelectric, aquifer layers, Nahavand

Procedia PDF Downloads 72
4137 The Rational Design of Original Anticancer Agents Using Computational Approach

Authors: Majid Farsadrooh, Mehran Feizi-Dehnayebi

Abstract:

Serum albumin is the most abundant protein in the circulatory system of a wide variety of organisms. It is a significant macromolecule that contributes to osmotic blood pressure and plays a major role in drug disposition and efficacy. Molecular docking simulation can improve in silico drug design and discovery procedures, proposing a lead compound and developing it from the discovery step to the clinic. In this study, molecular docking simulation was applied to select a lead molecule by investigating the interaction of two anticancer drugs (Alitretinoin and Abemaciclib) with human serum albumin (HSA). A series of new compounds (a-e) was then suggested through modification of the lead molecule. Density functional theory (DFT), including MEP maps and HOMO-LUMO analysis, was used for the newly proposed compounds to predict the reactive zones on the molecules, their stability, and their chemical reactivity. The DFT calculations illustrated that the new compounds are stable. The estimated binding free energies (ΔG) for compounds a-e were -5.78, -5.81, -5.95, -5.98, and -6.11 kcal/mol, respectively. Finally, the pharmaceutical properties and toxicity of the new compounds were estimated with the OSIRIS DataWarrior software. The results indicated no risk of tumorigenic, mutagenic, irritant, or reproductive effects for compounds d and e. As a result, compounds d and e could be selected for further study as potential therapeutic candidates. Moreover, employing molecular docking simulation together with the prediction of pharmaceutical properties helps in discovering new potential drug compounds.

Keywords: drug design, anticancer, computational studies, DFT analysis

Procedia PDF Downloads 78
4136 Characterization of Femur Development in Mice: A Computational Approach

Authors: Moncayo Donoso Miguelangel, Guevara Morales Johana, Kalenia Flores Kalenia, Barrera Avellaneda Luis Alejandro, Garzon Alvarado Diego Alexander

Abstract:

In mammals, long bones are formed by ossification of a cartilage mold during early embryonic development, forming structures called secondary ossification centers (SOCs), a primary ossification center (POC), and growth plates; this last structure is responsible for long bone growth. During femur growth, the morphology of the growth plate and the SOCs may vary across developmental stages, but so far there are no detailed morphological studies of the development process from embryonic to adult stages. In this work, we carried out a morphological characterization of femur development in mice from the embryonic period to adulthood. Embryos aged 15, 17, and 19 days and mice aged 1, 7, 14, 35, 46, and 52 days were used. Samples were analyzed by a computational approach, using 3D images obtained by micro-CT imaging. The results obtained in this study showed that the femur, its growth plates, and the SOCs undergo morphological changes during different stages of development, including changes in shape, position, and thickness. These variations may be related to a response to the mechanical loads imposed by the muscles developing around the femur, and to the high activity during early stages that is necessary to support the high growth rates of the first weeks of development. This study is important for improving our knowledge of ossification patterns at every stage of bone development and for characterizing the morphological changes of structures important in bone growth, such as the SOCs and growth plates.

Keywords: development, femur, growth plate, mice

Procedia PDF Downloads 348
4135 Utilizing Computational Fluid Dynamics in the Analysis of Natural Ventilation in Buildings

Authors: A. W. J. Wong, I. H. Ibrahim

Abstract:

Increasing urbanisation has driven building designers to incorporate natural ventilation into the designs of sustainable buildings. This project utilises Computational Fluid Dynamics (CFD) to investigate the natural ventilation of an academic building, SIT@SP, using an assessment criterion based on daily mean temperature and mean velocity. The areas of interest are the pedestrian levels of the first and fourth storeys of the building. A reference case recommended by the Architectural Institute of Japan was used to validate the simulation model, and the validated model was then used for coupled simulations of SIT@SP and the neighbouring geometries under two wind speeds. Both steady and transient simulations were used to identify differences in results; the two agree, with the transient simulation identifying peak velocities during flow development. At the lower wind speed, the first level was sufficiently ventilated while the fourth level was not; at the higher wind speed, the first level experienced excessive wind velocities while the fourth level was adequately ventilated. The flow velocity on the fourth level was consistently lower than that on the first level, which is attributed to either simulation model error or poor building design. SIT@SP is concluded to have a sufficiently ventilated first level and an insufficiently ventilated fourth level. Future work for this project extends to modifying the urban geometry, improving the simulation model, evaluating other assessment metrics, and extending the area of interest to the entire building.

Keywords: buildings, CFD Simulations, natural ventilation, urban airflow

Procedia PDF Downloads 221
4134 Energy Consumption Statistic of Gas-Solid Fluidized Beds through Computational Fluid Dynamics-Discrete Element Method Simulations

Authors: Lei Bi, Yunpeng Jiao, Chunjiang Liu, Jianhua Chen, Wei Ge

Abstract:

Two energy paths are proposed from a thermodynamic viewpoint. Energy consumption means the total power input to the specific system, and it can be decomposed into energy retention and energy dissipation. Energy retention is the variation of the accumulated mechanical energy in the system, and energy dissipation is the energy converted to heat by irreversible processes. Based on the Computational Fluid Dynamics-Discrete Element Method (CFD-DEM) framework, the different energy terms are quantified from the specific flow elements, the fluid cells and particles, as well as their interactions with the wall. Direct energy consumption statistics are carried out for both cold and hot flow in gas-solid fluidization systems. To clarify the statistical method, it is necessary to identify which system is studied: the particle-fluid system or the particle sub-system. For the cold flow, the total energy consumption of the particle sub-system can predict the onset of bubbling and turbulent fluidization, while the trends of local energy consumption can reflect the dynamic evolution of mesoscale structures. For the hot flow, different heat transfer mechanisms are analyzed, and the original solver is modified to reproduce the experimental results. The influence of the heat transfer mechanisms and the heat source on energy consumption is also investigated. The proposed statistical method has proven to be energy-conservative and easy to conduct, and it is hoped that it can be applied to other multiphase flow systems.

Keywords: energy consumption statistic, gas-solid fluidization, CFD-DEM, regime transition, heat transfer mechanism

Procedia PDF Downloads 68
4133 Characterisation of Wind-Driven Ventilation in Complex Terrain Conditions

Authors: Daniel Micallef, Damien Bounaudet, Robert N. Farrugia, Simon P. Borg, Vincent Buhagiar, Tonio Sant

Abstract:

The physical effects of upstream flow obstructions, such as vegetation, on the cross-ventilation of a building are important for issues such as indoor thermal comfort, and modelling such effects in Computational Fluid Dynamics simulations can be challenging. The aim of this work is to establish the cross-ventilation jet behaviour in such complex terrain conditions, as well as to provide guidelines on the implementation of CFD numerical simulations in order to model complex terrain features such as vegetation in an efficient manner. The methodology consists of on-site measurements on a test cell coupled with numerical simulations. It was found that the cross-ventilation flow is highly turbulent despite the very low velocities encountered inside the test cells. While no direct measurement of the jet direction was made, the measurements indicate that the flow tends to be reversed from the leeward to the windward side. Modelling such a phenomenon proves challenging and is strongly influenced by how the vegetation is modelled: a solid vegetation model tends to predict the direction and magnitude of the flow better than a porous vegetation approach. A simplified terrain model was also shown to provide good comparisons with observation. The findings have important implications for the study of cross-ventilation in complex terrain conditions, since the flow direction does not remain trivial, as it does in the traditional isolated building case.

Keywords: complex terrain, cross-ventilation, wind driven ventilation, wind resource, computational fluid dynamics, CFD

Procedia PDF Downloads 396