Search results for: numerical weather prediction
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 6269

4469 Big Data in Telecom Industry: Effective Predictive Techniques on Call Detail Records

Authors: Sara ElElimy, Samir Moustafa

Abstract:

Mobile network operators face many challenges in the digital era, especially with high demands from customers. Since mobile network operators are a major source of big data, traditional techniques are no longer effective in the new era of big data, the Internet of Things (IoT), and 5G; as a result, handling large and diverse datasets effectively becomes a vital task for operators as data volumes continue to grow and networks move from long term evolution (LTE) to 5G. There is therefore an urgent need for effective big data analytics to predict future demands, traffic, and network performance in order to fulfill the requirements of the fifth generation of mobile network technology. In this paper, we introduce data science techniques using machine learning and deep learning algorithms: the autoregressive integrated moving average (ARIMA), Bayesian-based curve fitting, and a recurrent neural network (RNN) are employed in a data-driven application for mobile network operators. The main framework of the models comprises identification of each model's parameters, estimation, prediction, and a final data-driven application of the prediction to business and network performance problems. The models are applied to the Telecom Italia Big Data Challenge call detail records (CDRs) datasets. Evaluation with well-known criteria shows that ARIMA (a machine-learning-based model) is more accurate as a predictive model on this dataset than the RNN (a deep learning model).
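
As a minimal illustration of the ARIMA side of the comparison, the sketch below fits an AR(1) model (the ARIMA(1,0,0) special case) by ordinary least squares and forecasts forward. The synthetic "traffic" series and all variable names are illustrative assumptions, not data from the paper.

```python
# Minimal sketch: fitting an AR(1) model, the ARIMA(1,0,0) special case,
# by ordinary least squares on a synthetic traffic-like series.

def fit_ar1(series):
    """Return (phi, c) minimizing sum of (x_t - c - phi * x_{t-1})^2."""
    x_prev, x_next = series[:-1], series[1:]
    n = len(x_prev)
    mean_prev = sum(x_prev) / n
    mean_next = sum(x_next) / n
    cov = sum((a - mean_prev) * (b - mean_next) for a, b in zip(x_prev, x_next))
    var = sum((a - mean_prev) ** 2 for a in x_prev)
    phi = cov / var
    c = mean_next - phi * mean_prev
    return phi, c

def forecast_ar1(last_value, phi, c, steps):
    """Iterate the fitted recurrence x_{t+1} = c + phi * x_t."""
    out, x = [], last_value
    for _ in range(steps):
        x = c + phi * x
        out.append(x)
    return out

# Synthetic series generated by x_{t+1} = 5 + 0.8 * x_t (no noise),
# so least squares should recover phi ~= 0.8 and c ~= 5 exactly.
series = [10.0]
for _ in range(50):
    series.append(5 + 0.8 * series[-1])

phi, c = fit_ar1(series)
```

A real CDR workload would add differencing (the "I" in ARIMA) and a moving-average term; the least-squares step above is only the autoregressive core.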

Keywords: big data analytics, machine learning, CDRs, 5G

Procedia PDF Downloads 142
4468 Predicting Costs in Construction Projects with Machine Learning: A Detailed Study Based on Activity-Level Data

Authors: Soheila Sadeghi

Abstract:

Construction projects are complex and often subject to significant cost overruns due to the multifaceted nature of the activities involved. Accurate cost estimation is crucial for effective budget planning and resource allocation. Traditional methods for predicting overruns often rely on expert judgment or analysis of historical data, which can be time-consuming, subjective, and may fail to consider important factors. However, with the increasing availability of data from construction projects, machine learning techniques can be leveraged to improve the accuracy of overrun predictions. This study applied machine learning algorithms to enhance the prediction of cost overruns in a case study of a construction project. The methodology involved the development and evaluation of two machine learning models: Random Forest and Neural Networks. Random Forest can handle high-dimensional data, capture complex relationships, and provide feature importance estimates. Neural Networks, particularly Deep Neural Networks (DNNs), are capable of automatically learning and modeling complex, non-linear relationships between input features and the target variable. These models can adapt to new data, reduce human bias, and uncover hidden patterns in the dataset. The findings of this study demonstrate that both Random Forest and Neural Networks can significantly improve the accuracy of cost overrun predictions compared to traditional methods. The Random Forest model also identified key cost drivers and risk factors, such as changes in the scope of work and delays in material delivery, which can inform better project risk management. However, the study acknowledges several limitations. First, the findings are based on a single construction project, which may limit the generalizability of the results to other projects or contexts. 
Second, the dataset, although comprehensive, may not capture all relevant factors influencing cost overruns, such as external economic conditions or political factors. Third, the study focuses primarily on cost overruns, while schedule overruns are not explicitly addressed. Future research should explore the application of machine learning techniques to a broader range of projects, incorporate additional data sources, and investigate the prediction of both cost and schedule overruns simultaneously.
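
One common way tree ensembles such as Random Forest surface "key cost drivers" is permutation importance: shuffle one feature column and measure how much the error grows. The sketch below applies the idea to a toy linear predictor; the data, the feature labels, and the model itself are illustrative assumptions, not the paper's Random Forest.

```python
import random

def mae(y_true, y_pred):
    """Mean absolute error."""
    return sum(abs(a - b) for a, b in zip(y_true, y_pred)) / len(y_true)

def permutation_importance(model, X, y, feature_idx, rng):
    """Increase in MAE when one feature column is shuffled."""
    base = mae(y, [model(row) for row in X])
    shuffled = [row[feature_idx] for row in X]
    rng.shuffle(shuffled)
    X_perm = [row[:feature_idx] + [v] + row[feature_idx + 1:]
              for row, v in zip(X, shuffled)]
    return mae(y, [model(row) for row in X_perm]) - base

rng = random.Random(0)
# Toy data: "overrun" depends only on feature 0 (think: scope changes);
# feature 1 is an irrelevant column.
X = [[rng.uniform(0, 10), rng.uniform(0, 10)] for _ in range(200)]
y = [3.0 * row[0] for row in X]

def model(row):
    return 3.0 * row[0]          # a perfect predictor of y

imp_scope = permutation_importance(model, X, y, 0, rng)
imp_noise = permutation_importance(model, X, y, 1, rng)
```

Shuffling the driving feature degrades the error sharply, while shuffling the irrelevant one changes nothing, which is exactly the ranking signal used to prioritize risk factors.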

Keywords: cost prediction, machine learning, project management, random forest, neural networks

Procedia PDF Downloads 65
4467 Development and Comparative Analysis of a New C-H Split and Recombine Micromixer

Authors: Vladimir Viktorov, Readul Mahmud, Carmen Visconte

Abstract:

In the present study, a new passive micromixer based on the SAR principle, combining the operating concepts of the known Chain and H mixers and therefore called the C-H micromixer, is developed and studied. The efficiency and pressure drop of the C-H mixer, along with those of two known SAR passive mixers named Chain and Tear-drop, were investigated numerically at Reynolds numbers up to 100, taking species transport into account. At the same time, experimental tests of the Chain and Tear-drop mixers were carried out at low Reynolds numbers, in the range 0.1 ≤ Re ≤ 4.2. Numerical and experimental results agree closely, which validates the numerical simulation approach. Results show that the mixing efficiency of the Tear-drop mixer is good except in the middle range of Reynolds numbers, but its pressure drop is too high; conversely, the Chain mixer has a moderate pressure drop but relatively low mixing efficiency at low and middle Re numbers. The C-H mixer, in contrast, gives excellent mixing efficiency over the whole range of Re numbers. In addition, the C-H mixer shows about 3 and 2 times lower pressure drop than the Tear-drop mixer and the Chain mixer, respectively.

Keywords: CFD, micromixing, passive micromixer, SAR

Procedia PDF Downloads 312
4466 Study of Parameters Influencing Dwell Times for Trains

Authors: Guillaume Craveur

Abstract:

The work presented here is a study of several parameters identified as influencing dwell times for trains. Three kinds of rolling stock are studied for this project, and the parameters considered are the number of passengers, the allocation of passengers, their priorities, the platform height, the door width, and the train design. To support this study, many records were taken in several stations in Paris (France). Numerical simulations were then carried out to study these parameters. The goal is to quantify the impact of each parameter on dwell times. For example, this study highlights the impact of platform height and of the presence of steps between the platform and the train. Three types of station platform are covered by this study: the 'optimum' platform, which is 920 mm high; the standard platform, which is 550 mm high; and the high platform, which is 1150 mm high; different kinds of steps exist to fill the resulting gaps. To conclude, this study shows the impact of these parameters on dwell times and how that impact varies with population size.

Keywords: dwell times, numerical tools, rolling stock, platforms

Procedia PDF Downloads 337
4465 Effective Stacking of Deep Neural Models for Automated Object Recognition in Retail Stores

Authors: Ankit Sinha, Soham Banerjee, Pratik Chattopadhyay

Abstract:

Automated product recognition in retail stores is an important real-world application in the domain of computer vision and pattern recognition. In this paper, we consider the problem of automatically identifying the classes of products placed on racks in retail stores from an image of the rack and information about the query/product images. We improve upon existing approaches in terms of effectiveness and memory requirement by developing a two-stage object detection and recognition pipeline comprising a Faster-RCNN-based object localizer that detects the object regions in the rack image and a ResNet-18-based image encoder that classifies the detected regions into the appropriate classes. Each of the models is fine-tuned on appropriate datasets for better prediction, and data augmentation is performed on each query image to prepare an extensive gallery set for fine-tuning the ResNet-18-based product recognition model. The encoder is trained using a triplet loss function with an online hard-negative-mining strategy for improved prediction. The proposed models are lightweight and can be connected in an end-to-end manner during deployment to automatically identify each product placed in a rack image. Extensive experiments on the Grozi-32k and GP-180 datasets verify the effectiveness of the proposed model.
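
The triplet loss mentioned above has a simple closed form: pull an anchor embedding toward a positive of the same class and push it away from a negative, up to a margin. The sketch below uses toy 2-D embeddings and an assumed margin of 0.2; the real encoder produces high-dimensional ResNet-18 features.

```python
import math

def euclidean(a, b):
    """Euclidean distance between two embedding vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def triplet_loss(anchor, positive, negative, margin=0.2):
    """Hinge on (d(a,p) - d(a,n) + margin); zero when the negative is far."""
    return max(0.0, euclidean(anchor, positive)
               - euclidean(anchor, negative) + margin)

anchor, positive = [0.0, 0.0], [1.0, 0.0]
# A "hard" negative sits closer to the anchor than the positive does,
# so it yields a positive loss -- these are the triplets that
# online hard-negative mining selects for training.
hard = triplet_loss(anchor, positive, [0.5, 0.0])
# An easy negative is far away, so the loss is clipped to zero.
easy = triplet_loss(anchor, positive, [5.0, 0.0])
```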

Keywords: retail stores, faster-RCNN, object localization, ResNet-18, triplet loss, data augmentation, product recognition

Procedia PDF Downloads 162
4464 Numerical Simulation of Ultraviolet Disinfection in a Water Reactor

Authors: H. Shokouhmand, H. Sobhani, B. Sajadi, M. Degheh

Abstract:

In recent years, experimental and numerical investigation of water UV reactors has increased significantly. The main drawback of experimental methods is the limited scope and high cost of surveying UV reactor features. In this study, a CFD model using the Eulerian-Lagrangian framework is applied to analyze the disinfection performance of a closed-conduit reactor containing four UV lamps perpendicular to the flow. A discrete ordinates (DO) model was employed to evaluate the UV irradiance field. To investigate the contribution of each lamp to inactivation performance, several models with one or two lit lamps in various arrangements were considered in addition to the reference model (with all four lamps lit). All results were reported for three inactivation kinetics. The results showed that the log inactivation of the model with the two central lamps lit was between 88 and 99 percent, close to the reference model's results. Also, the closer the lamps are to the main flow region, the greater their effect on microbial inactivation. The effects of operational parameters such as water flow rate, inlet water temperature, and lamp power were also studied.
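
For readers unfamiliar with the "log inactivation" metric, the sketch below shows first-order (Chick-Watson) kinetics, one of the standard models used for UV reactors: the surviving fraction is N/N0 = 10^(-k·dose), so log inactivation equals k·dose. The rate constant and dose values are illustrative assumptions, not the paper's kinetics.

```python
import math

def log_inactivation(n0, n):
    """Base-10 log reduction of the microbial population."""
    return math.log10(n0 / n)

def surviving_fraction(k, dose):
    """Chick-Watson survival for rate constant k (cm^2/mJ), dose (mJ/cm^2)."""
    return 10.0 ** (-k * dose)

# Illustrative numbers: k = 0.5 cm^2/mJ at a dose of 4 mJ/cm^2
# gives a 2-log (99%) inactivation.
frac = surviving_fraction(k=0.5, dose=4.0)
logs = log_inactivation(1.0, frac)
```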

Keywords: Eulerian-Lagrangian framework, inactivation kinetics, log inactivation, water UV reactor

Procedia PDF Downloads 252
4463 Feature Analysis of Predictive Maintenance Models

Authors: Zhaoan Wang

Abstract:

Research in predictive maintenance modeling has advanced in recent years to predict failures and needed maintenance with high accuracy, saving cost and improving manufacturing efficiency. However, classic prediction models provide little insight into which features contribute most to failure. By analyzing and quantifying feature importance in predictive maintenance models, cost savings can be optimized based on business goals. First, multiple classifiers are evaluated with cross-validation to predict multiple classes of failures. Second, predictive performance with features provided by different feature selection algorithms is further analyzed. Third, features selected by different algorithms are ranked and combined based on their predictive power. Finally, the linear explainer SHAP (SHapley Additive exPlanations) is applied to interpret classifier behavior and provide further insight into the specific roles of features in both local predictions and global model behavior. The results of the experiments suggest that certain features play dominant roles in predictive models while others have significantly less impact on overall performance. Moreover, for multi-class prediction of machine failures, the most important features vary with the type of machine failure. The results may lead to improved productivity and cost savings by prioritizing sensor deployment, data collection, and data processing of the more important features over less important ones.
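
The linear-explainer idea behind SHAP has an exact closed form: for a linear model f(x) = b + Σᵢ wᵢxᵢ with independent features, the Shapley value of feature i is wᵢ(xᵢ − E[xᵢ]), and the values sum to f(x) − f(E[x]). The sketch below demonstrates this identity; the weights and data are illustrative, not the paper's classifier.

```python
def linear_shap(weights, x, background_means):
    """Exact Shapley values for a linear model (independent features):
    phi_i = w_i * (x_i - mean_i)."""
    return [w * (xi - m) for w, xi, m in zip(weights, x, background_means)]

def f(weights, v):
    """The linear model itself (bias omitted; it cancels in the identity)."""
    return sum(w * vi for w, vi in zip(weights, v))

weights = [2.0, -1.0, 0.0]   # the third feature has no effect on the model
means = [1.0, 1.0, 1.0]      # background (expected) feature values
x = [3.0, 0.5, 10.0]         # instance being explained

phi = linear_shap(weights, x, means)
```

The zero-weight feature gets a zero attribution no matter how extreme its value, which is the "dominant vs. negligible feature" distinction the study reports.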

Keywords: automated supply chain, intelligent manufacturing, predictive maintenance, machine learning, feature engineering, model interpretation

Procedia PDF Downloads 137
4462 Finite Element Simulation of Embankment Bumps at Bridge Approaches, Comparison Study

Authors: F. A. Hassona, M. D. Hashem, R. I. Melek, B. M. Hakeem

Abstract:

A differential settlement at the end of a bridge, near the interface between the abutment and the embankment, is a persistent problem for highway agencies. The differential settlement produces the common 'bump at the end of the bridge'. Reduced steering response, driver distraction, added risk and expense in maintenance operations, and damage to a transportation agency's public image are all undesirable effects of these uneven and irregular transitions. This paper attempts to simulate the bump at the end of the bridge using the PLAXIS 2D finite element program. PLAXIS was used to simulate a laboratory model called the Bridge to Embankment Simulator of Transition (B.E.S.T.) device, which was built by others to investigate this problem. A total of six numerical simulations were conducted using a hardening-soil model, with rational assumptions for missing soil parameters, to estimate the bump at the end of the bridge. The results show good agreement between the numerical and laboratory models. Important factors influencing bumps at bridge ends are also addressed in light of the model results.

Keywords: bridge approach slabs, bridge bump, hardening-soil, PLAXIS 2D, settlement

Procedia PDF Downloads 351
4461 Non-Linear Assessment of Chromatographic Lipophilicity and Model Ranking of Newly Synthesized Steroid Derivatives

Authors: Milica Karadzic, Lidija Jevric, Sanja Podunavac-Kuzmanovic, Strahinja Kovacevic, Anamarija Mandic, Katarina Penov Gasi, Marija Sakac, Aleksandar Okljesa, Andrea Nikolic

Abstract:

The present paper deals with chromatographic lipophilicity prediction of newly synthesized steroid derivatives. The prediction was achieved using in silico generated molecular descriptors and quantitative structure-retention relationship (QSRR) methodology with an artificial neural network (ANN) approach. Chromatographic lipophilicity of the investigated compounds was expressed as the retention factor value log k. For QSRR modeling, a feedforward back-propagation ANN with a gradient descent learning algorithm was applied. The generated ANN models were then ranked using the novel sum of ranking differences (SRD) method. The aim was to identify the most consistent QSRR model and to reveal similarities or dissimilarities between the models. In this study, SRD was performed with average log k values as the reference. An excellent correlation between experimentally observed and ANN-predicted log k values was obtained, with a correlation coefficient higher than 0.9890. The statistical results show that the established ANN models can be applied for the required purpose. This article is based upon work from COST Action TD1305, supported by COST (European Cooperation in Science and Technology).
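
The SRD method itself is simple to state: rank the objects by a reference column (here, the average log k values) and by each model's predictions, then sum the absolute rank differences; a smaller SRD means the model reproduces the reference ordering more faithfully. The sketch below is a minimal version with illustrative data; real SRD practice adds ties handling and randomization-based validation.

```python
def ranks(values):
    """Rank positions (1 = smallest); ties broken by order of appearance."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    r = [0] * len(values)
    for pos, i in enumerate(order, start=1):
        r[i] = pos
    return r

def srd(model_values, reference_values):
    """Sum of absolute rank differences between model and reference."""
    return sum(abs(a - b) for a, b in zip(ranks(model_values),
                                          ranks(reference_values)))

# Toy example: predicted log k for four compounds by two models.
reference = [1.0, 2.0, 3.0, 4.0]
model_a = [1.1, 2.2, 2.9, 4.3]   # preserves the reference ordering
model_b = [4.0, 1.0, 3.0, 2.0]   # scrambles it
```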

Keywords: artificial neural networks, liquid chromatography, molecular descriptors, steroids, sum of ranking differences

Procedia PDF Downloads 324
4460 Numerical Investigation of Heat Transfer in a Channel with Delta Winglet Vortex Generators at Different Reynolds Numbers

Authors: N. K. Singh

Abstract:

In this study, the augmentation of heat transfer in a rectangular channel with triangular vortex generators is evaluated. The spanwise-averaged Nusselt number, mean temperature, and total heat flux are compared with and without vortex generators in the channel at a blade angle of 30° for Reynolds numbers of 800, 1200, 1600, and 2000. The use of vortex generators considerably increases the spanwise-averaged Nusselt number compared to the case without vortex generators. At a given blade angle, increasing the Reynolds number enhances the overall performance, and the spanwise-averaged Nusselt number at a given location was found to be greater at larger Reynolds numbers. The total heat flux from the bottom wall with vortex generators was found to be greater than without them, and the difference increases with increasing Reynolds number.

Keywords: heat transfer, channel with vortex generators, numerical simulation, effect of Reynolds number on heat transfer

Procedia PDF Downloads 334
4459 Nonlinear Dynamic Analysis of Base-Isolated Structures Using a Mixed Integration Method: Stability Aspects and Computational Efficiency

Authors: Nicolò Vaiana, Filip C. Filippou, Giorgio Serino

Abstract:

In order to reduce numerical computations in the nonlinear dynamic analysis of seismically base-isolated structures, a Mixed Explicit-Implicit time integration Method (MEIM) has been proposed. By adopting the explicit, conditionally stable central difference method to compute the nonlinear response of the base isolation system, and the implicit, unconditionally stable Newmark constant average acceleration method to determine the linear response of the superstructure, the proposed MEIM, which is conditionally stable due to its use of the central difference method, avoids the iterative procedure generally required by conventional monolithic solution approaches within each time step of the analysis. The main aim of this paper is to investigate the stability and computational efficiency of the MEIM when employed to perform nonlinear time history analysis of base-isolated structures with sliding bearings. In this case, the critical time step could become smaller than the one required to define the earthquake excitation accurately, because of the very high initial stiffness of such devices. The numerical results obtained from nonlinear dynamic analyses of a base-isolated structure with a friction pendulum bearing system, performed using the proposed MEIM, are compared to those obtained with a conventional monolithic solution approach, i.e., the implicit, unconditionally stable Newmark constant average acceleration method employed in conjunction with the iterative pseudo-force procedure. According to the numerical results, in the presented application the MEIM has no stability problems, since the critical time step is larger than that of the ground acceleration record despite the high initial stiffness of the friction pendulum bearings. In addition, compared to the conventional monolithic solution approach, the proposed algorithm preserves its computational efficiency even when adopted to perform the nonlinear dynamic analysis with a smaller time step.
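
The conditional stability at the heart of the abstract can be demonstrated on a single-degree-of-freedom oscillator: the central difference scheme for m·x'' + k·x = 0 is stable only for Δt < 2/ωₙ. The sketch below is a textbook illustration of that explicit half of the MEIM, not the paper's base-isolation model; all parameter values are illustrative.

```python
def central_difference(m, k, x0, v0, dt, steps):
    """Explicit central difference for the undamped linear SDOF
    m*x'' + k*x = 0, with the standard starting step for x_{-1}."""
    a0 = -k * x0 / m
    x_prev = x0 - dt * v0 + 0.5 * dt * dt * a0
    x = x0
    history = [x0]
    for _ in range(steps):
        a = -k * x / m
        x_next = 2.0 * x - x_prev + dt * dt * a   # central difference update
        x_prev, x = x, x_next
        history.append(x)
    return history

m, k = 1.0, 100.0           # omega_n = 10 rad/s -> critical dt = 0.2 s
# Below the critical step: the free vibration stays bounded (amplitude ~1).
stable = central_difference(m, k, x0=1.0, v0=0.0, dt=0.01, steps=2000)
peak = max(abs(v) for v in stable)
# Above the critical step: the solution blows up.
unstable = central_difference(m, k, x0=1.0, v0=0.0, dt=0.21, steps=200)
blowup = max(abs(v) for v in unstable)
```

A very stiff friction pendulum bearing raises the system's highest frequency and thus shrinks the critical Δt, which is exactly the concern the paper investigates.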

Keywords: base isolation, computational efficiency, mixed explicit-implicit method, partitioned solution approach, stability

Procedia PDF Downloads 283
4458 Numerical Study of Base Drag Reduction Using Locked Vortex Flow Management Technique for Lower Subsonic Regime

Authors: Kailas S. Jagtap, Karthik Sundarraj, Nirmal Kumar, S. Rajnarasimha, Prakash S. Kulkarni

Abstract:

Turbulent base flows and the drag associated with them have received considerable attention for rockets, missiles, and aircraft. Different techniques are used for base drag reduction, and this paper presents a numerical study of one of them. The base drag or afterbody drag of bluff bodies can be reduced using the locked vortex flow management technique. For bluff bodies of cylindrical shape, the base drag is much larger than for streamlined bodies. For such bodies, using splitter plates, a vortex can be trapped between the base and the plate, which results in smoother flow. The shape of the splitter plate, with round and curved corners, also influences drag reduction. In this paper, a single splitter plate at different positions is compared against the bare bluff body. At a speed of 30 m/s, the base drag can be reduced by about 20% to 30% using a single splitter plate compared to the bare bluff body.

Keywords: base drag, bluff body, splitter plate, vortex flow, ANSYS, fluent

Procedia PDF Downloads 184
4457 Agreement between Basal Metabolic Rate Measured by Bioelectrical Impedance Analysis and Estimated by Prediction Equations in Obese Groups

Authors: Orkide Donma, Mustafa M. Donma

Abstract:

Basal metabolic rate (BMR) is a widely used and accepted measure of energy expenditure. Its principal determinant is body mass; however, it is also correlated with a variety of other factors. The objective of this study is to measure BMR and compare it with values obtained from predictive equations in adults classified according to their body mass index (BMI). 276 adults were included in the scope of this study, and their age, height, and weight were recorded. Five groups were formed based on BMI values. The first group (n = 85) comprised individuals with BMI values between 18.5 and 24.9 kg/m2. Those with BMI values from 25.0 to 29.9 kg/m2 constituted Group 2 (n = 90). Individuals with 30.0-34.9 kg/m2, 35.0-39.9 kg/m2, and > 40.0 kg/m2 were included in Groups 3 (n = 53), 4 (n = 28), and 5 (n = 20), respectively. The most commonly used equations were selected for comparison with the measured BMR values. For this purpose, BMR was calculated with four prediction equations: those introduced by the Food and Agriculture Organization (FAO)/World Health Organization (WHO)/United Nations University (UNU), Harris and Benedict, Owen, and Mifflin. Descriptive statistics, ANOVA, post-hoc Tukey, and Pearson's correlation tests were performed with a statistical program designed for Windows (SPSS, version 16.0); p values smaller than 0.05 were accepted as statistically significant. The means ± SD of Groups 1-5 for measured BMR in kcal were 1440.3 ± 210.0, 1618.8 ± 268.6, 1741.1 ± 345.2, 1853.1 ± 351.2, and 2028.0 ± 412.1, respectively. Comparison of group means showed highly significant differences between Group 1 and each of the remaining four groups, with values increasing from Group 2 to Group 5; however, the differences between Groups 2 and 3, Groups 3 and 4, and Groups 4 and 5 were not statistically significant. These insignificances disappeared in the predictive equations proposed by Harris and Benedict, FAO/WHO/UNU, and Owen; for Mifflin, the insignificance was limited to Groups 4 and 5. Upon evaluation of the correlations between measured BMR and the values estimated from the prediction equations, the lowest correlations were observed among individuals within the normal BMI range, and the highest among individuals with BMI values between 30.0 and 34.9 kg/m2. Correlations between measured BMR and the values calculated by FAO/WHO/UNU and by Owen were identical and the highest. In all groups, the highest correlations were observed between BMR values calculated from the Mifflin and the Harris and Benedict equations, which use age as an additional parameter. In conclusion, the striking resemblance between the FAO/WHO/UNU and Owen equations was noted; however, the mean values obtained from FAO/WHO/UNU were much closer to the measured BMR values, and the highest correlations were found between BMR calculated from FAO/WHO/UNU and measured BMR. These findings suggest that FAO/WHO/UNU is the most reliable equation, which may be used when measured BMR values are not available.
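
The compared equations have simple closed forms. The sketch below implements the men's versions with the commonly published coefficients (weight in kg, height in cm, age in years, output in kcal/day); the coefficients, the FAO/WHO/UNU age band, and the sample subject are taken from general literature as assumptions, not from this paper, so consult the original sources before use.

```python
def bmr_mifflin_male(weight, height, age):
    """Mifflin-St Jeor, men: 10W + 6.25H - 5A + 5."""
    return 10.0 * weight + 6.25 * height - 5.0 * age + 5.0

def bmr_harris_benedict_male(weight, height, age):
    """Original Harris-Benedict, men (commonly cited coefficients)."""
    return 66.473 + 13.7516 * weight + 5.0033 * height - 6.755 * age

def bmr_owen_male(weight):
    """Owen, men: weight-only equation."""
    return 879.0 + 10.2 * weight

def bmr_fao_who_unu_male_30_60(weight):
    """FAO/WHO/UNU (Schofield-type) band for men aged 30-60."""
    return 11.6 * weight + 879.0

# Illustrative subject: 70 kg, 175 cm, 30-year-old male.
mifflin = bmr_mifflin_male(70.0, 175.0, 30.0)
harris = bmr_harris_benedict_male(70.0, 175.0, 30.0)
owen = bmr_owen_male(70.0)
fao = bmr_fao_who_unu_male_30_60(70.0)
```

Note how close the weight-only Owen and FAO/WHO/UNU estimates are for this subject, which mirrors the "resemblance" the abstract reports.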

Keywords: adult, basal metabolic rate, fao/who/unu, obesity, prediction equations

Procedia PDF Downloads 136
4456 A Technical Solution for Micro Mixture with Micro Fluidic Oscillator in Chemistry

Authors: Brahim Dennai, Abdelhak Bentaleb, Rachid Khelfaoui, Asma Abdenbi

Abstract:

The diffusion flux given by Fick's law characterizes the mixing rate. A passive mixing strategy is proposed to enhance the mixing of two fluids through a perturbed jet flow, and a numerical study of passive mixers is presented. This paper is focused on the modeling of a micro-injection system composed of a passive amplifier with no mechanical parts. The micro-system modeling is based on geometrical oscillator forms: an asymmetric micro-oscillator design based on a monostable fluidic amplifier is proposed. The characteristic size of the channels is generally a few hundred microns. The numerical results indicate that the mixing performance can be as high as 99% within a typical mixing chamber with a 0.20 mm inlet diameter and a 2.0 mm nozzle-splitter distance. In addition, the results confirm that self-rotation in the circular mixer significantly enhances the mixing performance. The novel micro-mixing method presented in this study provides a simple solution to mixing problems in microsystems for applications in chemistry.
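
Mixing-performance figures like the 99% quoted above are typically computed from the variance of concentration samples at the outlet: η = 1 − σ/σ_max, where σ_max is the standard deviation of fully segregated 0/1 streams. The sketch below shows this standard measure on illustrative sample values; it is a generic definition, not the paper's post-processing code.

```python
import math

def mixing_efficiency(samples):
    """eta = 1 - sigma/sigma_max for normalized concentration samples
    (0 = pure fluid A, 1 = pure fluid B)."""
    n = len(samples)
    mean = sum(samples) / n
    sigma = math.sqrt(sum((c - mean) ** 2 for c in samples) / n)
    sigma_max = math.sqrt(mean * (1.0 - mean))   # segregated 0/1 limit
    return 1.0 - sigma / sigma_max

perfect = mixing_efficiency([0.5, 0.5, 0.5, 0.5])   # fully mixed -> 1.0
unmixed = mixing_efficiency([0.0, 0.0, 1.0, 1.0])   # segregated -> 0.0
```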

Keywords: micro oscillator, modeling, micro mixture, diffusion, size effect, chemical equation

Procedia PDF Downloads 435
4455 Hansen Solubility Parameter from Surface Measurements

Authors: Neveen AlQasas, Daniel Johnson

Abstract:

Membranes for water treatment are an established technology that attracts great attention due to its simplicity and cost effectiveness. However, membranes in operation suffer from the adverse effect of membrane fouling. Bio-fouling is a phenomenon that occurs at the water-membrane interface; it is a dynamic process initiated by the adsorption of dissolved organic material, including biomacromolecules, on the membrane surface. After initiation, attachment of microorganisms occurs, followed by biofilm growth. The biofilm blocks the pores of the membrane and consequently reduces the water flux. Moreover, the presence of a fouling layer can have a substantial impact on the membrane's separation properties. Understanding the mechanism of the initiation phase of biofouling is key to eliminating biofouling on membrane surfaces. The adhesion and attachment of different fouling materials are affected by the surface properties of the membrane materials. Therefore, the surface properties of different polymeric materials have been studied in terms of their surface energies and Hansen solubility parameters (HSP). The distance between the combined HSP parameters of two materials (the HSP distance) allows prediction of their affinity to each other. The possibility of measuring the HSP of different polymer films via surface measurements, such as contact angle, has been thoroughly investigated. Knowing the HSP of a membrane material and the HSP of a specific foulant facilitates estimation of the HSP distance between the two, and therefore the strength of attachment to the surface. Contact angle measurements with fourteen different solvents on five different polymeric films were carried out using the sessile drop method. Solvents were ranked as good or bad using different ranking methods, and the ranking was used to calculate the HSP of each polymeric film. The results clearly indicate the absence of a direct relation between the contact angle values of each film and the HSP distance between each polymer film and the solvents used. Therefore, estimating HSP via contact angle alone is not sufficient. However, it was found that if the surface tensions and viscosities of the solvents are taken into account in the analysis of the contact angle values, a prediction of the HSP from contact angle measurements is possible. This was carried out by training a neural network model. The trained neural network model has three inputs: contact angle value, surface tension, and viscosity of the solvent used. The model is able to predict the HSP distance between the solvent and the tested polymer (material). The HSP distance prediction is further used to estimate the total and individual HSP parameters of each tested material. The results showed an accuracy of about 90% for all five studied films.
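
The HSP distance used throughout the abstract has a standard definition: Ra² = 4(δD₁−δD₂)² + (δP₁−δP₂)² + (δH₁−δH₂)², with the conventional factor 4 on the dispersion term. The sketch below implements it; the example parameter values are illustrative, not the measured HSPs of any membrane or solvent in the paper.

```python
import math

def hsp_distance(hsp1, hsp2):
    """Hansen solubility parameter distance Ra between two materials,
    each given as (dispersion, polar, hydrogen-bonding) in MPa^0.5."""
    dd = hsp1[0] - hsp2[0]
    dp = hsp1[1] - hsp2[1]
    dh = hsp1[2] - hsp2[2]
    return math.sqrt(4.0 * dd * dd + dp * dp + dh * dh)

polymer = (18.0, 10.0, 5.0)   # illustrative (dD, dP, dH) of a film
solvent = (16.0, 8.0, 6.0)    # illustrative solvent parameters
ra = hsp_distance(polymer, solvent)
```

A smaller Ra means greater mutual affinity, which is why the distance between a membrane's HSP and a foulant's HSP is used to gauge attachment strength.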

Keywords: surface characterization, hansen solubility parameter estimation, contact angle measurements, artificial neural network model, surface measurements

Procedia PDF Downloads 97
4454 Developing a Modified Version of KIVA-3V, Enabling Gaseous Injections

Authors: Hossein Keshtkar, Ali Nasiri Toosi

Abstract:

With growing concerns about the environmental pollution from gasoline and the need for a more widely available fuel source, natural gas is finding its way into automotive engines. But before this can happen industrially, simulations of natural gas direct injection need to take place to maximize and optimize power output. KIVA is one of the most powerful tools for engine simulation. Widely accepted by both researchers and industry, KIVA, an open-source code, offers great in-depth simulation and analysis. KIVA can compute the complex phenomena that occur inside the chamber before, during, and after ignition. One downside of KIVA is its inability to simulate gaseous injections, making it useful only for liquid fuels. In this study, we developed a numerical code to enable the simulation of gaseous injection within KIVA. By introducing our code as a subroutine, we modified the original KIVA program. To verify the correct treatment of gaseous fuel injection in our modified KIVA code, we simulated two different cases and compared them with their experimental data. We conclude that the simulation results of our modified version of KIVA come very close to those measured experimentally.

Keywords: gaseous injections, KIVA, natural gas direct injection, numerical code, simulation

Procedia PDF Downloads 287
4453 Transformation of Hexagonal Cells into Auxetic in Core Honeycomb Furniture Panels

Authors: Jerzy Smardzewski

Abstract:

Structures with negative Poisson's ratios are called auxetic. They are characterized by better mechanical properties than conventional structures, especially shear strength, the ability to absorb energy, and increased strength during bending, especially in sandwich panels. The commonly used paper cores of cellular boards are made of hexagonal cells. With isotropic facings, these cells provide isotropic properties for the entire furniture board. Shelves made of such panels, with a thickness similar to standard chipboard, do not provide adequate stiffness and strength for furniture. However, it is possible to transform the hexagonal cells into polyhedral auxetic cells that improve the mechanical properties of the core. The work aimed to transform the hexagonal cells of the paper core into auxetic cells and determine their basic mechanical properties. Using numerical methods, the most favorable cell proportions, distinguished by the lowest Poisson's ratio and the highest modulus of linear elasticity, were designed. Standard cores commonly used to produce 34 mm thick furniture boards were used for the tests. Poisson's ratios, bending strength, and moduli of linear elasticity were determined for these cores and boards. The cells were then transformed into auxetic structures, and analogous cellular boards were made, for which the same mechanical properties were determined. The results of numerical simulations, in which the variable parameters were the dimensions of the cell walls, the wall inclination angles, and the relative cell density, are presented later in the paper. Experimental tests and numerical simulations showed the beneficial effect of auxeticization on the mechanical quality of furniture panels and allowed selection of the optimal shape of the auxetic core cells.
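
The defining quantity here is the sign of Poisson's ratio, ν = −ε_transverse / ε_axial: a conventional honeycomb contracts laterally when stretched (ν > 0), while an auxetic one expands (ν < 0). The sketch below encodes just this sign convention; the strain values are illustrative, not measurements from the paper.

```python
def poissons_ratio(eps_axial, eps_transverse):
    """nu = -eps_transverse / eps_axial for a uniaxial test."""
    return -eps_transverse / eps_axial

# Conventional cell: stretched axially (+1%), contracts laterally (-0.3%).
conventional = poissons_ratio(0.01, -0.003)
# Auxetic cell: stretched axially (+1%), expands laterally (+0.4%).
auxetic = poissons_ratio(0.01, 0.004)
```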

Keywords: auxetics, honeycomb, panels, simulation, experiment

Procedia PDF Downloads 17
4452 Controlling the Fluid Flow in Hydrogen Fuel Cells through Material Porosity Designs

Authors: Jamal Hussain Al-Smail

Abstract:

Hydrogen fuel cells (HFCs) are environmentally friendly energy-converter devices that convert the chemical energy of the reactants (oxygen and hydrogen) into electricity through electrochemical reactions. The electricity production of an HFC depends strongly on the oxygen distribution in the cathode gas diffusion layer (GDL). With a constant porosity of the GDL, the electrochemical reaction can vary greatly, reducing the cell's productivity and stability. Our findings provide a methodology for finding porosity designs of the diffusion layer that improve the oxygen distribution and result in a stable oxygen-hydrogen reaction. We first introduce a mathematical model involving the mass and momentum transport equations, in which a porosity function of the GDL is incorporated as a control for the fluid flow. We then derive numerical methods for solving the mathematical model. Finally, we present numerical results showing how to design the GDL porosity to obtain a uniform oxygen distribution.
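
To give a flavor of how a porosity profile shapes the concentration field, the sketch below solves a heavily simplified 1-D steady diffusion problem, d/dx(D(x) dc/dx) = 0, where the effective diffusivity D(x) stands in for the porosity design, using Gauss-Seidel iteration on a uniform grid. This is a toy reduction of the paper's mass-momentum model; all numbers and the single-equation setup are illustrative assumptions.

```python
def solve_diffusion(face_D, c_left, c_right, iters=5000):
    """Steady 1-D diffusion with diffusivity face_D[i] on the face between
    nodes i and i+1, fixed concentrations at both ends, Gauss-Seidel sweep."""
    n_nodes = len(face_D) + 1
    # Linear initial guess between the boundary values.
    c = [c_left + (c_right - c_left) * i / (n_nodes - 1)
         for i in range(n_nodes)]
    for _ in range(iters):
        for i in range(1, n_nodes - 1):
            dl, dr = face_D[i - 1], face_D[i]
            # Zero net flux at node i: dl*(c[i-1]-c[i]) + dr*(c[i+1]-c[i]) = 0
            c[i] = (dl * c[i - 1] + dr * c[i + 1]) / (dl + dr)
    return c

# Uniform diffusivity -> linear profile (midpoint concentration 0.5).
uniform = solve_diffusion([1.0] * 10, c_left=1.0, c_right=0.0)
# Graded design: low diffusivity near the inlet, high near the outlet.
# The constant flux concentrates the drop in the low-D half.
graded = solve_diffusion([0.5] * 5 + [2.0] * 5, c_left=1.0, c_right=0.0)
```

Changing D(x) redistributes where the concentration drop occurs, which is the one-dimensional analogue of tailoring GDL porosity to flatten the oxygen distribution.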

Keywords: fuel cells, material porosity design, mathematical modeling, porous media

Procedia PDF Downloads 157
4451 Experimental and Numerical Investigation of “Machining Induced Residual Stresses” during Orthogonal Machining of Alloy Steel AISI 4340

Authors: Theena Thayalan, K. N. Ramesh Babu

Abstract:

Machining-induced residual stress (RS) is one of the most important surface integrity parameters characterizing the near-surface layer of a mechanical component, and it plays a crucial role in controlling performance, especially fatigue life. Since experimental determination of RS is expensive and time-consuming, it would be of great benefit if it could be predicted; it would then be possible to select the cutting parameters required to produce a favorable RS profile. In the present study, an effort has been made to develop a two-dimensional finite element model (FEM) to simulate the orthogonal cutting process and to predict surface and sub-surface RS using the commercial FEA software DEFORM-2D. The developed finite element model was validated through experimental investigation of RS. In the experiments, orthogonal cutting tests were carried out on AISI 4340 by varying the cutting speed (Vc) and uncut chip thickness (f) at three levels, and the surface and sub-surface RS were measured using XRD and electropolishing techniques. The comparison showed that the RS obtained from the developed numerical model is in reasonable agreement with the experimental data.
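On the measurement side of such a validation, residual stress is commonly extracted from XRD data with the sin²ψ method: the lattice spacing d varies linearly with sin²ψ, and the slope is proportional to the stress. The sketch below (Python, with synthetic data and illustrative elastic constants for steel, not the paper's measurements) recovers a known stress from that slope:

```python
E = 210e9      # Young's modulus of steel, Pa (illustrative)
NU = 0.29      # Poisson's ratio (illustrative)
D0 = 1.1702    # strain-free lattice spacing, angstrom (illustrative)

def sin2psi_stress(sin2psi, d_spacing):
    """Least-squares slope of d vs sin^2(psi), converted to stress via
    sigma = slope * E / ((1 + nu) * d0)."""
    n = len(sin2psi)
    mx = sum(sin2psi) / n
    my = sum(d_spacing) / n
    slope = sum((x - mx) * (y - my) for x, y in zip(sin2psi, d_spacing)) \
        / sum((x - mx) ** 2 for x in sin2psi)
    return slope * E / ((1.0 + NU) * D0)

# synthetic tilt series with a known -400 MPa (compressive) stress
sigma_true = -400e6
xs = [0.0, 0.1, 0.2, 0.3, 0.4, 0.5]
ds = [D0 + D0 * (1 + NU) / E * sigma_true * x for x in xs]
sigma_est = sin2psi_stress(xs, ds)
```

In practice the measured d values carry noise, so several ψ tilts are fitted rather than two, exactly as in the least-squares form above.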

Keywords: FEM, machining, residual stress, XRD

Procedia PDF Downloads 349
4450 Profiling Risky Code Using Machine Learning

Authors: Zunaira Zaman, David Bohannon

Abstract:

This study explores the application of machine learning (ML) for detecting security vulnerabilities in source code. The research aims to assist organizations with large application portfolios and limited security testing capabilities in prioritizing security activities. ML-based approaches offer benefits such as increased confidence scores, tuning of false positives and negatives, and automated feedback. The initial approach, using natural language processing techniques to extract features, achieved 86% accuracy during the training phase but suffered from overfitting and performed poorly on unseen datasets during testing. To address these issues, the study proposes using the abstract syntax tree (AST) for Java and C++ codebases to capture code semantics and structure and to generate path-context representations for each function. The Code2Vec model architecture is used to learn distributed representations of source code snippets for training a machine-learning classifier for vulnerability prediction. The study evaluates the performance of the proposed methodology on two datasets and compares the results with existing approaches. The Devign dataset yielded 60% accuracy in predicting vulnerable code snippets and helped resist overfitting, while the Juliet Test Suite allowed prediction of specific vulnerabilities such as OS command injection, cryptographic, and cross-site scripting vulnerabilities. The Code2Vec model achieved 75% accuracy and a 98% recall rate in predicting OS command injection vulnerabilities. The study concludes that even partial AST representations of source code can be useful for vulnerability prediction, and the approach has the potential for automated intelligent analysis of source code, including vulnerability prediction on unseen code. State-of-the-art models using natural language processing techniques and CNN models with ensemble modelling techniques did not generalize well on unseen data and faced overfitting issues.
However, predicting vulnerabilities in source code with machine learning poses challenges such as the high dimensionality and complexity of source code, imbalanced datasets, and identifying specific types of vulnerabilities. Future work will address these challenges and expand the scope of the research.
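The notion of an AST path-context can be sketched in a few lines. The example below uses Python's own ast module (rather than a Java/C++ parser, so it is a structural illustration, not the study's pipeline): each extracted context is a (leaf-token, syntactic path, leaf-token) triple of the kind Code2Vec consumes:

```python
import ast
from itertools import combinations

def extract_path_contexts(source):
    """Extract simplified Code2Vec-style (leaf, AST-path, leaf) triples."""
    leaves = []  # (token, root-to-leaf path of node-type names)

    def walk(node, path):
        label = type(node).__name__
        if isinstance(node, ast.Name):
            leaves.append((node.id, path + [label]))
        elif isinstance(node, ast.arg):
            leaves.append((node.arg, path + [label]))
        elif isinstance(node, ast.Constant):
            leaves.append((repr(node.value), path + [label]))
        for child in ast.iter_child_nodes(node):
            walk(child, path + [label])

    walk(ast.parse(source), [])

    contexts = []
    for (tok_a, pa), (tok_b, pb) in combinations(leaves, 2):
        i = 0  # length of the shared root prefix
        while i < min(len(pa), len(pb)) and pa[i] == pb[i]:
            i += 1
        # climb from leaf a to the lowest common ancestor, then down to leaf b
        path = list(reversed(pa[i:])) + [pa[i - 1]] + pb[i:]
        contexts.append((tok_a, "^".join(path), tok_b))
    return contexts

contexts = extract_path_contexts("def f(x):\n    return x + 1")
```

For `def f(x): return x + 1`, one of the triples is ('x', 'Name^BinOp^Constant', '1'): the path from the operand up through the addition node and down to the literal.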

Keywords: code embeddings, neural networks, natural language processing, OS command injection, software security, code properties

Procedia PDF Downloads 111
4449 Using a Wearable Device with a Neural Network to Classify the Severity of Sleep Disorders

Authors: Ru-Yin Yang, Chi Wu, Cheng-Yu Tsai, Yin-Tzu Lin, Wen-Te Liu

Abstract:

Background: Sleep-disordered breathing (SDB) is a condition characterized by recurrent episodes of airway obstruction leading to intermittent hypoxia and sleep fragmentation. However, the procedures for examining SDB severity remain complicated and costly. Objective: The objective of this study is to establish a simplified examination method for SDB using a respiratory impedance pattern sensor combined with signal processing and a machine learning model. Methodologies: We recorded heart rate variability from the electrocardiogram and the respiratory pattern from impedance. After polysomnography (PSG) was performed and SDB diagnosed via the apnea-hypopnea index (AHI), we calculated the episodes with absence of flow and the arousal index (AI) from the device recordings. Subjects were divided into training and testing groups. A neural network was used to build a prediction model that classifies SDB severity from the AI, episodes, and body profiles. Performance was evaluated by classification on the testing group compared with PSG. Results: We enrolled 66 subjects (male/female: 37/29; age: 49.9±13.2) diagnosed with SDB at a sleep center in Taipei City, Taiwan, from 2015 to 2016. The accuracy computed from the confusion matrix on the test group by the neural network is 71.94%. Conclusion: Based on these models, we established a prediction model for SDB by means of a wearable sensor. With more cases and further training, this system may be used to rapidly and automatically screen for SDB risk in the future.
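The reported accuracy is read off the diagonal of the confusion matrix. A minimal sketch (Python, with made-up labels, not the study's data):

```python
from collections import Counter

LABELS = ["normal", "mild", "moderate", "severe"]  # AHI-based severity classes

def confusion_matrix(y_true, y_pred, labels=LABELS):
    """Rows = PSG (true) severity, columns = model prediction."""
    counts = Counter(zip(y_true, y_pred))
    return [[counts[(t, p)] for p in labels] for t in labels]

def accuracy(matrix):
    """Fraction of subjects on the diagonal (correctly classified)."""
    correct = sum(matrix[i][i] for i in range(len(matrix)))
    total = sum(sum(row) for row in matrix)
    return correct / total

# toy example: 3 of 4 test subjects classified correctly
y_true = ["mild", "severe", "mild", "normal"]
y_pred = ["mild", "severe", "normal", "normal"]
acc = accuracy(confusion_matrix(y_true, y_pred))  # 0.75
```

The off-diagonal cells also show *which* severities are confused, which matters clinically when a severe case is predicted as mild.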

Keywords: sleep-disordered breathing, apnea-hypopnea index, body parameters, neural network

Procedia PDF Downloads 155
4448 Kinematic Optimization of Energy Extraction Performances for Flapping Airfoil by Using Radial Basis Function Method and Genetic Algorithm

Authors: M. Maatar, M. Mekadem, M. Medale, B. Hadjed, B. Imine

Abstract:

In this paper, numerical simulations have been carried out to study the performance of a flapping wing used as an energy collector. Metamodeling and genetic algorithms are used to find the optimal configuration, improving the power coefficient and/or the efficiency. Radial basis functions (RBF) and genetic algorithms have been applied to solve this problem. Three optimization factors are controlled, namely the dimensionless heave amplitude h₀, the pitch amplitude θ₀, and the flapping frequency f. ANSYS FLUENT has been used to solve the governing equations at a Reynolds number of 1100, while the heave and pitch motion of a NACA0015 airfoil has been imposed through a user-defined function (UDF). The results reveal an average power coefficient and efficiency of 0.78 and 0.338 with an inexpensive low-fidelity model and a total relative error of 4.1% versus the simulation. The performance of the simulated RBF-NSGA-II optimum is improved by 1.2% compared with the validated model.
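The metamodeling step can be sketched as plain Gaussian RBF interpolation: fit weights on a handful of CFD samples, then evaluate the cheap surrogate anywhere in the (h₀, θ₀, f) design space. The Python sketch below uses made-up sample points and power-coefficient values, not the paper's CFD data:

```python
import math

def rbf_fit(points, values, eps=1.0):
    """Solve the Gaussian-RBF interpolation system phi(|xi - xj|) w = y
    by Gaussian elimination with partial pivoting (fine for small n)."""
    n = len(points)
    phi = lambda r: math.exp(-(eps * r) ** 2)
    dist = lambda a, b: math.sqrt(sum((u - v) ** 2 for u, v in zip(a, b)))
    # augmented matrix [Phi | y]
    A = [[phi(dist(points[i], points[j])) for j in range(n)] + [values[i]]
         for i in range(n)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        for row in range(col + 1, n):
            f = A[row][col] / A[col][col]
            for c in range(col, n + 1):
                A[row][c] -= f * A[col][c]
    w = [0.0] * n
    for row in range(n - 1, -1, -1):
        w[row] = (A[row][n]
                  - sum(A[row][c] * w[c] for c in range(row + 1, n))) / A[row][row]
    return w, phi, dist

def rbf_eval(x, points, w, phi, dist):
    return sum(wi * phi(dist(x, p)) for wi, p in zip(w, points))

# hypothetical (h0, normalized theta0) samples and power coefficients
samples = [(0.5, 0.60), (1.0, 0.75), (1.5, 0.90)]
cp = [0.55, 0.78, 0.62]
w, phi, dist = rbf_fit(samples, cp)
```

An optimizer such as NSGA-II then queries rbf_eval instead of the CFD solver; only the final candidate designs need to be re-simulated at full fidelity.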

Keywords: numerical simulation, flapping wing, energy extraction, power coefficient, efficiency, RBF, NSGA-II

Procedia PDF Downloads 51
4447 Performance Monitoring and Environmental Impact Analysis of a Photovoltaic Power Plant: A Numerical Modeling Approach

Authors: Zahzouh Zoubir

Abstract:

The widespread adoption of photovoltaic panel systems for electricity generation is a prominent global trend. Algeria, demonstrating a steadfast commitment to strategic development and innovative solar energy projects, is emerging as a pioneering force in the field. Heat and radiation, the fundamental inputs of any solar system, are currently the subject of comprehensive studies aiming to discern their true impact on the key components of photovoltaic systems. This effort is particularly pertinent because solar module performance is rated exclusively under carefully defined Standard Test Conditions (STC). When deployed outdoors, however, solar modules exhibit efficiencies different from those observed under STC owing to diverse environmental factors, which introduces uncertainty in performance determination whenever operating conditions depart from the test conditions. This article centers on performance monitoring of an Algerian photovoltaic project, the 15 megawatt Oued El Keberite Power (OKP) plant located in the town of Souk Ahras in eastern Algeria. The study describes the behavior of a subfield of this facility throughout the year, covering a range of conditions beyond the STC framework. To capture the factors governing panel efficiency, the study draws on an authentic technical sheet from the measurement station of the OKP photovoltaic plant. Numerical modeling and simulation of a subfield of the photovoltaic station were conducted using MATLAB Simulink. The findings underscore how radiation intensity and temperature, whether low or high, affect the short-circuit current, open-circuit voltage, fill factor, and overall efficiency of the photovoltaic system.
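A standard way to reproduce such behavior outside Simulink is the single-diode model, in which the photocurrent scales with irradiance and the diode term with temperature. The Python sketch below uses illustrative module parameters (Isc, I0, Rs, Rsh, ideality factor), not the OKP plant's data sheet:

```python
import math

K, Q = 1.380649e-23, 1.602176634e-19  # Boltzmann constant, electron charge

def pv_current(V, G=1000.0, T=298.15, Isc=8.0, ki=0.003, I0=1e-9,
               Rs=0.2, Rsh=300.0, n=1.3, Ns=60):
    """Single-diode model current for a 60-cell module (illustrative
    parameters), solved by damped fixed-point iteration:
    I = Iph - I0*(exp((V + I*Rs)/Vt) - 1) - (V + I*Rs)/Rsh."""
    Vt = n * Ns * K * T / Q                       # modified thermal voltage
    Iph = (G / 1000.0) * (Isc + ki * (T - 298.15))  # photocurrent vs irradiance
    I = Iph
    for _ in range(200):
        I_new = Iph - I0 * (math.exp((V + I * Rs) / Vt) - 1.0) \
            - (V + I * Rs) / Rsh
        I = 0.5 * I + 0.5 * I_new  # damped update for stable convergence
    return I
```

Sweeping V from 0 to open circuit while varying G and T reproduces the qualitative trends the abstract reports: short-circuit current scales almost linearly with irradiance, while temperature mainly shifts the open-circuit voltage and the fill factor.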

Keywords: performance monitoring, photovoltaic system, numerical modeling, radiation intensity

Procedia PDF Downloads 73
4446 Effect of Elastic Modulus Anisotropy on Foundation Behavior Reinforced with Geogrid in Sandy Soil

Authors: Reza Ziaie Moayed, Javad Shamsi Soosahab

Abstract:

The bearing capacity of shallow foundations is one of the classic subjects in geotechnical engineering. Soil improvement with geosynthetic reinforcement is a modern method used in many projects to increase the bearing capacity of foundations. In this paper, a numerical study is carried out to investigate the effect of geogrid reinforcement on the behavior of a shallow foundation resting on anisotropic sand, using finite element limit analysis software. The effect of the ratio of horizontal to vertical elastic modulus (EH/EV) on the bearing capacity of the foundation is investigated. The results illustrate that in sandy soils the elastic modulus anisotropy ratio (EH/EV) has a notable effect on the bearing capacity of shallow foundations. Based on the results of this study, it was also concluded that geogrids can be used as soil reinforcement elements to improve the bearing capacity of sandy soils and to reduce settlement considerably.

Keywords: shallow foundations, bearing capacity, numerical study, soil anisotropy, geogrid

Procedia PDF Downloads 155
4445 Numerical Investigation for External Strengthening of Dapped-End Beams

Authors: A. Abdel-Moniem, H. Madkour, K. Farah, A. Abdullah

Abstract:

The reduction in depth of dapped-end beams near the supports tends to produce stress concentrations and hence shear cracks if adequate reinforcement detailing is not provided. This study numerically investigates the efficiency of applying different external strengthening techniques to the dapped ends of such beams. A two-dimensional finite element model was built to predict the structural behavior of dapped ends strengthened with different techniques. The techniques included external bonding of a steel angle at the re-entrant corner, unbonded bolt anchoring, external steel plate jacketing, exterior carbon fiber wrapping and/or stripping, and external inclined steel plates. The FE analysis results are presented in terms of ultimate load capacities, load-deflection curves, and crack patterns at failure. The results showed that the FE model was comparable, at various stages, to the available test data. Moreover, it captured the failure progress with acceptable accuracy, which is very difficult to do in a laboratory test.

Keywords: dapped-end beams, finite element, shear failure, strengthening techniques, reinforced concrete, numerical investigation

Procedia PDF Downloads 120
4444 Numerical Simulation of Flexural Strength of Steel Fiber Reinforced High Volume Fly Ash Concrete by Finite Element Analysis

Authors: Mahzabin Afroz, Indubhushan Patnaikuni, Srikanth Venkatesan

Abstract:

It is well known that fly ash can be used in high volumes as a partial replacement of cement, with beneficial effects on concrete. High volume fly ash (HVFA) concrete is currently emerging as a popular option for strengthening with fibers. Although studies have supported the use of fibers with fly ash, a unified model, incorporated into a finite element software package to estimate maximum flexural loads, still needs to be developed. In this study, nonlinear finite element analysis of steel fiber reinforced high-strength HVFA concrete beams under static loading was conducted to investigate their failure modes in terms of ultimate load. First, the mechanical properties of high-strength HVFA concrete were investigated experimentally and validated against a numerical model developed in ANSYS 16.2 with appropriate element size and mesh. To model the fibers within the concrete, a three-dimensional random fiber distribution was simulated using a spherical coordinate system. Three types of high-strength HVFA concrete beams, reinforced with 0.5, 1, and 1.5% volume fractions of steel fibers with specific mechanical and physical properties, were analyzed. The results reveal that the nonlinear finite element analysis with three-dimensional random fiber orientation agrees fairly well with the experimental results for flexural strength, load deflection, and crack propagation mechanism. With this improved model, it is possible to determine the flexural behavior of different types and proportions of steel fiber reinforced HVFA concrete beams under static load. The originality of this paper thus lies in predicting the flexural properties of steel fiber reinforced high-strength HVFA concrete through numerical simulation.
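The random fiber orientation step can be sketched directly: sampling the azimuth φ uniformly and cos θ (rather than θ itself) uniformly gives directions distributed uniformly over the sphere, avoiding artificial clustering at the poles. A minimal Python sketch (illustrative, not the paper's ANSYS implementation):

```python
import math
import random

def random_fiber_directions(n, seed=42):
    """Sample n fiber unit vectors uniformly over the sphere via
    spherical coordinates: phi ~ U(0, 2*pi), cos(theta) ~ U(-1, 1)."""
    rng = random.Random(seed)
    dirs = []
    for _ in range(n):
        phi = rng.uniform(0.0, 2.0 * math.pi)
        cos_t = rng.uniform(-1.0, 1.0)
        sin_t = math.sqrt(1.0 - cos_t * cos_t)
        dirs.append((sin_t * math.cos(phi), sin_t * math.sin(phi), cos_t))
    return dirs
```

Each direction, combined with a random midpoint inside the beam volume and the fiber length, defines one embedded fiber element; the uniform orientation distribution is what makes the simulated composite statistically isotropic.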

Keywords: finite element analysis, high volume fly ash, steel fibers, spherical coordinate system

Procedia PDF Downloads 140
4443 Effect of Rotation Rate on Chemical Segregation during Phase Change

Authors: Nouri Sabrina, Benzeghiba Mohamed, Ghezal Abderrahmane

Abstract:

A numerical parametric study is conducted on the effects of ampoule rotation on the flow and dopant segregation in vertical Bridgman (VB) crystal growth. Calculations were performed in the unsteady state. The extended Darcy model, which includes the time derivative and Coriolis terms, is employed in the momentum equation. It was found that convection and dopant segregation can be affected significantly by ampoule rotation, and the effect is similar to that of an axial magnetic field. Ampoule rotation decreases the intensity of convection and stretches the flow cell axially. When convection is weak, the flow can be suppressed almost completely by moderate ampoule rotation, and dopant segregation becomes diffusion-controlled. For stronger convection, the flow cell elongated by ampoule rotation may bring dopant mixing into the bulk melt, reducing axial segregation at the early stage of growth. However, if the cellular flow cannot be suppressed completely, ampoule rotation may induce larger radial segregation due to poor mixing.

Keywords: numerical simulation, heat and mass transfer, vertical solidification, chemical segregation

Procedia PDF Downloads 353
4442 Numerical Evaluation of the Flow Behavior inside the Scrubber Unit with Engine Exhaust Pipe

Authors: Kumaresh Selvakumar, Man Young Kim

Abstract:

A wet scrubber is an air pollution control device that removes particulate matter and acid gases from waste gas streams such as marine engine exhaust. Including the actual flue-gas composition in a CFD simulation complicates the problem considerably because of the emission species involved. For this reason, the scrubber system in this paper is treated with a simplified approach: the flow is modeled as hot air with water droplet injections in order to evaluate the flow behavior inside the system. Since a wet scrubber can operate over a wide range of mixture compositions, the present model with this design approach does not deviate from the actual behavior of the system. The scrubber geometry includes the engine exhaust pipe, so that the influence of the exhaust pipe characteristics on the flow inside the scrubber can be assessed. The flow is characterized by the thermodynamic variables, temperature and pressure, together with the flow velocity. In this work, numerical analyses of the fluid flow in the scrubber system were conducted using CFD techniques.

Keywords: wet scrubber, water droplet injections, thermodynamic variables, CFD technique

Procedia PDF Downloads 347
4441 Numerical Modeling and Prediction of Nanoscale Transport Phenomena in Vertically Aligned Carbon Nanotube Catalyst Layers by the Lattice Boltzmann Simulation

Authors: Seungho Shin, Keunwoo Choi, Ali Akbar, Sukkee Um

Abstract:

In this study, the nanoscale transport properties and catalyst utilization of vertically aligned carbon nanotube (VACNT) catalyst layers are computationally predicted by three-dimensional lattice Boltzmann simulation based on a quasi-random nanostructural model, with a view to improving fuel cell catalyst performance. A series of catalyst layers is randomly generated with statistical significance at the 95% confidence level to reflect the heterogeneity of the catalyst layer nanostructures. The nanoscale gas transport phenomena inside the catalyst layers are simulated by the D3Q19 (three-dimensional, 19-velocity) lattice Boltzmann method, and the corresponding mass transport characteristics are mathematically modeled in terms of structural properties. Considering the nanoscale reactant transport phenomena, a transport-based effective catalyst utilization factor is defined and statistically analyzed to determine the influence of structure-transport interaction on catalyst utilization. The tortuosity of the reactant mass transport path in VACNT catalyst layers is calculated directly from the streaklines. Subsequently, the corresponding effective mass diffusion coefficient is statistically predicted by applying the pre-estimated tortuosity factors to the Knudsen diffusion coefficient in the VACNT catalyst layers. The statistical estimates clearly indicate that the morphological structure of VACNT catalyst layers reduces the tortuosity of the reactant mass transport path compared with conventional catalyst layers and significantly improves the resulting effective mass diffusion coefficient. Furthermore, catalyst utilization of the VACNT catalyst layer is substantially improved by the enhanced mass diffusion and electric current paths, despite the relatively poor interconnection of the ion transport paths.
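The last step of that estimate, applying porosity and tortuosity corrections to the Knudsen diffusion coefficient, can be written out in a few lines. The Python sketch below uses the kinetic-theory expression D_Kn = (d_p/3)·sqrt(8RT/(πM)) with illustrative pore size and tortuosity values, not the paper's statistically estimated ones:

```python
import math

R = 8.314  # universal gas constant, J/(mol K)

def knudsen_diffusivity(pore_diameter, T, molar_mass):
    """Kinetic-theory Knudsen coefficient: D_Kn = (d_p/3) * sqrt(8RT/(pi*M))."""
    mean_speed = math.sqrt(8.0 * R * T / (math.pi * molar_mass))
    return pore_diameter / 3.0 * mean_speed

def effective_diffusivity(d_kn, porosity, tortuosity):
    """Porosity/tortuosity-corrected effective diffusivity: D_eff = (eps/tau)*D_Kn."""
    return porosity / tortuosity * d_kn

# O2 in ~60 nm pores at fuel-cell operating temperature (illustrative numbers)
d_kn = knudsen_diffusivity(60e-9, 353.0, 0.032)
d_eff_vacnt = effective_diffusivity(d_kn, 0.6, 1.3)         # aligned CNTs: low tortuosity
d_eff_conventional = effective_diffusivity(d_kn, 0.6, 3.0)  # random pore network
```

At equal porosity, the straighter transport paths of the aligned structure (lower τ in this sketch) more than double the effective diffusivity, which is the trend the statistical analysis reports.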

Keywords: Lattice Boltzmann method, nano transport phenomena, polymer electrolyte fuel cells, vertically aligned carbon nanotube

Procedia PDF Downloads 202
4440 Predicting Personality and Psychological Distress Using Natural Language Processing

Authors: Jihee Jang, Seowon Yoon, Gaeun Son, Minjung Kang, Joon Yeon Choeh, Kee-Hong Choi

Abstract:

Background: Self-report multiple-choice questionnaires have been widely utilized to quantitatively measure personality and psychological constructs. Despite several strengths (e.g., brevity and utility), self-report multiple-choice questionnaires have considerable inherent limitations. With the rise of machine learning (ML) and natural language processing (NLP), researchers in psychology are widely adopting NLP to assess psychological constructs and predict human behaviors. However, connections between the work performed in computer science and that in psychology remain weak, owing to small datasets and unvalidated modeling practices. Aims: The current article introduces the study method and procedure of phase II, which builds on the interview questions for the five-factor model (FFM) of personality developed in phase I. This study aims to develop semi-structured interview and open-ended questions for FFM-based personality assessment, designed with experts in clinical and personality psychology (phase I), and to collect personality-related text data using the interview questions together with self-report measures of personality and psychological distress (phase II). The purpose of the study includes examining the relationship between the natural language data obtained from the interview questions, the measured FFM personality constructs, and psychological distress, in order to demonstrate the validity of natural language-based personality prediction. Methods: The phase I (pilot) study was conducted with fifty-nine native Korean adults to acquire personality-related text data from the semi-structured interview and open-ended questions based on the FFM of personality. The interview questions were revised and finalized with feedback from an external expert committee consisting of personality and clinical psychologists.
Based on the established interview questions, a total of 425 Korean adults were recruited through convenience sampling via an online survey. The text data collected from the interviews were analyzed using natural language processing. The results of the online survey, including demographic data and depression, anxiety, and personality inventories, were analyzed together in the model to predict individuals' FFM personality traits and level of psychological distress (phase II).

Keywords: personality prediction, psychological distress prediction, natural language processing, machine learning, the five-factor model of personality

Procedia PDF Downloads 82