Search results for: computational geometry
474 Variant Selection and Pre-transformation Phase Reconstruction for Deformation-Induced Transformation in AISI 304 Austenitic Stainless Steel
Authors: Manendra Singh Parihar, Sandip Ghosh Chowdhury
Abstract:
Austenitic stainless steels are widely used and give a good combination of properties. When this steel is plastically deformed, a phase transformation of the metastable face-centred cubic (FCC) austenite to the stable body-centred cubic (α’) or to the hexagonal close-packed (ε) martensite may occur, leading to an enhancement in mechanical properties such as strength. The work was based on variant selection and the corresponding texture analysis for the strain-induced martensitic transformation during deformation of the parent austenite FCC phase to form the product HCP and BCC martensite phases separately, obeying their respective orientation relationships. The automated reconstruction of the parent phase orientation from the EBSD data of the product phase orientation was carried out using MATLAB and the TSL-OIM software. The method of triplets was used, which involves forming a triplet of neighboring product grains having a common variant and linking them using a misorientation-based criterion. This led to the proper reconstruction of the pre-transformation phase orientation data and thus to its microstructure and texture. The computational speed of the current method is better than that of previously used reconstruction methods. The reconstruction of austenite from ε and α’ martensite was carried out for multiple samples, and their IPF images, pole figures, inverse pole figures and ODFs were compared. Similar results were observed for all samples. The comparison gives an idea for estimating the correct sequence of the transformation, i.e. γ → ε → α’ or γ → α’, during deformation of AISI 304 austenitic stainless steel.
Keywords: variant selection, reconstruction, EBSD, austenitic stainless steel, martensitic transformation
Procedia PDF Downloads 489
473 Comparison of Different Machine Learning Algorithms for Solubility Prediction
Authors: Muhammet Baldan, Emel Timuçin
Abstract:
Molecular solubility prediction plays a crucial role in various fields, such as drug discovery, environmental science, and material science. In this study, we compare the performance of five machine learning algorithms for predicting molecular solubility using the AqSolDB dataset: linear regression, support vector machines (SVM), random forests, gradient boosting machines (GBM), and neural networks. The dataset consists of 9981 data points with their corresponding solubility values. MACCS keys (166 bits), RDKit properties (20 properties), and structural properties (3) are extracted as features for every SMILES representation in the dataset. A total of 189 features were used for training and testing for every molecule. Each algorithm is trained on a subset of the dataset and evaluated using accuracy scores. Additionally, the computational time for training and testing is recorded to assess the efficiency of each algorithm. Our results demonstrate that the random forest model outperformed the other algorithms in terms of predictive accuracy, achieving a 0.93 accuracy score. Gradient boosting machines and neural networks also exhibit strong performance, closely followed by support vector machines. Linear regression, while simpler in nature, demonstrates competitive performance but with slightly higher errors compared to the ensemble methods. Overall, this study provides valuable insights into the performance of machine learning algorithms for molecular solubility prediction, highlighting the importance of algorithm selection in achieving accurate and efficient predictions in practical applications.
Keywords: random forest, machine learning, comparison, feature extraction
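As an illustration of the kind of pipeline the abstract describes, the sketch below (not the authors' code) builds MACCS keys plus a handful of RDKit descriptors and fits a random forest; the file name, column names, and descriptor subset are assumptions.

```python
# Illustrative sketch (not the authors' code): MACCS keys plus a few RDKit
# descriptors feeding a random forest, in the spirit of the pipeline above.
import numpy as np
import pandas as pd
from rdkit import Chem
from rdkit.Chem import MACCSkeys, Descriptors
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import r2_score

def featurize(smiles):
    mol = Chem.MolFromSmiles(smiles)
    if mol is None:
        return None
    maccs = np.array(list(MACCSkeys.GenMACCSKeys(mol)), dtype=float)  # MACCS fingerprint bits
    props = np.array([Descriptors.MolWt(mol),          # a small, assumed subset of the
                      Descriptors.MolLogP(mol),        # 20 RDKit properties mentioned
                      Descriptors.TPSA(mol),           # in the abstract
                      Descriptors.NumRotatableBonds(mol)])
    return np.concatenate([maccs, props])

df = pd.read_csv("aqsoldb.csv")                        # hypothetical file/column names
feats = [featurize(s) for s in df["SMILES"]]
keep = [f is not None for f in feats]
X = np.vstack([f for f in feats if f is not None])
y = df.loc[keep, "Solubility"].to_numpy()

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
model = RandomForestRegressor(n_estimators=300, random_state=0).fit(X_tr, y_tr)
print("held-out R^2:", r2_score(y_te, model.predict(X_te)))
```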
Procedia PDF Downloads 40
472 Evidence Theory Enabled Quickest Change Detection Using Big Time-Series Data from Internet of Things
Authors: Hossein Jafari, Xiangfang Li, Lijun Qian, Alexander Aved, Timothy Kroecker
Abstract:
Traditionally in sensor networks, and recently in the Internet of Things, numerous heterogeneous sensors are deployed in a distributed manner to monitor a phenomenon that can often be modeled by an underlying stochastic process. The big time-series data collected by the sensors must be analyzed to detect a change in the stochastic process as quickly as possible with a tolerable false alarm rate. However, sensors may have different accuracy and sensitivity ranges, and they degrade over time. As a result, the big time-series data collected by the sensors will contain uncertainties, and sometimes they are conflicting. In this study, we present a framework that takes advantage of the capabilities of Evidence Theory (a.k.a. Dempster-Shafer and Dezert-Smarandache Theories) for representing and managing uncertainty and conflict to achieve fast change detection and effectively deal with complementary hypotheses. Specifically, the Kullback-Leibler divergence is used as the similarity metric to calculate the distances between the estimated current distribution and the pre- and post-change distributions. Then mass functions are calculated, and the related combination rules are applied to combine the mass values among all sensors. Furthermore, we applied the method to estimate the minimum number of sensors needed to combine, so computational efficiency could be improved. A cumulative sum test is then applied to the ratio of pignistic probabilities to detect and declare the change for decision-making purposes. Simulation results using both synthetic data and real data from an experimental setup demonstrate the effectiveness of the presented schemes.
Keywords: CUSUM, evidence theory, KL divergence, quickest change detection, time series data
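A toy sketch of the described chain (KL distances to the pre- and post-change distributions, mass functions combined with Dempster's rule, and a CUSUM test on the ratio of pignistic probabilities) is given below; the mass assignment and the two-hypothesis frame are illustrative assumptions, not the authors' formulation.

```python
# Toy sketch (not the authors' implementation) of the pipeline described above:
# KL distances -> mass functions -> Dempster combination -> pignistic CUSUM.
import numpy as np
from scipy.stats import entropy   # entropy(p, q) = KL(p || q)

def masses_from_kl(window, pre, post, bins):
    """Turn KL distances to pre-/post-change histograms into a mass function
    over the simple frame {no_change, change, either}."""
    hist, _ = np.histogram(window, bins=bins, density=True)
    hist = hist + 1e-9
    d_pre = entropy(hist, pre + 1e-9)
    d_post = entropy(hist, post + 1e-9)
    sim_pre, sim_post = np.exp(-d_pre), np.exp(-d_post)
    total = sim_pre + sim_post
    return {"no_change": 0.9 * sim_pre / total,
            "change": 0.9 * sim_post / total,
            "either": 0.1}                    # residual uncertainty (arbitrary here)

def dempster(m1, m2):
    """Dempster's rule of combination for two sensors on the same frame."""
    k = m1["no_change"] * m2["change"] + m1["change"] * m2["no_change"]  # conflict mass
    def combine(a):
        return (m1[a] * m2[a] + m1[a] * m2["either"] + m1["either"] * m2[a]) / (1.0 - k)
    m = {"no_change": combine("no_change"), "change": combine("change")}
    m["either"] = 1.0 - m["no_change"] - m["change"]
    return m

def pignistic(m):
    return {"no_change": m["no_change"] + m["either"] / 2,
            "change": m["change"] + m["either"] / 2}

def cusum(bet_series, threshold=5.0):
    """CUSUM on the log ratio of pignistic probabilities; returns detection time."""
    s = 0.0
    for t, bet in enumerate(bet_series):
        s = max(0.0, s + np.log(bet["change"] / bet["no_change"]))
        if s > threshold:
            return t
    return None
```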
Procedia PDF Downloads 334
471 Next Generation UK Storm Surge Model for the Insurance Market: The London Case
Authors: Iacopo Carnacina, Mohammad Keshtpoor, Richard Yablonsky
Abstract:
Non-structural protection measures against flooding are becoming increasingly popular flood risk mitigation strategies. In particular, coastal flood insurance impacts not only private citizens but also insurance and reinsurance companies, who may require it to retain solvency and better understand the risks they face from a catastrophic coastal flood event. In this context, a framework is presented here to assess the risk of coastal flooding across the UK. The area has a long history of catastrophic flood events, including the Great Flood of 1953 and the 2013 Cyclone Xaver storm, both of which led to significant loss of life and property. The current framework leverages a technology based on a hydrodynamic model (Delft3D Flexible Mesh). This flexible mesh technology, coupled with a calibration technique, allows for better utilisation of computational resources, leading to higher resolution and more detailed results. The generation of a stochastic set of extratropical cyclone (ETC) events supports the evaluation of financial losses for the whole area, also accounting for correlations between different locations in different scenarios. Finally, the solution shows a detailed analysis for the Thames River, leveraging the information available on flood barriers and levees. Two realistic disaster scenarios for the Greater London area are simulated: in the first scenario, the storm surge intensity is not high enough to fail London’s flood defences, but in the second scenario, London’s flood defences fail, highlighting the potential losses from a catastrophic coastal flood event.
Keywords: storm surge, stochastic model, levee failure, Thames River
Procedia PDF Downloads 232
470 Localized Recharge Modeling of a Coastal Aquifer from a Dam Reservoir (Korba, Tunisia)
Authors: Nejmeddine Ouhichi, Fethi Lachaal, Radhouane Hamdi, Olivier Grunberger
Abstract:
Located in the Cap Bon peninsula (Tunisia), the Lebna dam was built in 1987 to mitigate the saltwater intrusion taking place in the coastal aquifer of Korba. The first intention was to reduce coastal groundwater over-pumping by supplying surface water to a large irrigation system. An unpredicted beneficial effect was recorded with the occurrence of a direct localized recharge to the coastal aquifer by leakage through the geological material of the southern bank of the lake. The hydrological balance of the reservoir dam gave an estimation of the annual leakage volume, but dynamic processes and a sound quantification of recharge inputs are still required to understand the localized effect of the recharge in terms of piezometry and quality. The present work focused on simulating the recharge process to confirm the hypothesis, establishing a sound quantification of the water supply to the coastal aquifer and extending it to multi-annual effects. A spatial frame of 30 km² was used for modeling. Intensive outcrop and geophysical surveys based on 68 electrical resistivity soundings were used to characterize the 3D geometry of the aquifer and the limit of the Plio-Quaternary geological material concerned by the underground flow paths. Permeabilities were determined using 17 pumping tests on wells and piezometers. Six seasonal piezometric surveys of 71 wells around the southern banks of the reservoir were performed during the 2019-2021 period. Eight monitoring boreholes providing high-frequency (15 min) piezometric data were used to examine the dynamical aspects. Model boundary conditions were specified using the geophysics interpretations coupled with the piezometric maps. The dam-groundwater flow model was built using the Visual MODFLOW software. Firstly, a steady-state calibration based on the first piezometric map of February 2019 was established to estimate the permanent flow related to the different reservoir levels. Secondly, piezometric data for the 2019-2021 period were used for transient-state calibration and to confirm the robustness of the model. Preliminary results confirmed the temporal link between the reservoir level and the localized recharge flow, with a strong threshold effect for levels below 16 m.a.s.l. The good agreement between the computed flow through recharge cells on the southern banks and the hydrological budget of the reservoir opens the path to future simulation scenarios of the dilution plume imposed by the localized recharge. The dam reservoir-groundwater flow-model simulation results indicate a potential for storage of up to 17 mm/year in existing wells under gravity-feed conditions during level increases of the reservoir over the three years of operation. The Lebna dam groundwater flow model characterized a spatiotemporal relation between groundwater and surface water.
Keywords: leakage, MODFLOW, saltwater intrusion, surface water-groundwater interaction
Procedia PDF Downloads 138
469 Combustion and Emissions Performance of Syngas Fuels Derived from Palm Kernel Shell and Polyethylene (PE) Waste via Catalytic Steam Gasification
Authors: Chaouki Ghenai
Abstract:
A computational fluid dynamics analysis of the burning of syngas fuels derived from a biomass and plastic solid waste mixture through a gasification process is presented in this paper. The syngas fuel is burned in a gas turbine can combustor. A gas turbine can combustor with swirl is designed to burn the fuel efficiently and reduce the emissions. The main objective is to test the impact of the alternative syngas fuel compositions and lower heating value on the combustion performance and emissions. The syngas fuel is produced by blending Palm Kernel Shell (PKS) with Polyethylene (PE) waste via catalytic steam gasification (fluidized bed reactor). A high hydrogen content syngas fuel was obtained by mixing 30% PE waste with PKS. The syngas composition obtained through the gasification process is 76.2% H2, 8.53% CO, 4.39% CO2 and 10.90% CH4. The lower heating value of the syngas fuel is LHV = 15.98 MJ/m3. Three fuels were tested in this study: natural gas (100% CH4), the syngas fuel, and pure hydrogen (100% H2). The power from the combustor was kept constant for all the fuels tested in this study. The effect of the syngas fuel composition and lower heating value on the flame shape, gas temperature, and the mass of carbon dioxide (CO2) and nitrogen oxides (NOX) per unit of energy generation is presented in this paper. The results show an increase in the peak flame temperature and NO mass fractions for the syngas and hydrogen fuels compared to natural gas fuel combustion. Lower average CO2 emissions at the exit of the combustor are obtained for the syngas compared to the natural gas fuel.
Keywords: CFD, combustion, emissions, gas turbine combustor, gasification, solid waste, syngas, waste to energy
Procedia PDF Downloads 593
468 Computational Fluid Dynamics Simulations and Analysis of Air Bubble Rising in a Column of Liquid
Authors: Baha-Aldeen S. Algmati, Ahmed R. Ballil
Abstract:
Multiphase flows occur widely in many engineering and industrial processes as well as in the environment we live in. In particular, bubbly flows are considered to be crucial phenomena in fluid flow applications and can be studied and analyzed experimentally, analytically, and computationally. In the present paper, the dynamic motion of an air bubble rising within a column of liquid is numerically simulated using the open-source CFD modeling tool OpenFOAM. An interface-tracking numerical algorithm called the MULES algorithm, which is built into OpenFOAM, is chosen to solve an appropriate mathematical model based on the volume of fluid (VOF) numerical method. The bubbles initially have a spherical shape and start from rest in the stagnant column of liquid. The algorithm is initially verified against numerical results and is also validated against available experimental data. The comparison revealed that this algorithm provides results that are in very good agreement with the 2D numerical data of other CFD codes. Also, the results for the bubble shape and terminal velocity obtained from the 3D numerical simulation showed very good qualitative and quantitative agreement with the experimental data. The simulated rising bubbles yield a very small percentage of error in the bubble terminal velocity compared with the experimental data. The obtained results prove the capability of OpenFOAM as a powerful tool to predict the behavior and rising characteristics of spherical bubbles in a stagnant column of liquid. This will pave the way for a deeper understanding of the phenomenon of the rise of bubbles in liquids.
Keywords: CFD simulations, multiphase flows, OpenFOAM, rise of bubble, volume of fluid method, VOF
Procedia PDF Downloads 123
467 Prediction of California Bearing Ratio of a Black Cotton Soil Stabilized with Waste Glass and Eggshell Powder using Artificial Neural Network
Authors: Biruhi Tesfaye, Avinash M. Potdar
Abstract:
The laboratory test process to determine the California bearing ratio (CBR) of black cotton soils is not only expensive but also time-consuming. Hence, advance prediction of the CBR plays a significant role, as it is applicable in pavement design. The prediction of the CBR of the treated soil was executed by Artificial Neural Networks (ANNs), which are computational tools based on the properties of the biological neural system. To observe CBR values, combined eggshell and waste glass powder was added to the soil at 4, 8, 12, and 16% of the weight of the soil samples. Accordingly, the related laboratory tests were conducted to obtain the required best model. The maximum CBR value of 5.8 was found at 8% eggshell-waste glass powder addition. The model was developed using the CBR as the output layer variable. The CBR was considered as a function of the joint effect of the liquid limit, plastic limit, plasticity index, optimum moisture content, and maximum dry density. The best model that was found was an ANN with 5, 6, and 1 neurons in the input, hidden, and output layers, respectively. The performance of the selected ANN was 0.99996, 4.44E-05, 0.00353, and 0.0067 for the correlation coefficient (R), mean square error (MSE), mean absolute error (MAE), and root mean square error (RMSE), respectively. The research presented above throws light on the future scope of stabilization with waste glass combined with different percentages of eggshell, which leads to an economical design with a CBR acceptable for the pavement sub-base or base, as desired.
Keywords: CBR, artificial neural network, liquid limit, plastic limit, maximum dry density, OMC
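For readers who want to reproduce the general idea, the following minimal sketch fits an MLP with the reported 5-6-1 architecture and computes R, MSE, MAE, and RMSE; the soil data values are invented placeholders, not the study's measurements.

```python
# Minimal sketch (assumed setup, not the authors' code): a 5-6-1 MLP for CBR
# prediction, evaluated with the same metrics reported in the abstract.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline
from sklearn.metrics import mean_squared_error, mean_absolute_error

# Columns: liquid limit, plastic limit, plasticity index, OMC, MDD (illustrative values)
X = np.array([[48.0, 22.0, 26.0, 18.5, 1.72],
              [45.0, 21.0, 24.0, 17.8, 1.75],
              [43.0, 20.5, 22.5, 17.1, 1.78],
              [41.0, 20.0, 21.0, 16.4, 1.80]])
y = np.array([2.9, 3.8, 4.9, 5.8])   # CBR (%), illustrative

model = make_pipeline(StandardScaler(),
                      MLPRegressor(hidden_layer_sizes=(6,), max_iter=5000, random_state=1))
model.fit(X, y)
pred = model.predict(X)

mse = mean_squared_error(y, pred)
mae = mean_absolute_error(y, pred)
rmse = np.sqrt(mse)
r = np.corrcoef(y, pred)[0, 1]
print(f"R={r:.4f}  MSE={mse:.3e}  MAE={mae:.4f}  RMSE={rmse:.4f}")
```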
Procedia PDF Downloads 190
466 The Assessment of Natural Ventilation Performance for Thermal Comfort in Educational Space: A Case Study of Design Studio in the Arab Academy for Science and Technology, Alexandria
Authors: Alaa Sarhan, Rania Abd El Gelil, Hana Awad
Abstract:
Over the last decades, the impact of thermal comfort on the working performance of users and occupants of an indoor space has been a concern. Research papers have concluded that natural ventilation quality directly impacts the level of thermal comfort. Natural ventilation must be taken into account during the design process in order to improve the inhabitants' efficiency and productivity. One example of daily long-term occupancy spaces is educational facilities. Many individuals spend long periods receiving a considerable amount of knowledge, and it takes additional time to apply this knowledge. Thus, this research is concerned with the users' level of thermal comfort in the design studios of educational facilities. The natural ventilation quality in spaces is affected by a number of parameters, including orientation, opening design, and many other factors. This research aims to investigate the conscious manipulation of the physical parameters of the spaces and its impact on natural ventilation performance, which subsequently affects the thermal comfort of users. The current research uses inductive and deductive methods to define natural ventilation design considerations, which are used in a field study of a studio in the university building in Alexandria (AAST) to evaluate natural ventilation performance by analyzing and comparing the current case to the developed framework and conducting a computational fluid dynamics simulation. The results have shown that natural ventilation performance satisfies only 50% of the natural ventilation design framework; these results are supported by the CFD simulation.
Keywords: educational buildings, natural ventilation, Mediterranean climate, thermal comfort
Procedia PDF Downloads 221
465 A Dual Spark Ignition Timing Influence for the High Power Aircraft Radial Engine Using a CFD Transient Modeling
Authors: Tytus Tulwin, Ksenia Siadkowska, Rafał Sochaczewski
Abstract:
A high power radial reciprocating engine is characterized by a large displacement volume of the combustion chamber. Choosing the right moment for ignition is important for high performance or high reliability and ignition certainty. This work shows methods of simulating the ignition process and its impact on engine parameters. For given conditions, the flame speed is limited when deflagration combustion takes place. Therefore, a larger length scale of the combustion chamber, compared to a standard-size automotive engine, makes the combustion take a longer time to propagate. In order to speed up the mixture burn-up time, a second spark is introduced. A transient Computational Fluid Dynamics model capable of simulating multicycle engine processes was developed. The CFD model consists of the ECFM-3Z combustion and species transport models. The relative ignition timing difference for the two spark sources is constant. The temperature distribution on the engine walls was calculated in a separate conjugate heat transfer simulation. The in-cylinder pressure validation was performed for take-off power flight conditions. The influence of ignition timing on parameters like in-cylinder temperature or rate of heat release was analyzed. The most advantageous spark timing for the highest power output was chosen. The conditions around the spark plug locations for the pre-ignition period were analyzed. This work has been financed by the Polish National Centre for Research and Development, INNOLOT, under Grant Agreement No. INNOLOT/I/1/NCBR/2013.
Keywords: CFD, combustion, ignition, simulation, timing
Procedia PDF Downloads 296
464 Three-Stage Least Squared Models of a Station-Level Subway Ridership: Incorporating an Analysis on Integrated Transit Network Topology Measures
Authors: Jungyeol Hong, Dongjoo Park
Abstract:
The urban transit system is a critical part of a solution to the economic, energy, and environmental challenges. Furthermore, it ultimately contributes to the improvement of people’s quality of life. To take advantage of these benefits, the city of Seoul has tried to construct an integrated transit system including both subway and buses. As a result, approximately 6.9 million citizens use the integrated transit system every day for their trips. Diagnosing the current transit network is a significant task for providing a more convenient and pleasant transit environment. Therefore, the critical objective of this study is to establish a methodological framework for the analysis of an integrated bus-subway network and to examine the relationship between subway ridership and parameters such as network topology measures, bus demand, and a variety of commercial business facilities. Regarding statistical approaches to estimate subway ridership at the station level, many previous studies relied on Ordinary Least Squares regression, but there was a lack of studies considering the endogeneity issues that might appear in the subway ridership prediction model. This study focused on both discovering the impacts of integrated transit network topology measures and the endogenous effect of bus demand on subway ridership. It could ultimately contribute to developing a more accurate subway ridership estimation accounting for its statistical bias. The spatial scope of the study covers Seoul city in South Korea, and it includes 243 subway stations and 10,120 bus stops, with the temporal scope set to twenty-four hours with one-hour interval time panels. The detailed subway and bus ridership information was collected from the Seoul Smart Card data in 2015 and 2016. First, integrated subway-bus network topology measures, which have characteristics regarding connectivity, centrality, transitivity, and reciprocity, were estimated based on complex network theory. The results of the integrated transit network topology analysis were compared to the subway-only network topology. Also, the non-recursive Three-Stage Least Squares approach was applied to develop the daily subway ridership model, capturing the endogeneity between bus and subway demands. Independent variables included roadway geometry, commercial business characteristics, socio-economic characteristics, a safety index, transit facility attributes, and dummies for seasons and time zones. Consequently, it was found that the network topology measures had significant effects. In particular, the centrality measures showed that the elasticity was 4.88% for closeness centrality and 24.48% for betweenness centrality, while the elasticity of bus ridership was 8.85%. Moreover, it was shown that bus demand and subway ridership are endogenous in a non-recursive manner, as predicted bus ridership and predicted subway ridership are statistically significant in the OLS regression models. Therefore, the three-stage least squares model appears to be a plausible model for efficient subway ridership estimation. It is expected that the proposed approach provides a reliable guideline that can be used as part of the spectrum of tools for evaluating a city-wide integrated transit network.
Keywords: integrated transit system, network topology measures, three-stage least squared, endogeneity, subway ridership
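The complex-network step described above can be illustrated with a small sketch: build an integrated directed subway-bus graph and compute the connectivity, centrality, transitivity, and reciprocity measures. The graph below is a hypothetical toy example, not the Seoul network, and the 3SLS estimation itself is only indicated in a comment.

```python
# Sketch (illustrative, not the study's code) of the complex-network step:
# an integrated subway-bus graph and the topology measures named above.
import networkx as nx

G = nx.DiGraph()
# Hypothetical stop identifiers; edges represent direct transit links or transfers.
G.add_edges_from([
    ("subway_A", "subway_B"), ("subway_B", "subway_A"),
    ("subway_B", "subway_C"), ("subway_C", "subway_B"),
    ("bus_101", "subway_B"), ("subway_B", "bus_101"),
    ("bus_101", "bus_102"),
])

closeness = nx.closeness_centrality(G)              # accessibility of each stop
betweenness = nx.betweenness_centrality(G)          # how often a stop lies on shortest paths
transitivity = nx.transitivity(G.to_undirected())   # global clustering of the network
reciprocity = nx.reciprocity(G)                      # share of bidirectional links

print(closeness["subway_B"], betweenness["subway_B"], transitivity, reciprocity)
# These station-level measures would then enter the three-stage least squares
# ridership model as explanatory variables alongside land-use and demand data.
```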
Procedia PDF Downloads 177
463 Autonomous Flight Control for Multirotor by Alternative Input Output State Linearization with Nested Saturations
Authors: Yong Eun Yoon, Eric N. Johnson, Liling Ren
Abstract:
The multirotor is one of the most popular types of small unmanned aircraft systems and has already been used in many areas, including transport, military, surveillance, and leisure. Together with its popularity, the need for proper flight control is growing, because in most applications the vehicle is required to conduct its missions autonomously, which in many aspects is based on autonomous flight control. There have been many studies on flight control for multirotors, but there is still room for enhancement in terms of performance and efficiency. This paper presents an autonomous flight control method for the multirotor based on alternative input output linearization coupled with nested saturations. With an alternative choice of the output of the multirotor flight control system, we can reduce the computational cost regarding the Lie algebra, and the linearized system can be stabilized with the introduction of nested saturations with real poles of our own design. Stabilization of the internal dynamics is also based on the nested saturations and accompanies the determination of part of the desired states. In particular, outer control loops involving state variables which originally are not included in the output of the flight control system are naturally rendered through this internal dynamics stabilization. We can also observe that the desired tilting angles are determined by the error dynamics from the outer loops. Simulation results show that in any tracking situation the multirotor stabilizes itself with small time constants, preceded by a tuning process for the control parameters with a relatively low degree of complexity. Future study includes control of the piecewise linear behavior of the multirotor with actuator saturations, and the optimal determination of desired states while tracking multiple waypoints.
Keywords: automatic flight control, input output linearization, multirotor, nested saturations
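To make the nested-saturation idea concrete, the toy simulation below applies a bounded nested-saturation law to a double integrator; it is a generic textbook-style illustration under assumed saturation levels, not the controller derived in the paper.

```python
# Toy illustration (not the paper's controller): a nested-saturation law on a
# double integrator, u = -sat_b(x2 + sat_a(x1 + x2)), simulated with explicit Euler.
import numpy as np

def sat(v, level):
    return np.clip(v, -level, level)

a, b = 0.5, 1.0            # inner/outer saturation levels (assumed, with a <= b/2)
x1, x2 = 8.0, -3.0         # large initial condition
dt, t_end = 0.001, 40.0

for _ in range(int(t_end / dt)):
    u = -sat(x2 + sat(x1 + x2, a), b)     # control stays bounded despite large states
    x1 += dt * x2
    x2 += dt * u

print(f"final state: x1={x1:.4f}, x2={x2:.4f}")   # should approach the origin
```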
Procedia PDF Downloads 228
462 Numerical Study of Bubbling Fluidized Beds Operating at Sub-atmospheric Conditions
Authors: Lanka Dinushke Weerasiri, Subrat Das, Daniel Fabijanic, William Yang
Abstract:
Fluidization at vacuum pressure has been a topic of growing research interest. Several industrial applications (such as drying, extractive metallurgy, and chemical vapor deposition (CVD)) can potentially take advantage of vacuum pressure fluidization. In particular, the fine chemical industry requires processing under safe conditions for thermolabile substances, and reduced-pressure fluidized beds offer an alternative. Fluidized beds under vacuum conditions provide optimal conditions for the treatment of granular materials, where the reduced gas pressure maintains an operational environment outside of flammability conditions. Fluidization at low pressure is markedly different from the usual gas flow patterns of atmospheric fluidization. The different flow regimes can be characterized by the dimensionless Knudsen number. Nevertheless, the hydrodynamics of bubbling vacuum fluidized beds has not been investigated, to the authors' best knowledge. In this work, the two-fluid numerical method was used to determine the impact of reduced pressure on the fundamental properties of a fluidized bed. A slip flow model implemented through ANSYS Fluent User Defined Functions (UDF) was used to determine the interphase momentum exchange coefficient. A wide range of operating pressures was investigated (1.01, 0.5, 0.25, 0.1 and 0.03 bar). The gas was supplied by a uniform inlet at 1.5 Umf and 2 Umf. The predicted minimum fluidization velocity (Umf) shows excellent agreement with the experimental data. The results show that the operating pressure has a notable impact on the bed properties and its hydrodynamics. Furthermore, they also show that the existing Gorosko correlation that predicts bed expansion is not applicable under reduced-pressure conditions.
Keywords: computational fluid dynamics, fluidized bed, gas-solid flow, vacuum pressure, slip flow, minimum fluidization velocity
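The role of the Knudsen number mentioned above can be illustrated with a quick calculation of the gas mean free path and Kn over the studied pressure range; the temperature, molecular diameter, and characteristic length below are assumed values, not parameters from the paper.

```python
# Back-of-the-envelope sketch (not from the paper): how the Knudsen number,
# which characterizes the flow regime, grows as the bed pressure is reduced.
import numpy as np

k_B = 1.380649e-23        # Boltzmann constant, J/K
T = 300.0                 # gas temperature, K (assumed)
d = 3.7e-10               # effective molecular diameter of air, m (approximate)
L = 100e-6                # characteristic length, e.g. a 100-micron particle/gap (assumed)

for p_bar in (1.01, 0.5, 0.25, 0.1, 0.03):
    p = p_bar * 1e5                                          # Pa
    mean_free_path = k_B * T / (np.sqrt(2) * np.pi * d**2 * p)
    Kn = mean_free_path / L
    print(f"p = {p_bar:5.2f} bar  ->  lambda = {mean_free_path:.2e} m,  Kn = {Kn:.2e}")
# Larger Kn at lower pressure indicates a shift away from the continuum regime,
# which is why a slip-flow correction to the interphase drag is needed.
```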
Procedia PDF Downloads 140
461 Enhancing Embedded System Efficiency with Digital Signal Processing Cores
Authors: Anil H. Dhanawade, Akshay S., Harshal M. Lakesar
Abstract:
This paper presents a comprehensive analysis of the performance advantages offered by DSP (Digital Signal Processing) cores compared to traditional MCU (Microcontroller Unit) cores in the execution of various functions critical to real-time applications. The focus is on the integration of DSP functionalities, specifically in the context of motor control applications such as Field-Oriented Control (FOC), trigonometric calculations, back-EMF estimation, digital filtering, and high-resolution PWM generation. Through comparative analysis, it is demonstrated that DSP cores significantly enhance processing efficiency, achieving faster execution times for complex mathematical operations essential for precise torque and speed control. The study highlights the capabilities of DSP cores, including single-cycle Multiply-Accumulate (MAC) operations and optimized hardware for trigonometric functions, which collectively reduce latency and improve real-time performance. In contrast, MCU cores, while capable of performing similar tasks, typically exhibit longer execution times due to reliance on software-based solutions and lack of dedicated hardware acceleration. The findings underscore the critical role of DSP cores in applications requiring high-speed processing and low-latency response, making them indispensable in the automotive, industrial, and robotics sectors. This work serves as a reference for future developments in embedded systems, emphasizing the importance of architecture choice in achieving optimal performance in demanding computational tasks.
Keywords: CPU core, DSP, assembly code, motor control
Procedia PDF Downloads 16
460 Numerical Simulation of a Combined Impact of Cooling and Ventilation on the Indoor Environmental Quality
Authors: Matjaz Prek
Abstract:
The impact of three different combinations of cooling and ventilation systems on the indoor environmental quality (IEQ) has been studied. A comparison of chilled ceiling cooling in combination with displacement ventilation, cooling with a fan coil unit, and cooling with flat wall displacement outlets was performed. All three combinations were evaluated from the standpoint of whole-body and local thermal comfort criteria as well as from the standpoint of ventilation effectiveness. The comparison was made on the basis of numerical simulation with DesignBuilder and Fluent. The numerical simulations were carried out in two steps. Firstly, the DesignBuilder software environment was used to model the building's thermal performance and to evaluate the interaction between the environment and the building. The heat gains of the building and of the individual space, as well as the heat loss on the boundary surfaces in the room, were calculated. In the second step, the Fluent software environment was used to simulate the response of the indoor environment, evaluating the interaction between the building and its occupants, using the simulation results obtained in the first step. Among the systems presented, the ceiling cooling system in combination with displacement ventilation was found to be the most suitable, as it offers a high level of thermal comfort with adequate ventilation efficiency. Fan coil cooling proved inadequate from the standpoint of thermal comfort, whereas flat wall displacement outlets were inadequate from the standpoint of ventilation effectiveness. The study showed the need to evaluate the indoor environment not solely from the energy use point of view, but from the point of view of indoor environmental quality as well.
Keywords: cooling, ventilation, thermal comfort, ventilation effectiveness, indoor environmental quality, IEQ, computational fluid dynamics
Procedia PDF Downloads 187
459 Predictions of Thermo-Hydrodynamic State for Single and Three Pads Gas Foil Bearings Operating at Steady-State Based on Multi-Physics Coupling Computer Aided Engineering Simulations
Authors: Tai Yuan Yu, Pei-Jen Wang
Abstract:
Oil-free turbomachinery is considered one of the critical technologies for rotor machinery in future green power generation systems. Oil-free technology allows clean, compact, and maintenance-free operation, and gas foil bearings, abbreviated as GFBs, are important for the technology. Since the first applications in auxiliary power units and air cycle machines in the 1970s, obvious improvements have been made to the computational models for dynamic rotor behavior. However, many technical issues are still poorly understood or remain unsolved, among them thermal management and the pattern of how pressure is distributed in the bearing clearance. This paper presents a three-dimensional (3D) fluid-structure interaction model of single-pad and three-pad foil bearings to predict bearing working behavior, so that researchers can compare the characteristics of the two. The coupling analysis model applies the dynamic working characteristics to both the gas film and the mechanical structures. Therefore, the elastic deformation of the foil structure and the hydrodynamic pressure of the gas film can both be calculated by a finite element method program. As a result, the temperature distribution pattern can also be iteratively solved by the coupling analysis. In conclusion, the state of the working fluid in the gas film of both pad configurations at constant rotational speed can be solved and compared with the experimental results.
Keywords: fluid-structure interaction, multi-physics simulations, gas foil bearing, oil-free, transient thermo-hydrodynamic
Procedia PDF Downloads 163
458 Development of a CFD Model for PCM Based Energy Storage in a Vertical Triplex Tube Heat Exchanger
Authors: Pratibha Biswal, Suyash Morchhale, Anshuman Singh Yadav, Shubham Sanjay Chobe
Abstract:
Energy demands are increasing, whereas energy sources, especially non-renewable sources, are limited. Due to the intermittent nature of renewable energy sources, it has become the need of the hour to find new ways to store energy. Out of the various energy storage methods, latent heat thermal storage devices are becoming popular due to their high energy density per unit mass and volume at a nearly constant temperature. This work presents a computational fluid dynamics (CFD) model, using ANSYS FLUENT 19.0, of the energy storage characteristics of a phase change material (PCM) filled in a vertical triplex tube thermal energy storage system. A vertical triplex tube heat exchanger, as its name suggests, consists of three concentric tubes (pipe sections) parting the device into three fluid domains. The PCM is filled in the middle domain, with heat transfer fluids flowing in the outer and innermost domains. To enhance the heat transfer inside the PCM, eight fins have been incorporated between the internal and external tubes. These fins run radially outwards from the outer wall of the innermost tube to the inner wall of the middle tube, dividing the middle domain (between the innermost and middle tubes) into eight sections. These eight sections are then filled with a PCM. The validation is carried out against earlier work, and a grid independence test is also presented. Further studies on the freezing and melting processes were carried out. The results are presented in terms of pictorial representations of isotherms and liquid fraction.
Keywords: heat exchanger, thermal energy storage, phase change material, CFD, latent heat
Procedia PDF Downloads 153
457 Performance Analysis of Search Medical Imaging Service on Cloud Storage Using Decision Trees
Authors: González A. Julio, Ramírez L. Leonardo, Puerta A. Gabriel
Abstract:
Telemedicine services use a large amount of data, most of which are diagnostic images in Digital Imaging and Communications in Medicine (DICOM) and Health Level Seven (HL7) formats. Metadata are generated from each related image to support its identification. This study presents the use of decision trees for the optimization of information search processes for diagnostic images hosted on a cloud server. To analyze the performance of the server, the following quality of service (QoS) metrics are evaluated: delay, bandwidth, jitter, latency, and throughput, in five test scenarios for a total of 26 experiments during the uploading and downloading of DICOM images hosted by the telemedicine group server of the Universidad Militar Nueva Granada, Bogotá, Colombia. By applying decision trees as a data mining technique and comparing them with sequential search, it was possible to evaluate the search times for diagnostic images on the server. The results show that by using the metadata in decision trees, the search times are substantially improved, the computational resources are optimized, and the request management of the telemedicine image service is improved. Based on the experiments carried out, search efficiency increased by 45% in relation to sequential search, given that, when downloading a diagnostic image, false positives are avoided in the management and acquisition processes of said information. It is concluded that, for diagnostic image services in telemedicine, the decision tree technique guarantees accessibility and robustness in the acquisition and manipulation of medical images, improving diagnoses and medical procedures for patients.
Keywords: cloud storage, decision trees, diagnostic image, search, telemedicine
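The idea of routing image searches through metadata with a decision tree can be sketched as follows; the metadata fields, partitions, and records are hypothetical and do not reflect the study's server or dataset.

```python
# Illustrative sketch (not the study's implementation): a decision tree fitted on
# DICOM-style metadata so that a query can be routed to a storage partition
# instead of scanning the image archive sequentially.
from sklearn.tree import DecisionTreeClassifier
from sklearn.preprocessing import OrdinalEncoder

# Hypothetical metadata records: [modality, body_part, study_year]
records = [["CT", "CHEST", 2016], ["MR", "BRAIN", 2015], ["CT", "ABDOMEN", 2016],
           ["US", "ABDOMEN", 2014], ["MR", "KNEE", 2015], ["CT", "CHEST", 2014]]
partitions = [0, 1, 0, 2, 1, 2]   # storage partition / index bucket of each image

enc = OrdinalEncoder()
cats = enc.fit_transform([[r[0], r[1]] for r in records])       # encode string metadata
X = [[*row, rec[2]] for row, rec in zip(cats.tolist(), records)]  # append the year column

tree = DecisionTreeClassifier(max_depth=4, random_state=0).fit(X, partitions)

query = enc.transform([["CT", "CHEST"]]).tolist()[0] + [2016]
print("Search partition:", tree.predict([query])[0])
# Sequential search would scan all records; the tree narrows the lookup using metadata.
```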
Procedia PDF Downloads 204
456 Fluid-Structure Interaction Analysis of a Vertical Axis Wind Turbine Blade Made with Natural Fiber Based Composite Material
Authors: Ivan D. Ortega, Juan D. Castro, Alberto Pertuz, Manuel Martinez
Abstract:
One of the problems considered when scientists talk about climate change is the necessity of utilizing renewable sources of energy. In this category there are many approaches to the problem; one of them is wind energy and wind turbines, whose designs have frequently changed over many years in the attempt to achieve a better overall performance under different conditions. From that situation, we get the two main types known today: vertical and horizontal axis wind turbines, which have the acronyms VAWT and HAWT, respectively. This research aims to understand how well suited a composite material, which is still in development, made with fibers of natural origin, is for its implementation in vertical axis wind turbine blades under certain wind loads. The study consisted of acquiring the mechanical properties of the materials to be used, which were Bactris guineensis, also known as palma de lata in Colombia, and an adhesive that acts as the matrix, which had not been previously studied to the extent required for this project. Then, a simplified 3D model of the airfoil was developed and tested under some preliminary loads using finite element analysis (FEA); these loads were acquired in the Colombian Chicamocha Canyon. Afterwards, a more realistic pressure profile was obtained using computational fluid dynamics, which took into account the 3D shape of the complete blade and its rotation. Finally, the blade model was subjected to the wind loads using what is known as one-way fluid-structure interaction (FSI) and its behavior analyzed to draw conclusions. The observed overall results were positive, since the material behaved fairly as expected. The data suggest the material would be really useful in this kind of application in small to medium-sized turbines if it is given more attention and time to develop.
Keywords: CFD, FEA, FSI, natural fiber, VAWT
Procedia PDF Downloads 226
455 Implementation of Free-Field Boundary Condition for 2D Site Response Analysis in OpenSees
Authors: M. Eskandarighadi, C. R. McGann
Abstract:
It is observed from past earthquakes that local site conditions can significantly affect the strong ground motion characteristics experienced at a site. One-dimensional seismic site response analysis is the most common approach for investigating site response. This approach assumes that the soil is homogeneous and infinitely extended in the horizontal direction. Therefore, tying the side boundaries together is one way to model this behavior, as the wave passage is assumed to be only vertical. However, 1D analysis cannot capture the 2D nature of wave propagation, soil heterogeneity, and 2D soil profiles with features such as inclined layer boundaries. In contrast, 2D seismic site response modeling can consider all of the mentioned factors to better understand local site effects on strong ground motions. 2D wave propagation, and the fact that the soil profiles on the two sides of the model may not be identical, clarify the importance of a boundary condition on each side that can minimize unwanted reflections from the edges of the model and input appropriate loading conditions. Ideally, the model size should be sufficiently large to minimize the wave reflection; however, due to computational limitations, increasing the model size is impractical in some cases. Another approach is to employ free-field boundary conditions that take into account the free-field motion that would exist far from the model domain and apply this to the sides of the model. This research focuses on implementing free-field boundary conditions in OpenSees for 2D site response analysis. Comparisons are made between 1D models and 2D models with various boundary conditions, and details and limitations of the developed free-field boundary modeling approach are discussed.
Keywords: boundary condition, free-field, OpenSees, site response analysis, wave propagation
Procedia PDF Downloads 158
454 Experimental and Numerical Studies of Droplet Formation
Authors: Khaled Al-Badani, James Ren, Lisa Li, David Allanson
Abstract:
Droplet formation is an important process in many engineering systems and manufacturing procedures, including welding, biotechnology, 3D printing, the biochemical and biomedical fields, and many more. The volume and the characteristics of droplet formation generally depend on various material properties, microfluidics, and fluid mechanics considerations. Hence, a detailed investigation of this process, with the aid of numerical computational tools, is essential for future design optimization and process control of many engineering systems. This will also improve the understanding of changes in the properties and structures of materials during the formation of the droplet, which is important for new material developments to achieve different functions, depending on the requirements of the application. For example, the shape of the formed droplet is critical for the function of some final products, such as the welding nugget during the Capacitor Discharge Welding process, or PLA 3D printing, etc. Although most academic journals on droplet formation have focused on issues with the material transfer rate, surface tension, and residual stresses, the general emphasis on the characteristics of droplet shape has been overlooked. The proposed work for this project will examine theoretical methodologies, experimental techniques, and numerical modelling, using ANSYS FLUENT, to critically analyse and highlight optimization methods regarding the formation of a pendant droplet. The project will also compare results from published data with experimental and numerical work concerning the effects of key material parameters on the droplet shape. These effects include changes in heating/cooling rates, solidification/melting progression, and separation/break-up times. From these tests, a set of objectives is prepared, with the intention of improving quality, stability, and productivity in modelling metal welding and 3D printing.
Keywords: computer modelling, droplet formation, material distortion, materials forming, welding
Procedia PDF Downloads 286
453 Improving Patient-Care Services at an Oncology Center with a Flexible Adaptive Scheduling Procedure
Authors: P. Hooshangitabrizi, I. Contreras, N. Bhuiyan
Abstract:
This work presents an online scheduling problem which accommodates multiple requests of patients for chemotherapy treatments in a cancer center of a major metropolitan hospital in Canada. To solve the problem, an adaptive flexible approach is proposed which systematically combines two optimization models. The first model is intended to dynamically schedule arriving requests in the form of waiting lists, whereas the second model is used to reschedule the already booked patients with the goal of finding better resource allocations when new information becomes available. Both models are created as mixed integer programming formulations. Various controllable and flexible parameters, such as deviation from the prescribed target dates by a pre-determined threshold, changes to the start time of already booked appointments, and the maximum number of appointments to move in the schedule, are included in the proposed approach to provide sufficient degrees of flexibility in handling arriving requests and unexpected changes. Several computational experiments are conducted to evaluate the performance of the proposed approach using historical data provided by the oncology clinic. Our approach achieves substantially better results compared to those of the scheduling system being used in practice. Moreover, several analyses are conducted to evaluate the effect of considering different levels of flexibility on the obtained results and to assess the performance of the proposed approach in dealing with last-minute changes. We strongly believe that the proposed flexible adaptive approach is well-suited for implementation at the clinic to provide better patient-care services and to utilize available resources more efficiently.
Keywords: chemotherapy scheduling, multi-appointment modeling, optimization of resources, satisfaction of patients, mixed integer programming
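A heavily simplified sketch of the first model's flavor (booking arriving requests near their target dates under daily capacity, with a deviation threshold as a flexibility parameter) is shown below; the horizon, capacity, and cost structure are assumptions, and the clinic's actual formulations are richer.

```python
# Toy mixed integer program (assumptions only, not the clinic's model): assign each
# chemotherapy request to a day within a window around its target date while
# respecting daily chair capacity, minimizing total deviation from targets.
from pulp import LpProblem, LpVariable, LpMinimize, lpSum, LpBinary, value

days = range(5)                        # planning horizon in days
capacity = 3                           # treatment chairs available per day (assumed)
target = {"p1": 0, "p2": 1, "p3": 1, "p4": 2, "p5": 4}   # prescribed target days
window = 1                             # allowed deviation threshold (flexibility parameter)

prob = LpProblem("chemo_scheduling", LpMinimize)
x = {(p, d): LpVariable(f"x_{p}_{d}", cat=LpBinary)
     for p in target for d in days if abs(d - target[p]) <= window}

prob += lpSum(abs(d - target[p]) * x[p, d] for (p, d) in x)        # total deviation
for p in target:                                                    # each patient booked once
    prob += lpSum(x[p, d] for d in days if (p, d) in x) == 1
for d in days:                                                      # chair capacity per day
    prob += lpSum(x[p, d] for p in target if (p, d) in x) <= capacity

prob.solve()
schedule = {p: d for (p, d) in x if value(x[p, d]) > 0.5}
print(schedule)
```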
Procedia PDF Downloads 168
452 Self-Supervised Attributed Graph Clustering with Dual Contrastive Loss Constraints
Authors: Lijuan Zhou, Mengqi Wu, Changyong Niu
Abstract:
Attributed graph clustering can utilize the graph topology and node attributes to uncover hidden community structures and patterns in complex networks, aiding in the understanding and analysis of complex systems. Utilizing contrastive learning for attributed graph clustering can effectively exploit meaningful implicit relationships between data. However, existing attributed graph clustering methods based on contrastive learning suffer from the following drawbacks: 1) Complex data augmentation increases computational cost, and inappropriate data augmentation may lead to semantic drift. 2) The selection of positive and negative samples neglects the intrinsic cluster structure learned from graph topology and node attributes. Therefore, this paper proposes a method called self-supervised Attributed Graph Clustering with Dual Contrastive Loss constraints (AGC-DCL). Firstly, Siamese Multilayer Perceptron (MLP) encoders are employed to generate two views separately to avoid complex data augmentation. Secondly, the neighborhood contrastive loss is introduced to constrain node representation using local topological structure while effectively embedding attribute information through attribute reconstruction. Additionally, clustering-oriented contrastive loss is applied to fully utilize clustering information in global semantics for discriminative node representations, regarding the cluster centers from two views as negative samples to fully leverage effective clustering information from different views. Comparative clustering results with existing attributed graph clustering algorithms on six datasets demonstrate the superiority of the proposed method.
Keywords: attributed graph clustering, contrastive learning, clustering-oriented, self-supervised learning
Procedia PDF Downloads 53
451 Influence of Internal Topologies on Components Produced by Selective Laser Melting: Numerical Analysis
Authors: C. Malça, P. Gonçalves, N. Alves, A. Mateus
Abstract:
Regardless of the manufacturing process used, subtractive or additive, the material, purpose, and application, produced components are conventionally solid masses with more or less complex shapes depending on the production technology selected. Aspects such as reducing the weight of components, associated with the low volume of material required and the almost non-existent material waste, the speed and flexibility of production and, primarily, high mechanical strength combined with high structural performance, are competitive advantages in any industrial sector, from automotive, molds, aviation, aerospace, construction, pharmaceuticals and medicine to, more recently, human tissue engineering. Such features, properties and functionalities are attained in metal components produced using the additive Rapid Prototyping technique from metal powders commonly known as Selective Laser Melting (SLM), with optimized internal topologies and varying densities. In order to produce components with high strength and high structural and functional performance, regardless of the type of application, three different internal topologies were developed and analyzed using numerical computational tools. The developed topologies were numerically submitted to mechanical compression and four-point bending testing. Finite Element Analysis results demonstrate how different internal topologies can contribute to improved mechanical properties, even with a high degree of porosity relative to fully dense components. The results are very promising, not only from the point of view of mechanical resistance, but especially through the achievement of considerable variation in density without loss of high structural and functional performance.
Keywords: additive manufacturing, internal topologies, porosity, rapid prototyping, selective laser melting
Procedia PDF Downloads 331
450 Trajectory Optimization of Re-Entry Vehicle Using Evolutionary Algorithm
Authors: Muhammad Umar Kiani, Muhammad Shahbaz
Abstract:
The performance of any vehicle can be predicted by its design/modeling and optimization. Design optimization leads to efficient performance. Following horizontal launch, the air-launched re-entry vehicle undergoes a launch maneuver by introducing a carefully selected angle of attack profile. This angle of attack profile is the basic element to complete a specified mission. The flight program of the said vehicle is optimized under the constraints of the maximum allowed angle of attack and lateral and axial loads, with the objective of reaching maximum altitude. The main focus of this study is the endo-atmospheric phase of the ascent trajectory. A three-degrees-of-freedom trajectory model is simulated in MATLAB. The optimization process uses an evolutionary algorithm because of its robustness and efficient capacity to explore the design space in search of the global optimum. Evolutionary-algorithm-based trajectory optimization also offers the added benefit of being a generalized method that may work with continuous, discontinuous, linear, and non-linear performance measures. It also eliminates the requirement of a starting solution. Optimization is particularly beneficial to achieve maximum advantage without increasing the computational cost or affecting the output of the system. For the case of launch vehicles, we are keen to achieve maximum performance and efficiency under different constraints. In a launch vehicle, the flight program means the prescribed variation of the vehicle pitching angle during the flight, which has a substantial influence on reachable altitude, accuracy of orbit insertion, and aerodynamic loading. The results reveal that the angle of attack profile significantly affects the performance of the vehicle.
Keywords: endo-atmospheric, evolutionary algorithm, efficient performance, optimization process
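A conceptual sketch of the optimization loop is shown below: a parameterized angle-of-attack profile is evaluated by a placeholder simulation and improved with differential evolution, with constraint violations penalized; the surrogate dynamics, bounds, and penalty weights are invented for illustration only.

```python
# Conceptual sketch (not the authors' model): evolutionary optimization of a
# parameterized angle-of-attack (AoA) profile under AoA and load constraints.
import numpy as np
from scipy.optimize import differential_evolution

def simulate_altitude(alpha_profile):
    """Placeholder for the 3-DOF ascent simulation: returns peak altitude and a
    simple surrogate load measure for a given AoA profile (purely illustrative)."""
    alpha = np.asarray(alpha_profile)
    altitude = 1000.0 * np.sum(np.cos(np.radians(alpha))) - 50.0 * np.sum(np.abs(np.diff(alpha)))
    load = np.max(np.abs(alpha)) / 10.0
    return altitude, load

def objective(alpha_profile, alpha_max=12.0, load_max=1.0):
    altitude, load = simulate_altitude(alpha_profile)
    penalty = 1e4 * max(0.0, np.max(np.abs(alpha_profile)) - alpha_max)   # AoA constraint
    penalty += 1e4 * max(0.0, load - load_max)                            # load constraint
    return -altitude + penalty            # maximize altitude -> minimize its negative

bounds = [(-15.0, 15.0)] * 8              # 8 control nodes of the AoA profile (assumed)
result = differential_evolution(objective, bounds, seed=1, maxiter=200, polish=False)
print("Best AoA profile (deg):", np.round(result.x, 2), " objective:", result.fun)
```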
Procedia PDF Downloads 405
449 In Silico Study of Antiviral Drugs Against Three Important Proteins of Sars-Cov-2 Using Molecular Docking Method
Authors: Alireza Jalalvand, Maryam Saleh, Somayeh Behjat Khatouni, Zahra Bahri Najafi, Foroozan Fatahinia, Narges Ismailzadeh, Behrokh Farahmand
Abstract:
Objective: In the last two decades, the most recent outbreak of a coronavirus (SARS-CoV-2) has imposed a global pandemic on the world. Despite the increasing prevalence of the disease, there are no effective drugs to treat it. A suitable and rapid way to find an effective drug and treat the global pandemic is a computational drug study. This study used molecular docking methods to examine the potential inhibition of over 50 antiviral drugs against three fundamental proteins of SARS-CoV-2. Methods: Through a literature review, three important proteins (a key protease, RNA-dependent RNA polymerase (RdRp), and spike) were selected as drug targets. Three-dimensional (3D) structures of the protease, spike, and RdRp proteins were obtained from the Protein Data Bank. The proteins were energy-minimized. Over 50 antiviral drugs were considered candidates for protein inhibition, and their 3D structures were obtained from drug banks. The AutoDock 4.2 software was used to define the molecular docking settings and run the algorithm. Results: Five drugs, namely indinavir, lopinavir, saquinavir, nelfinavir, and remdesivir, exhibited the highest inhibitory potency against all three proteins based on the binding energies and drug binding positions deduced from the docking and hydrogen-bonding analysis. Conclusions: According to the results, among the drugs mentioned, saquinavir and lopinavir showed the highest inhibitory potency against all three proteins compared to the other drugs. They may enter laboratory-phase studies as a dual-drug treatment to inhibit SARS-CoV-2.
Keywords: covid-19, drug repositioning, molecular docking, lopinavir, saquinavir
Procedia PDF Downloads 88
448 3D Numerical Study of Tsunami Loading and Inundation in a Model Urban Area
Authors: A. Bahmanpour, I. Eames, C. Klettner, A. Dimakopoulos
Abstract:
We develop a new set of diagnostic tools to analyze inundation into a model district using three-dimensional CFD simulations, with a view to generating a database against which to test simpler models. A three-dimensional model of an Oregon city with different-sized groups of buildings next to the coastline is used to run calculations of the movement of a long-period wave on the shore. The initial and boundary conditions of the off-shore water are set using a nonlinear inverse method based on Eulerian spatial information matching experimental Eulerian time series measurements of water height. The water movement is followed in time, and this enables the pressure distribution on every surface of each building to be followed in a temporal manner. The three-dimensional numerical data set is validated against published experimental work. In the first instance, we use the dataset as a basis to understand how successful reduced models - including a 2D shallow water model and reduced 1D models - are at predicting water heights, flow velocity and forces. This is because models based on the shallow water equations are known to underestimate drag forces after the initial surge of water. The second component is to identify critical flow features, such as hydraulic jumps and choked states, which are flow regions where dissipation occurs and drag forces are large. Finally, we describe how future tsunami inundation models should be modified to account for the complex effects of buildings through drag and blocking. Financial support from UCL and HR Wallingford is greatly appreciated. The authors would like to thank Professor Daniel Cox and Dr. Hyoungsu Park for providing the data on the Seaside, Oregon experiment.
Keywords: computational fluid dynamics, extreme events, loading, tsunami
Procedia PDF Downloads 115
447 Optimal Emergency Shipment Policy for a Single-Echelon Periodic Review Inventory System
Authors: Saeed Poormoaied, Zumbul Atan
Abstract:
Emergency shipments provide a powerful mechanism to alleviate the risk of imminent stock-outs and can result in substantial benefits in an inventory system. Customer satisfaction and a high service level are immediate consequences of utilizing emergency shipments. In this paper, we consider a single-echelon periodic review inventory system consisting of a single local warehouse, being replenished from a central warehouse with ample capacity, in an infinite horizon setting. Since the structure of the optimal policy appears to be complicated, we analyze this problem under an order-up-to-S inventory control policy framework, the (S, T) policy, with the emergency shipment consideration. In each period of the periodic review policy, there is a single opportunity, at any point in time, for an emergency shipment, so that in the case of a stock-out, an emergency shipment is requested. The goal is to determine the timing and amount of the emergency shipment during a period (the emergency shipment policy) as well as the base stock periodic review policy parameters (the replenishment policy). We show how taking advantage of an emergency shipment during periods improves the performance of the classical (S, T) policy, especially when the fixed and unit emergency shipment costs are small. Investigating the structure of the objective function, we develop an exact algorithm for finding the optimal solution. We also provide a heuristic and an approximation algorithm for the periodic review inventory system problem. The experimental analyses indicate that the heuristic algorithm is computationally more efficient than the approximation algorithm, but in terms of solution quality, the approximation algorithm performs very well. We achieve up to 13% cost savings in the (S, T) policy if we apply the proposed emergency shipment policy. Moreover, our computational results reveal that the approximated solution is often within 0.21% of the globally optimal solution.
Keywords: emergency shipment, inventory, periodic review policy, approximation algorithm
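As a rough illustration of why an emergency shipment option can pay off, the simulation sketch below compares an order-up-to-S periodic review policy with and without a single emergency shipment per period; the demand model and cost parameters are assumptions, not those of the paper.

```python
# Simplified simulation sketch (not the paper's exact model): an order-up-to-S
# periodic review policy, with and without one emergency shipment per period.
import numpy as np

def simulate(S, T, periods=10000, demand_rate=4.0, allow_emergency=True,
             h=1.0, b=10.0, K_e=5.0, c_e=2.0, seed=0):
    rng = np.random.default_rng(seed)
    inv, cost = S, 0.0
    for _ in range(periods):
        demand = rng.poisson(demand_rate * T)      # assumed Poisson demand per period
        inv -= demand
        if inv < 0 and allow_emergency:
            shortage = -inv
            cost += K_e + c_e * shortage           # fixed + unit emergency shipment cost
            inv = 0                                # emergency shipment clears the backlog
        cost += h * max(inv, 0) + b * max(-inv, 0) # holding and backorder costs
        inv = S                                    # regular replenishment up to S at review
    return cost / periods

for flag in (False, True):
    avg = simulate(S=6, T=1.0, allow_emergency=flag)
    print("emergency =", flag, " avg cost/period =", round(avg, 2))
```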
Procedia PDF Downloads 141
446 Fast Bayesian Inference of Multivariate Block-Nearest Neighbor Gaussian Process (NNGP) Models for Large Data
Authors: Carlos Gonzales, Zaida Quiroz, Marcos Prates
Abstract:
Several spatial variables collected at the same location that share a common spatial distribution can be modeled simultaneously through a multivariate geostatistical model that takes into account the correlation between these variables and the spatial autocorrelation. The main goal of this model is to perform spatial prediction of these variables in the region of study. Here we focus on a geostatistical multivariate formulation that relies on sharing common spatial random effect terms. In particular, the first response variable can be modeled by a mean that incorporates a shared random spatial effect, while the other response variables depend on this shared spatial term, in addition to specific random spatial effects. Each spatial random effect is defined through a Gaussian process with a valid covariance function, but in order to improve the computational efficiency when the data are large, each Gaussian process is approximated by a Gaussian Markov random field (GMRF), specifically by the block nearest neighbor Gaussian process (Block-NNGP). This approach involves dividing the spatial domain into several dependent blocks under certain constraints, where the cross blocks allow capturing the spatial dependence on a large scale, while each individual block captures the spatial dependence on a smaller scale. The multivariate geostatistical model belongs to the class of Latent Gaussian Models; thus, to achieve fast Bayesian inference, the integrated nested Laplace approximation (INLA) method is used. The good performance of the proposed model is shown through simulations and applications to massive data.
Keywords: Block-NNGP, geostatistics, Gaussian process, GMRF, INLA, multivariate models
Procedia PDF Downloads 97
445 Scheduling Jobs with Stochastic Processing Times or Due Dates on a Server to Minimize the Number of Tardy Jobs
Authors: H. M. Soroush
Abstract:
The problem of scheduling products and services for on-time delivery is of paramount importance in today’s competitive environments. It arises in many manufacturing and service organizations where it is desirable to complete jobs (products or services) with different weights (penalties) on or before their due dates. In such environments, schedulers should frequently decide whether to schedule a job based on its processing time, due date, and the penalty for tardy delivery, to improve the system performance. For example, it is common to measure the weighted number of late jobs or the percentage of on-time shipments to evaluate the performance of a semiconductor production facility or an automobile assembly line. In this paper, we address the problem of scheduling a set of jobs on a server where the processing times or due dates of jobs are random variables and fixed weights (penalties) are imposed on the jobs’ late deliveries. The goal is to find the schedule that minimizes the expected weighted number of tardy jobs. The problem is NP-hard to solve; however, we explore three scenarios of the problem wherein: (i) both processing times and due dates are stochastic; (ii) processing times are stochastic and due dates are deterministic; and (iii) processing times are deterministic and due dates are stochastic. We prove that special cases of these scenarios are solvable optimally in polynomial time, and we introduce efficient heuristic methods for the general cases. Our computational results show that the heuristics perform well in yielding either optimal or near-optimal sequences. The results also demonstrate that the stochasticity of processing times or due dates can affect scheduling decisions. Moreover, the proposed problem is general in the sense that its special cases reduce to some new and some classical stochastic single-machine models.
Keywords: number of late jobs, scheduling, single server, stochastic
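A small sketch of how such schedules can be evaluated is given below: the expected weighted number of tardy jobs for a sequence is estimated by Monte Carlo under stochastic processing times, and two simple dispatching orders are compared; the instance data and the exponential processing-time assumption are illustrative, not from the paper.

```python
# Illustrative sketch (not the paper's algorithms): Monte Carlo evaluation of the
# expected weighted number of tardy jobs under stochastic processing times,
# comparing two simple dispatching sequences.
import numpy as np

rng = np.random.default_rng(42)
n = 8
mean_p = rng.uniform(2, 10, n)       # mean processing times
due = rng.uniform(10, 45, n)         # deterministic due dates (scenario ii)
w = rng.integers(1, 5, n)            # tardiness weights (penalties)

def expected_weighted_tardy(seq, reps=2000):
    total = 0.0
    for _ in range(reps):
        p = rng.exponential(mean_p[seq])       # sampled stochastic processing times
        completion = np.cumsum(p)
        total += np.sum(w[seq] * (completion > due[seq]))
    return total / reps

edd = np.argsort(due)                           # earliest-due-date order
wsept = np.argsort(mean_p / w)                  # weighted shortest expected processing time
print("EDD  :", round(expected_weighted_tardy(edd), 3))
print("WSEPT:", round(expected_weighted_tardy(wsept), 3))
```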
Procedia PDF Downloads 497