Search results for: Finite Volume Method (FVM)
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 21378

19398 Efficiency of Wood Vinegar Mixed with Some Plants Extract against the Housefly (Musca domestica L.)

Authors: U. Pangnakorn, S. Kanlaya

Abstract:

The efficiency of wood vinegar mixed with each of three plant extracts, namely citronella grass (Cymbopogon nardus), neem seed (Azadirachta indica A. Juss), and yam bean seed (Pachyrhizus erosus Urb.), was tested against second instar larvae of the housefly (Musca domestica L.). Steam distillation was used to extract the citronella grass, while the neem and yam bean seeds were extracted simply by fermentation with ethyl alcohol. Toxicity was evaluated in the laboratory using two larvicidal bioassay methods: the topical application method (contact poison) and the feeding method (stomach poison). Larval mortality was observed daily, and larval survival was recorded until the surviving larvae developed into pupae and adults. The study showed that the treatment of wood vinegar mixed with citronella grass gave the highest larval mortality, 50.0% by the topical application method and 80.0% by the feeding method. However, the treatment of wood vinegar mixed with neem seed gave the longest pupal duration, 25 days for the topical application method and 32 days for the feeding method. In addition, the larval duration of treated M. domestica larvae was extended to 13 days for the topical application method and 11 days for the feeding method. Thus, the feeding method was more effective than the topical application method.

Keywords: housefly (Musca domestica L.), neem seed (Azadirachta indica), citronella grass (Cymbopogon nardus), yam bean seed (Pachyrhizus erosus), mortality

Procedia PDF Downloads 330
19397 Analysis of Hard Turning Process of AISI D3-Thermal Aspects

Authors: B. Varaprasad, C. Srinivasa Rao

Abstract:

In the manufacturing sector, hard turning has emerged as a vital machining process for cutting hardened steels. Beyond its many advantages, hard turning must be implemented so as to achieve close tolerances in terms of surface finish, high product quality, reduced machining time, low operating cost, and environmentally friendly characteristics. In the present study, a three-dimensional CAE (Computer Aided Engineering) simulation of hard turning using the commercial software DEFORM 3D is compared with experimental results for stresses, temperatures, and tool forces in the machining of AISI D3 steel with mixed ceramic inserts (CC6050). Orthogonal cutting models are proposed, considering several processing parameters such as cutting speed, feed, and depth of cut. An exhaustive friction model at the tool-work interface is applied, and work material flow around the cutting edge is carefully modeled using an adaptive re-meshing simulation capability. In the process simulations, feed rate and cutting speed are held constant (0.075 mm/rev and 155 m/min), and the analysis is focused on stresses, forces, and temperatures during machining. Close agreement is observed between the CAE simulation and the experimental values.

Keywords: hard turning, computer aided engineering, computational machining, finite element method

Procedia PDF Downloads 442
19396 Probability-Based Damage Detection of Structures Using Kriging Surrogates and Enhanced Ideal Gas Molecular Movement Algorithm

Authors: M. R. Ghasemi, R. Ghiasi, H. Varaee

Abstract:

Surrogate models have received increasing attention for use in detecting damage of structures based on vibration modal parameters. However, uncertainties in the measured vibration data may lead to false or unreliable output from such a model. In this study, an efficient approach based on Monte Carlo simulation is proposed to take the effect of these uncertainties into account when developing a surrogate model. The probability of damage existence (PDE) is calculated from the probability density functions of the undamaged and damaged states. The kriging technique is chosen as the metamodeling technique because it allows the surrogate error to be genuinely quantified. An enhanced version of the ideal gas molecular movement (EIGMM) algorithm is used as the main algorithm for model updating. The developed approach is applied to detect simulated damage in numerical models of a 72-bar space truss and a 120-bar dome truss. The simulation results show that the proposed method performs well in probability-based damage detection of structures with less computational effort than the direct finite element model.
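
As a hedged illustration of the probability-of-damage-existence idea (one common formulation, not necessarily the authors' exact one), the sketch below draws Monte Carlo samples of a damage indicator for the undamaged and damaged states and estimates PDE as the probability that the damaged-state sample exceeds the undamaged one. The Gaussian indicator distributions and the parameter values are assumptions made only for this example.

```python
import numpy as np

def probability_of_damage_existence(mu_undamaged, sd_undamaged,
                                     mu_damaged, sd_damaged,
                                     n_samples=100_000, seed=0):
    """Monte Carlo estimate of P(damaged indicator > undamaged indicator).

    Gaussian indicator distributions are an illustrative assumption; in the
    paper the distributions would come from the kriging surrogate updated
    by the EIGMM algorithm.
    """
    rng = np.random.default_rng(seed)
    undamaged = rng.normal(mu_undamaged, sd_undamaged, n_samples)
    damaged = rng.normal(mu_damaged, sd_damaged, n_samples)
    return float(np.mean(damaged > undamaged))

# Example: clearly separated states give a PDE close to 1.
print(probability_of_damage_existence(0.02, 0.01, 0.10, 0.02))
```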

Keywords: probability-based damage detection (PBDD), Kriging, surrogate modeling, uncertainty quantification, artificial intelligence, enhanced ideal gas molecular movement (EIGMM)

Procedia PDF Downloads 225
19395 A Family of Second Derivative Methods for Numerical Integration of Stiff Initial Value Problems in Ordinary Differential Equations

Authors: Luke Ukpebor, C. E. Abhulimen

Abstract:

Stiff initial value problems in ordinary differential equations are problems whose typical solutions decay rapidly (exponentially), and their numerical treatment is very tedious. Conventional numerical integrators cannot cope effectively with stiff problems because they lack adequate stability characteristics. In this article, we develop a new family of four-step second derivative exponentially fitted methods of order six for the numerical integration of stiff initial value problems for general first order differential equations. In deriving the method, we break the general multi-derivative multistep method into predictor and corrector schemes that possess free parameters, which allows automatic fitting to exponential functions. The stability analysis of the method is discussed, and the method is implemented on numerical examples. The results show that the method is A-stable and competes favorably with existing methods in terms of efficiency and accuracy.
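
For context, a k-step second-derivative linear multistep method for y' = f(t, y) can be written in the standard textbook form below (this is the generic family, not the authors' specific six-order scheme):

```latex
\sum_{j=0}^{k} \alpha_j\, y_{n+j}
  \;=\; h \sum_{j=0}^{k} \beta_j\, f_{n+j}
  \;+\; h^{2} \sum_{j=0}^{k} \gamma_j\, g_{n+j},
\qquad
g_{n+j} = y''(t_{n+j}) = \left(\frac{\partial f}{\partial t} + f\,\frac{\partial f}{\partial y}\right)_{t_{n+j}} .
```

In an exponentially fitted variant, some of the coefficients α_j, β_j, γ_j are made functions of the product of the step size h and a fitting parameter, chosen so that the formula integrates solutions of the form exp(λt) exactly.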

Keywords: A-stable, exponentially fitted, four step, predictor-corrector, second derivative, stiff initial value problems

Procedia PDF Downloads 242
19394 A New Method to Reduce 5G Application Layer Payload Size

Authors: Gui Yang Wu, Bo Wang, Xin Wang

Abstract:

Nowadays, the 5G service-based interface architecture uses text-based payloads such as JSON to transfer business data between network functions, which has clear advantages for internet-style services but generates unnecessarily large traffic. In this paper, a new method for reducing 5G application payload size is presented. It provides a mechanism for network functions to negotiate the new capability when communication starts up and defines how 5G application data are reduced according to the information negotiated with the peer network function. Without losing the advantages of 5G text-based payloads, this method achieves an excellent reduction in application payload size and does not increase computing resource usage. Implementing the method does not impact any standards or specifications, nor does it change any encoding or decoding functionality. In a real 5G network, this method will contribute to network efficiency and ultimately save considerable computing resources.
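
The abstract does not disclose the encoding itself; purely as a hypothetical illustration of negotiated payload reduction, the sketch below shows two network functions agreeing on a key dictionary at session setup and then exchanging JSON with shortened keys. All field names and the dictionary mechanism are invented for this example and are not the paper's actual method.

```python
import json

# Hypothetical key dictionary negotiated once when communication starts up.
NEGOTIATED_KEYS = {"subscriberIdentity": "s", "servingNetworkName": "n",
                   "requestedSliceInfo": "r"}
REVERSE_KEYS = {v: k for k, v in NEGOTIATED_KEYS.items()}

def shrink(payload: dict) -> str:
    """Replace verbose top-level keys by negotiated short aliases before sending."""
    return json.dumps({NEGOTIATED_KEYS.get(k, k): v for k, v in payload.items()},
                      separators=(",", ":"))

def expand(wire: str) -> dict:
    """Restore the original keys on the receiving network function."""
    return {REVERSE_KEYS.get(k, k): v for k, v in json.loads(wire).items()}

msg = {"subscriberIdentity": "imsi-001010123456789",
       "servingNetworkName": "5G:mnc001.mcc001",
       "requestedSliceInfo": {"sst": 1}}
wire = shrink(msg)
print(len(json.dumps(msg)), "->", len(wire))   # payload bytes before/after
assert expand(wire) == msg                     # lossless round trip
```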

Keywords: 5G, JSON, payload size, service-based interface

Procedia PDF Downloads 160
19393 Critical Conditions for the Initiation of Dynamic Recrystallization Prediction: Analytical and Finite Element Modeling

Authors: Pierre Tize Mha, Mohammad Jahazi, Amèvi Togne, Olivier Pantalé

Abstract:

Large forged blocks made of medium-carbon high-strength steels are extensively used in the automotive industry as dies for the production of bumpers and dashboards through the plastic injection process. Manufacturing of these large blocks starts with ingot casting, followed by open die forging and a quench-and-temper heat treatment to achieve the desired mechanical properties, and numerical simulation is widely used to predict these properties before the experiment. However, the temperature gradient inside the specimen remains challenging: the temperature in the material before loading is not uniform, yet simulations commonly impose a constant temperature on the assumption that it has homogenized after some holding time. To stay close to the experiment, the real temperature distribution through the specimen is therefore needed before mechanical loading. We present here a robust algorithm that computes the temperature gradient within the specimen and thus provides a realistic temperature distribution before deformation; most numerical simulations instead assume a uniform temperature, which is not the case because the surface and core temperatures of the specimen differ. Another feature that influences the mechanical properties is recrystallization, which depends strongly on the deformation conditions and on the type of deformation, such as upsetting or cogging. Upsetting and cogging are the stages where the largest deformations occur, and many microstructural phenomena, such as recrystallization, can be observed there and require in-depth characterization. Complete dynamic recrystallization plays an important role in the final grain size and therefore helps improve the mechanical properties of the final product, so identifying the conditions for the initiation of dynamic recrystallization remains relevant. The temperature distribution within the sample and the strain rate also influence recrystallization initiation, which makes developing a technique to predict this initiation challenging. In this perspective, in addition to the algorithm providing the temperature distribution before the loading stage, we propose an analytical model for determining the initiation of this recrystallization. These two techniques are implemented in the Abaqus finite element software via the UAMP and VUHARD subroutines, for comparison with a simulation where an isothermal temperature is imposed. An Artificial Neural Network (ANN) model describing the plastic behavior of the material is also implemented via the VUHARD subroutine. The simulations properly predict the temperature distribution inside the material and the initiation of recrystallization, and the results are compared with models from the literature.
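
The abstract does not state the analytical criterion itself; as background only (a widely used condition, not necessarily the authors' model), the Poliak-Jonas criterion locates the onset of dynamic recrystallization at an inflection of the strain-hardening rate θ = dσ/dε viewed as a function of stress:

```latex
\theta = \frac{d\sigma}{d\varepsilon},
\qquad
\frac{\partial}{\partial \sigma}\!\left(-\frac{\partial \theta}{\partial \sigma}\right) = 0
\quad \text{at } \sigma = \sigma_c ,
```

so the critical stress σ_c (and the corresponding critical strain ε_c) marks the initiation of dynamic recrystallization for a given temperature and strain rate.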

Keywords: dynamic recrystallization, finite element modeling, artificial neural network, numerical implementation

Procedia PDF Downloads 71
19392 Determination of Starting Design Parameters for Reactive-Dividing Wall Distillation Column Simulation Using a Modified Shortcut Design Method

Authors: Anthony P. Anies, Jose C. Muñoz

Abstract:

A new shortcut method for the design of reactive dividing-wall columns (RDWC) is proposed in this work. The RDWC is decomposed into its thermodynamically equivalent configuration, namely the Petlyuk column, which consists of a reactive prefractionator and an unreactive main fractionator. A modified FUGK (Fenske-Underwood-Gilliland-Kirkbride) shortcut distillation method, which incorporates the effect of reaction in the Underwood equations and the Gilliland correlation, is used to design the reactive prefractionator, while the conventional FUGK shortcut method is used to design the unreactive main fractionator. The shortcut method is applied to the synthesis of dimethyl ether (DME) through the liquid-phase dehydration of methanol, and the results were used as the starting design inputs for rigorous simulation in Aspen Plus V8.8. Mole purities of 99% DME in the distillate stream, 99% methanol in the side draw stream, and 99% water in the bottoms stream were obtained in the simulation, making the proposed shortcut method applicable for the preliminary design of RDWCs.
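
For reference, the Fenske step of the FUGK sequence estimates the minimum number of theoretical stages from the light-key (LK) and heavy-key (HK) splits between distillate (D) and bottoms (B); the Underwood and Gilliland steps then provide the minimum reflux and the actual stage count:

```latex
N_{\min} \;=\;
\frac{\ln\!\left[\left(\dfrac{x_{D,LK}}{x_{D,HK}}\right)\!\left(\dfrac{x_{B,HK}}{x_{B,LK}}\right)\right]}
     {\ln \alpha_{LK,HK}} ,
```

where α_LK,HK is the average relative volatility of the key components.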

Keywords: aspen plus, dimethyl ether, petlyuk column, reactive-dividing wall column, shortcut method, FUGK

Procedia PDF Downloads 174
19391 Parameter Estimation for the Mixture of Generalized Gamma Model

Authors: Wikanda Phaphan

Abstract:

The mixture generalized gamma distribution is a combination of two distributions: the generalized gamma distribution and the length-biased generalized gamma distribution, both presented by Suksaengrakcharoen and Bodhisuwan in 2014. Its probability density function (pdf) is fairly complex, which creates problems in parameter estimation: the estimators cannot be calculated in closed form, so numerical estimation must be used instead. In this study, we present parameter estimation by the expectation-maximization (EM) algorithm, the conjugate gradient method, and the quasi-Newton method. Data were generated by the acceptance-rejection method and used to estimate α, β, λ, and p, where λ is the scale parameter, p is the weight parameter, and α and β are the shape parameters. The Monte Carlo technique is used to assess estimator performance, with sample sizes of 10, 30, and 100 and 20 replications in each case. The effectiveness of the estimators is evaluated through their mean squared errors and bias. The findings reveal that the EM algorithm gives estimates close to the actual values, while the maximum likelihood estimators obtained via the conjugate gradient and quasi-Newton methods are less precise than those obtained via the EM algorithm.
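
For clarity, the mixture density targeted by the EM algorithm has the generic two-component form below, where the length-biased component is the size-weighted version of the generalized gamma density (written schematically; the exact parameterization follows Suksaengrakcharoen and Bodhisuwan, 2014):

```latex
f(x;\alpha,\beta,\lambda,p)
  \;=\; p\, f_{GG}(x;\alpha,\beta,\lambda)
  \;+\; (1-p)\,\frac{x\, f_{GG}(x;\alpha,\beta,\lambda)}{E[X]},
\qquad 0 \le p \le 1,
```

where E[X] is the mean of the generalized gamma component, so that the length-biased term integrates to one.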

Keywords: conjugate gradient method, quasi-Newton method, EM-algorithm, generalized gamma distribution, length biased generalized gamma distribution, maximum likelihood method

Procedia PDF Downloads 209
19390 Application of Optical Method Based on Laser Devise as Non-Destructive Testing for Calculus of Mechanical Deformation

Authors: R. Daïra, V. Chalvidan

Abstract:

We present the speckle interferometry method for determining the deformation of a piece. This holographic imaging method uses a CCD camera for simultaneous digital recording of two states, object and reference, and the reconstruction is obtained numerically. The method has the advantage of being simpler than the methods currently available, and it does not suffer from the faults of an in-line holographic configuration. Furthermore, it is entirely digital and avoids heavy analysis after recording the hologram. This work was carried out in the HOLO 3 laboratory (optical metrology laboratory in Saint-Louis, France) and consists of controlling, qualitatively and quantitatively, the deformation of an object by using a CCD camera connected to a computer equipped with fringe analysis software.

Keywords: speckle, nondestructive testing, interferometry, image processing

Procedia PDF Downloads 484
19389 An Improved Prediction Model of Ozone Concentration Time Series Based on Chaotic Approach

Authors: Nor Zila Abd Hamid, Mohd Salmi M. Noorani

Abstract:

This study focuses on the development of prediction models for ozone concentration time series, built using a chaotic approach. First, the chaotic nature of the time series is detected by means of the phase space plot and the Cao method. Then, the prediction model is built, and the local linear approximation method is used for forecasting. A traditional autoregressive linear prediction model is also built, and an improvement to the local linear approximation method is proposed. The prediction models are applied to the hourly ozone time series observed at a benchmark station in Malaysia. Comparison of all models through the mean absolute error, root mean squared error, and correlation coefficient shows that the model with the improved prediction method is the best. Thus, the chaotic approach is a good approach for developing a prediction model for ozone concentration time series.
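
As a hedged sketch of the local linear approximation step (a generic nearest-neighbour formulation, not the authors' exact embedding dimensions or tuning), the code below reconstructs the phase space by delay embedding, finds the nearest neighbours of the current state, and fits a local linear map to predict the next value of the series:

```python
import numpy as np

def delay_embed(x, dim, tau=1):
    """Delay-coordinate embedding of a scalar time series."""
    n = len(x) - (dim - 1) * tau
    return np.column_stack([x[i * tau: i * tau + n] for i in range(dim)])

def local_linear_forecast(x, dim=3, tau=1, k=10):
    """One-step-ahead forecast via a local linear model on k nearest neighbours."""
    X = delay_embed(np.asarray(x, float), dim, tau)
    states, targets = X[:-1], X[1:, -1]        # next value of the last coordinate
    query = X[-1]
    idx = np.argsort(np.linalg.norm(states - query, axis=1))[:k]
    A = np.column_stack([states[idx], np.ones(k)])   # affine local model
    coef, *_ = np.linalg.lstsq(A, targets[idx], rcond=None)
    return float(np.append(query, 1.0) @ coef)

# Example with a synthetic hourly-like series (illustrative only, not ozone data).
t = np.arange(2000)
series = 40 + 10 * np.sin(2 * np.pi * t / 24) + np.random.default_rng(1).normal(0, 1, t.size)
print(local_linear_forecast(series))
```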

Keywords: chaotic approach, phase space, Cao method, local linear approximation method

Procedia PDF Downloads 316
19388 Tumor Detection of Cerebral MRI by Multifractal Analysis

Authors: S. Oudjemia, F. Alim, S. Seddiki

Abstract:

This paper shows how multifractal analysis can provide additional help in cancer diagnosis. Medical image processing is an important discipline in which many existing methods seek solutions to real problems in medicine. In this work, we present results of multifractal analysis of brain MRI images, with the purpose of separating healthy from cancerous brain tissue. A nonlinear method based on the multifractal detrending moving average (MFDMA), a generalization of detrended fluctuation analysis (DFA), is used to detect abnormalities in these images. The proposed method successfully separated the two types of brain tissue. The choice of this nonlinear method is motivated by the complexity and irregularity of tumor tissue, which linear and classical nonlinear methods appear unable to characterize completely. To show the performance of the method, we compared its results with those of the conventional box-counting method.

Keywords: irregularity, nonlinearity, MRI brain images, multifractal analysis, brain tumor

Procedia PDF Downloads 430
19387 Additive Weibull Model Using Warranty Claim and Finite Element Analysis Fatigue Analysis

Authors: Kanchan Mondal, Dasharath Koulage, Dattatray Manerikar, Asmita Ghate

Abstract:

This paper presents an additive reliability model that uses both warranty data and Finite Element Analysis (FEA) data. Warranty data for any product give insight into its underlying issues and are often used by reliability engineers to build prediction models for forecasting part failure rates. However, there is one major limitation in using warranty data alone: warranty periods constitute only a small fraction of a product's total lifetime, usually covering only the infant-mortality and useful-life zones of the bathtub curve, so predictions based on warranty data alone generally do not reach the desired accuracy. The failure rate of a mechanical part is driven by random issues initially and by wear-out or usage-related issues at later stages of the lifetime, so for better predictability one needs to explore the failure rate behavior in the wear-out zone of the bathtub curve. Due to cost and time constraints, it is not always possible to test samples to failure, but FEA fatigue analysis can provide the failure rate behavior of a part well beyond the warranty period, more quickly and at lower cost. In this work, the authors propose an Additive Weibull Model that uses both warranty and FEA fatigue analysis data for predicting failure rates. It involves modeling two data sets for a part, one with existing warranty claims and the other with fatigue life data. Hazard-rate-based Weibull estimation is used to model the warranty data, whereas S-N curve-based Weibull parameter estimation is used for the FEA data. The two sets of Weibull parameters are estimated separately and combined to form the proposed Additive Weibull Model for prediction.
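
For reference, the additive Weibull idea sums the two estimated hazard contributions; in a common two-parameter form (the symbols here are generic, not necessarily the authors' notation) the total hazard and reliability are

```latex
h(t) \;=\; \frac{\beta_1}{\eta_1}\!\left(\frac{t}{\eta_1}\right)^{\beta_1 - 1}
       + \frac{\beta_2}{\eta_2}\!\left(\frac{t}{\eta_2}\right)^{\beta_2 - 1},
\qquad
R(t) \;=\; \exp\!\left[-\left(\frac{t}{\eta_1}\right)^{\beta_1}
                       - \left(\frac{t}{\eta_2}\right)^{\beta_2}\right],
```

where one (β, η) pair is estimated from the warranty claims and the other from the FEA fatigue (S-N) data.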

Keywords: bathtub curve, fatigue, FEA, reliability, warranty, Weibull

Procedia PDF Downloads 58
19386 Joule Self-Heating Effects and Controlling Oxygen Vacancy in La₀.₈Ba₀.₂MnO₃ Ultrathin Films with Nano-Sized Labyrinth Morphology

Authors: Guankai Lin, Wei Tong, Hong Zhu

Abstract:

Electric-current-induced Joule heating effects have been investigated in La₀.₈Ba₀.₂MnO₃ ultrathin films deposited by the sol-gel method on LaAlO₃(001) single crystal substrates, which have a smaller lattice constant. When moderate bias currents (~10 mA) are applied, Joule self-heating simply produces a temperature deviation between the thermostat and the test sample, while the intrinsic ρ(T) relationship measured at a low current (0.1 mA) changes little. However, the low-temperature transport behavior degrades from a metallic to an insulating state after higher bias currents (> 31 mA) are applied in a vacuum, and metallic transport can be recovered by placing the degraded film in air. The results clearly suggest that the oxygen vacancy content of the La₀.₈Ba₀.₂MnO₃ films can be controlled in different atmospheres, particularly with the aid of Joule self-heating. Based on the SEM images, we attribute the controllable oxygen vacancy content to the nano-sized labyrinth pattern of the films, where the large surface-to-volume ratio plays a crucial role.

Keywords: controlling oxygen vacancy, joule self-heating, manganite, sol-gel method

Procedia PDF Downloads 143
19385 Deep Learning Based 6D Pose Estimation for Bin-Picking Using 3D Point Clouds

Authors: Hesheng Wang, Haoyu Wang, Chungang Zhuang

Abstract:

Estimating the 6D pose of objects is a core step in robot bin-picking tasks. In real applications, however, various objects are usually randomly stacked with heavy occlusion. In this work, we propose a method that regresses 6D poses by predicting three points for each object in the 3D point cloud through deep learning. To resolve the ambiguity of symmetric poses, we propose a labeling method that helps the network converge better. Based on the predicted pose, an iterative method is employed for pose optimization. In real-world experiments, our method outperforms the classical approach in both precision and recall.

Keywords: pose estimation, deep learning, point cloud, bin-picking, 3D computer vision

Procedia PDF Downloads 147
19384 A Simple and Empirical Refraction Correction Method for UAV-Based Shallow-Water Photogrammetry

Authors: I GD Yudha Partama, A. Kanno, Y. Akamatsu, R. Inui, M. Goto, M. Sekine

Abstract:

Aerial photogrammetry of shallow-water bottoms has the potential to be an efficient high-resolution survey technique for shallow-water topography, thanks to the advent of convenient UAVs and automatic image processing techniques (Structure-from-Motion (SfM) and Multi-View Stereo (MVS)). However, it suffers from systematic overestimation of the bottom elevation due to light refraction at the air-water interface. In this study, we present an empirical method, using common software, to correct for the effect of refraction after the usual SfM-MVS processing. The method uses the empirical relation between the measured true depth and the estimated apparent depth to generate an empirical correction factor, which is then used to convert the apparent water depth into a refraction-corrected (real-scale) water depth. To examine its effectiveness, we applied the method to two river sites and compared the RMS errors in the corrected bottom elevations with those obtained by three existing methods. The results show that the presented method is more effective than two of the existing methods: the method that applies no correction factor and the method that uses the refractive index of water (1.34) as the correction factor. Compared with the remaining existing method, which adds an offset term after calculating the correction factor, the presented method performs better at Site 2 and worse at Site 1. However, we found this linear regression method to be unstable when the training data used for calibration are limited, and, according to our numerical experiment, it suffers from a large negative bias in the correction factor when the estimated apparent water depth is affected by noise. Overall, the accuracy of the refraction correction depends on various factors such as location, image acquisition, and GPS measurement conditions, and the most effective method can be selected by statistical selection (e.g., leave-one-out cross-validation).
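
A minimal sketch of the empirical correction step described above, assuming a simple proportional relation between measured true depth and SfM-MVS apparent depth (the offset variant mentioned in the abstract is omitted, and the calibration numbers are invented for illustration):

```python
import numpy as np

def fit_correction_factor(true_depth, apparent_depth):
    """Least-squares slope through the origin: true ≈ factor * apparent."""
    apparent = np.asarray(apparent_depth, float)
    true = np.asarray(true_depth, float)
    return float(apparent @ true / (apparent @ apparent))

def correct_depths(apparent_depth, factor):
    """Convert apparent (refraction-biased) depths into corrected depths."""
    return factor * np.asarray(apparent_depth, float)

# Illustrative calibration points (m); real values would come from field surveys.
apparent = np.array([0.30, 0.55, 0.80, 1.10, 1.40])
true = np.array([0.41, 0.74, 1.07, 1.49, 1.86])
c = fit_correction_factor(true, apparent)          # close to the refractive index 1.34
print(round(c, 3), correct_depths([0.6, 0.9], c))
```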

Keywords: bottom elevation, MVS, river, SfM

Procedia PDF Downloads 292
19383 Spectrophotometric Determination of Phenylephrine Hydrochloride by Coupling with Diazotized 2,4-Dinitroaniline

Authors: Sulaiman Gafar Muhamad

Abstract:

A rapid spectrophotometric method for the micro-determination of phenylephrine-HCl (PHE) has been developed. The proposed method involves the coupling of phenylephrine-HCl with diazotized 2,4-dinitroaniline in alkaline medium at λmax = 455 nm. Under the optimum conditions, Beer's law was obeyed in the range of 1.0-20 μg/ml of PHE with a molar absorptivity of 1.915 × 10⁴ l·mol⁻¹·cm⁻¹, a relative error of 0.015, and a relative standard deviation of 0.024%. The method has been applied successfully to the estimation of phenylephrine-HCl in pharmaceutical preparations (nose drops and syrup).
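
The quantitation rests on the Beer-Lambert law, with the reported molar absorptivity acting as the slope of the calibration line:

```latex
A \;=\; \varepsilon\, l\, c,
\qquad
\varepsilon = 1.915 \times 10^{4}\ \mathrm{l\,mol^{-1}\,cm^{-1}},
```

so that for a 1 cm cell the PHE concentration follows directly from the absorbance measured at 455 nm as c = A/(εl).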

Keywords: diazo-coupling, 2, 4-dinitroaniline, phenylephrine-HCl, spectrophotometry

Procedia PDF Downloads 245
19382 Rational Probabilistic Method for Calculating Thermal Cracking Risk of Mass Concrete Structures

Authors: Naoyuki Sugihashi, Toshiharu Kishi

Abstract:

In Japan, the probability of thermal cracking in mass concrete is evaluated using a cracking probability diagram that represents the relationship between the thermal cracking index and the probability of crack occurrence in the actual structure. In this paper, we propose a method to calculate the cracking probability directly, following probabilistic theory, by modeling the variance of the tensile stress and the tensile strength. The relationships between these variances, the thermal cracking index, and the cracking probability are formulated and presented. In addition, the standard deviations of tensile stress and tensile strength are identified, and a method for calculating the cracking probability in a typical controlled construction environment is demonstrated.
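
As a hedged illustration of the probabilistic formulation (assuming, for this sketch only, independent normally distributed tensile stress σ and tensile strength f), the cracking probability can be written in terms of their means and standard deviations, with the thermal cracking index appearing as the ratio of the means:

```latex
I_{cr} = \frac{\mu_f}{\mu_\sigma},
\qquad
P_{crack} = P(f < \sigma)
          = \Phi\!\left(\frac{\mu_\sigma - \mu_f}{\sqrt{\sigma_f^{2} + \sigma_\sigma^{2}}}\right),
```

where Φ is the standard normal cumulative distribution function; the paper's formulation models the variances explicitly rather than assuming a particular distribution shape.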

Keywords: thermal crack control, mass concrete, thermal cracking probability, durability of concrete, calculating method of cracking probability

Procedia PDF Downloads 325
19381 A New Family of Integration Methods for Nonlinear Dynamic Analysis

Authors: Shuenn-Yih Chang, Chiu-LI Huang, Ngoc-Cuong Tran

Abstract:

A new family of structure-dependent integration methods, whose coefficients in the difference equation for the displacement increment are functions of the initial structural properties and the time step, is proposed in this work. This family of methods simultaneously combines controllable numerical dissipation, an explicit formulation, and unconditional stability. In general, its numerical dissipation can be continuously controlled by a parameter, and zero damping is possible. In addition, it can provide high-frequency damping to suppress, or even remove, the spurious oscillations of high-frequency modes, whereas the low-frequency modes are integrated very accurately because the damping for these modes is almost zero. It is shown herein that the proposed family can have exactly the same numerical properties as the HHT-α method for linear elastic systems, while still preserving the most important property of a structure-dependent integration method, namely an explicit formulation at each time step. Consequently, it can save huge computational effort in solving inertial problems compared to the HHT-α method. In fact, numerical experiments reveal that the CPU time consumed by the proposed family is only about 1.6% of that consumed by the HHT-α method for a 125-DOF system, and this falls to 0.16% for a 1000-DOF system. Clearly, the saving in computational effort is very significant.

Keywords: structure-dependent integration method, nonlinear dynamic analysis, unconditional stability, numerical dissipation, accuracy

Procedia PDF Downloads 626
19380 On Periodic Integer-Valued Moving Average Models

Authors: Aries Nawel, Bentarzi Mohamed

Abstract:

This paper studies some probabilistic and statistical properties of a periodic integer-valued moving average model (PINMA_{S}(q)). Closed forms of the mean, the second moment, and the periodic autocovariance function are obtained, and the time reversibility of the model is discussed in detail. Moreover, the underlying parameters are estimated by the Yule-Walker method, the conditional least squares method (CLS), and the weighted conditional least squares method (WCLS). A simulation study is carried out to evaluate the performance of the estimation methods, and an application to a real data set is provided.

Keywords: periodic integer-valued moving average, periodically correlated process, time reversibility, count data

Procedia PDF Downloads 179
19379 A Physical Treatment Method as a Prevention Method for Barium Sulfate Scaling

Authors: M. A. Salman, G. Al-Nuwaibit, M. Safar, M. Rughaibi, A. Al-Mesri

Abstract:

Barium sulfate (BaSO₄) is a hard scale that usually precipitates on equipment surfaces in many industrial systems, such as oil and gas production, desalination, and cooling and boiler operation. It is extremely resistant to both chemical and mechanical cleaning, which makes BaSO₄ a problematic and expensive scale. Although barium ions are present in most natural waters at very low concentrations, as low as 0.008 mg/l, they can cause scaling problems in the presence of a high sulfate ion concentration or when mixing with incompatible waters, as in oil produced water. The BaSO₄ scaling potential of the seawater at the intakes of seven desalination plants in Kuwait, of brine water, and of Kuwait oil produced water was calculated and compared, and the best location with regard to barium sulfate scaling was reported. Finally, a physical treatment method (magnetic treatment) and a chemical treatment method were used to control BaSO₄ scaling in saturated solutions at different operating temperatures, flow velocities, feed pHs, and magnetic field strengths. The results of the two methods are discussed, and the more economical one with reasonable performance, the physical treatment method, is recommended.
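
Scaling potential of this kind is commonly quantified by the saturation index of BaSO₄ (given here as standard background, not necessarily the exact index used by the authors):

```latex
SI \;=\; \log_{10}\frac{IAP}{K_{sp}}
   \;=\; \log_{10}\frac{a_{\mathrm{Ba^{2+}}}\, a_{\mathrm{SO_4^{2-}}}}{K_{sp,\mathrm{BaSO_4}}},
```

where IAP is the ion activity product; SI > 0 indicates a supersaturated water that can precipitate barite.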

Keywords: magnetic field strength, flow velocity, retention time, barium sulfate

Procedia PDF Downloads 260
19378 The Influence of Steel Connection on Fire Resistance of Composite Steel-Framed Buildings

Authors: Mohammed Kadhim, Zhaohui Huang

Abstract:

Steel connections can play an important role in enhancing the robustness of structures under fire conditions; it is therefore important to examine their influence on the fire resistance of composite steel-framed buildings. In this paper, both the behavior of steel connections and their influence on a composite steel frame are analyzed at ambient and elevated temperatures using the non-linear finite element software VULCAN. The chosen frame is subjected to the ISO834 fire. End plate connections, pinned connections, and rigid connections are compared. By applying different compartment fires, several cases are studied to show the behavior of the steel connections when the fire acts on particular beams. In addition, different plate thicknesses and different applied loads are analyzed to examine the behavior of the chosen connections under the ISO834 fire. The analytical results show that beams with extended end plate connections are stronger and perform better in terms of axial forces than beams with flush end plate connections, and that the extended end plate connection has the highest limiting temperature compared to the flush end plate connection. The performance of end plate connections is found to be very close to that of rigid connections and very far from that of pinned connections, while the plate thickness has less effect on the influence of the steel connection on fire resistance. In conclusion, the behavior of composite steel-framed buildings depends strongly on the steel connections because of their high impact under fire conditions. It is recommended to consider the extended end plate in design because of its superior properties compared to the flush end plate connection. Finally, this paper shows that the steel connection has an important effect on the fire resistance of composite steel-framed buildings.

Keywords: composite steel-framed buildings, connection behavior, end-plate connections, finite element modeling, fire resistance

Procedia PDF Downloads 147
19377 Heat and Mass Transfer Modelling of Industrial Sludge Drying at Different Pressures and Temperatures

Authors: L. Al Ahmad, C. Latrille, D. Hainos, D. Blanc, M. Clausse

Abstract:

A two-dimensional axisymmetric finite volume model is developed to predict the simultaneous heat and mass transfer during the drying of industrial sludge. The simulations were run using COMSOL Multiphysics 3.5a, with input parameters acquired from preliminary experimental work. The results allow correlations to be established describing the evolution of the various parameters as a function of the drying temperature and the sludge water content. The selection and coupling of the equations are validated against drying kinetics acquired experimentally over a temperature range of 45-65 °C and an absolute pressure range of 200-1000 mbar. The model, which incorporates the heat and mass transfer mechanisms at the different operating conditions, yields simulated values of temperature and water content. The simulated results agree with the experimental values only in the first and last drying stages, where sludge shrinkage is insignificant. Both simulated and experimental results show that sludge drying is favored at high temperature and low pressure: as observed experimentally, the drying time is reduced by 68% for drying at 65 °C compared to 45 °C under 1 atm, and at 65 °C a 200-mbar absolute pressure vacuum leads to an additional reduction in drying time estimated at 61%. However, the drying rate is underestimated in the intermediate stage; this could be improved in the model by considering the shrinkage phenomena that occur during sludge drying.

Keywords: industrial sludge drying, heat transfer, mass transfer, mathematical modelling

Procedia PDF Downloads 114
19376 Estimation and Validation of Free Lime Analysis of Clinker by Quantitative Phase Analysis Using X-Ray Diffraction

Authors: Suresh Palla, Kalpna Sharma, Gaurav Bhatnagar, S. K. Chaturvedi, B. N. Mohapatra

Abstract:

Determining the free lime content is especially important for judging the reactivity of the raw materials and the quality of the clinker. The free lime limit is not the same for all cements; it depends on several factors, especially the temperature reached during burning and the grain size distribution of the cement after grinding. Estimation of free lime by the conventional method is influenced by the presence of portlandite and can misrepresent the actual free lime content of the clinker during quality checks. To verify whether the product quality falls within the standard specification limits, a reliable, precise, and highly reproducible way to quantify the relative phase abundances in Portland cement clinker and Portland cements is to use X-ray diffraction (XRD) in combination with the Rietveld method. In the present study, a methodology using XRD is proposed to validate the free lime results obtained by the conventional method. The XRD and TG/DTA results confirm the presence of portlandite in the clinker, supporting the decision on the free lime results obtained by the conventional method.

Keywords: free lime, quantitative phase analysis, conventional method, x ray diffraction

Procedia PDF Downloads 121
19375 Fabrication of Hollow Germanium Spheres by Dropping Method

Authors: Kunal D. Bhagat, Truong V. Vu, John C. Wells, Hideyuki Takakura, Yu Kawano, Fumio Ogawa

Abstract:

Hollow germanium alloy quasi-spheres 1 to 2 mm in diameter with relatively smooth inner and outer surfaces have been produced. The germanium was first melted at around 1273 K and then exuded from a coaxial nozzle into an inert atmosphere, with argon gas supplied to the inner nozzle. The falling spheres were cooled by a water spray and collected in a bucket. The spheres had horn-like structures on the outer surface, which might be caused by volume expansion induced by the density difference between the solid and gas phases. The frequency of sphere formation was determined from the videos to be about 133 Hz. The outer diameter varied in the range of 1.3 to 1.8 mm, with a wall thickness in the range of 0.2 to 0.5 mm. Solid silicon spheres are used for spherical silicon solar cells (S₃CS), which have various attractive features. Hollow S₃CS promise substantially higher energy conversion efficiency if their wall thickness can be kept to 0.1-0.2 mm and the inner surface can be passivated. Our production of hollow germanium spheres is a significant step towards the production of hollow S₃CS with, we hope, higher efficiency and lower material cost than solid S₃CS.

Keywords: hollow spheres, semiconductor, compound jet, dropping method

Procedia PDF Downloads 193
19374 Using Derivative Free Method to Improve the Error Estimation of Numerical Quadrature

Authors: Chin-Yun Chen

Abstract:

Numerical integration is an essential tool for deriving different physical quantities in engineering and science. The effectiveness of a numerical integrator depends on different factors, of which the crucial one is the error estimation. This work presents an error estimator that incorporates a derivative free method to improve the performance of verified numerical quadrature.

Keywords: numerical quadrature, error estimation, derivative free method, interval computation

Procedia PDF Downloads 448
19373 Model of Cosserat Continuum Dispersion in a Half-Space with a Scatterer

Authors: Francisco Velez, Juan David Gomez

Abstract:

Dispersion effects on the scattering from a semicircular canyon in a micropolar continuum are analyzed using a computational finite element scheme. The presence of microrotational waves and dispersive SV waves affects the propagation of elastic waves. A contrast with the classical model is presented, and the dependence on the micropolar parameters is studied.

Keywords: scattering, semicircular canyon, wave dispersion, micropolar medium, FEM modeling

Procedia PDF Downloads 532
19372 Development of Vertically Integrated 2D Lake Victoria Flow Models in COMSOL Multiphysics

Authors: Seema Paul, Jesper Oppelstrup, Roger Thunvik, Vladimir Cvetkovic

Abstract:

Lake Victoria is the second largest freshwater body in the world, located in East Africa with a catchment area of 250,000 km², of which 68,800 km² is the actual lake surface. The hydrodynamic processes of this shallow (40-80 m deep) water system are unique due to its location at the equator, which makes Coriolis effects weak. This paper describes a St. Venant shallow water model of Lake Victoria developed in COMSOL Multiphysics, a general purpose finite element tool for solving partial differential equations. Depth soundings taken in smaller parts of the lake were combined with more recent, more extensive data to resolve discrepancies in the lake shore coordinates. The topography model must have continuous gradients, and Delaunay triangulation with Gaussian smoothing was used to produce the lake depth model. The model shows large-scale flow patterns, passive tracer concentrations, and water level variations in response to river and tracer inflow, rain and evaporation, and wind stress. Actual data on precipitation, evaporation, and in- and outflows were applied in a fifty-year simulation. The water balance is dominated by rain and evaporation, and the model simulations are validated with Matlab and COMSOL. The model conserves water volume, the celerity gradients are very small, and the volume flow is very slow and irrotational except at river mouths. Numerical experiments show that the single outflow can be modelled, except for a few instances, by a simple linear control law responding only to the mean water level. Experiments with tracer input in rivers show very slow dispersion of the tracer, a result of the slow mean velocities, which are in turn caused by the near-balance of rain with evaporation. The model can evaluate the effects of the wind stress exerted on the lake surface, which impacts the lake water level, as well as the effects of expected climate change as manifested in changes to future rainfall over the catchment area of Lake Victoria.
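
For reference, a vertically integrated St. Venant (shallow water) system of the kind solved in such a model has the generic form below (source terms written schematically, Coriolis terms omitted as weak at the equator; the exact terms and coefficients follow the COMSOL implementation described above):

```latex
\frac{\partial h}{\partial t} + \nabla\cdot(h\mathbf{u}) = R - E + q_{in} - q_{out},
\qquad
\frac{\partial (h\mathbf{u})}{\partial t} + \nabla\cdot(h\mathbf{u}\otimes\mathbf{u})
  = -\,g\,h\,\nabla \eta + \frac{\boldsymbol{\tau}_{wind} - \boldsymbol{\tau}_{bed}}{\rho},
```

where h is the water depth, u the depth-averaged velocity, η the free-surface elevation, R and E the rain and evaporation rates, q_in and q_out the river in- and outflows, and τ_wind and τ_bed the wind and bed stresses.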

Keywords: bathymetry, lake flow and steady state analysis, water level validation and concentration, wind stress

Procedia PDF Downloads 211
19371 Sound Performance of a Composite Acoustic Coating With Embedded Parallel Plates Under Hydrostatic Pressure

Authors: Bo Hu, Shibo Wang, Haoyang Zhang, Jie Shi

Abstract:

With the development of sonar detection technology, the acoustic stealth technology of underwater vehicles is facing severe challenges, and underwater acoustic coatings are being developed towards low-frequency absorption capability and broad absorption bandwidth. In this paper, an acoustic model of an underwater coating made of a composite material embedded with a periodic steel structure is presented. The model has multiple high absorption peaks in the frequency range of 1 kHz-8 kHz, achieving both high sound absorption and broad bandwidth. The frequencies of the absorption peaks are found to be related to the classic half-wavelength transmission principle. The sound absorption performance of the model is investigated by the finite element method using COMSOL software, and the absorption mechanism is explained by the distributions of the displacement vector field. The influence of the geometric parameters of the periodic steel structure, including thickness and spacing, on the sound absorption of the proposed model is further discussed. The acoustic model proposed in this study provides an idea for the design of underwater low-frequency broadband acoustic coatings, and the results show its possibility and feasibility for practical underwater application.
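
The half-wavelength relation mentioned above links the absorption-peak frequencies to the effective sound speed c and the relevant layer thickness L (symbols generic, not tied to the paper's specific geometry):

```latex
f_n = \frac{n\,c}{2L}, \qquad n = 1, 2, 3, \ldots
```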

Keywords: acoustic coating, composite material, broad frequency bandwidth, sound absorption performance

Procedia PDF Downloads 157
19370 Simulation of Bird Strike on Airplane Wings by Using SPH Methodology

Authors: Tuğçe Kiper Elibol, İbrahim Uslan, Mehmet Ali Guler, Murat Buyuk, Uğur Yolum

Abstract:

According to an FAA report, 142,603 bird strikes were reported over a period of 24 years, between 1990 and 2013. Bird strikes on aerospace structures not only threaten flight safety but also cause financial loss and endanger lives. The statistics show that most bird strikes involve the nose and the leading edge of the wings, and a substantial number of strikes are ingested by the jet engines, causing damage to blades and the engine body. Crash-proof designs are required to overcome the possibility of catastrophic failure of the airplane. Using computational methods for bird strike analysis during the product development phase is of considerable importance in terms of cost saving: using simulation techniques to reduce the number of reference tests can dramatically affect the total cost of an aircraft, since full-scale tests are often required for bird strike. Therefore, validated numerical models are needed that can replace preliminary tests and accelerate the design cycle. In this study, to verify the simulation parameters for a bird strike analysis, several different numerical options are studied for an impact case against a primitive structure. A representative bird model is then generated with the verified parameters and collided against the leading edge of a training aircraft wing, in which each structural member was explicitly modeled. A nonlinear explicit dynamics finite element code, LS-DYNA, was used for the bird impact simulations, and SPH methodology was used to model the behavior of the bird. The dynamic behavior of the wing superstructure was observed and will be used for further design optimization.

Keywords: bird impact, bird strike, finite element modeling, smoothed particle hydrodynamics

Procedia PDF Downloads 311
19369 Improvement of Parallel Compressor Model in Dealing Outlet Unequal Pressure Distribution

Authors: Kewei Xu, Jens Friedrich, Kevin Dwinger, Wei Fan, Xijin Zhang

Abstract:

The Parallel Compressor Model (PCM) is a simplified approach to predicting compressor performance with inlet distortions. In the PCM calculation, the sub-compressors' outlet static pressure is assumed to be uniform, which simplifies the calculation procedure. However, if the compressor's outlet duct is not long and straight, this assumption frequently induces errors of 10% to 15%. This paper provides a revised PCM calculation method that corrects this error. The revised method employs the energy, momentum, and continuity equations to obtain the needed parameters, replacing the equal static pressure assumption. Based on the revised method, the PCM is applied to two compression systems with different blade types, and their performance under non-uniform inlet conditions is predicted with the revised calculation method to evaluate its efficiency. Validation against experimental data shows that, although small deviations occur, the calculated results agree well with the experimental data, with errors ranging from 0.1% to 3%. This demonstrates that the revised PCM calculation method has great advantages in predicting the performance of a distorted compressor with a limited exhaust duct.

Keywords: parallel compressor model (pcm), revised calculation method, inlet distortion, outlet unequal pressure distribution

Procedia PDF Downloads 318