Search results for: volume of fluid method
19732 A Method for Quantitative Assessment of the Dependencies between Input Signals and Output Indicators in Production Systems
Authors: Maciej Zaręba, Sławomir Lasota
Abstract:
Knowing the degree of dependency between sets of input signals and selected sets of indicators that measure a production system's effectiveness is of great importance in industry. This paper introduces the SELM method, which enables the selection of the sets of input signals that most strongly affect a selected subset of indicators measuring the effectiveness of a production system. For a defined set of output indicators, the method quantifies the impact of the input signals gathered by the continuous production monitoring system.
Keywords: manufacturing operation management, signal relationship, continuous monitoring, production systems
Procedia PDF Downloads 117
19731 Numerical Solution of Porous Media Equation Using Jacobi Operational Matrix
Authors: Shubham Jaiswal
Abstract:
During the modeling of transport phenomena in porous media, many nonlinear partial differential equations (NPDEs) are encountered that describe convection, diffusion and reaction processes. To solve such nonlinear problems, a reliable and efficient technique is needed. In this article, the numerical solution of NPDEs encountered in porous media is derived. The Jacobi collocation method is used to solve the considered problems; it converts the NPDEs into systems of nonlinear algebraic equations that can be solved by the Newton-Raphson method. The numerical results of some illustrative examples are reported to show the efficiency and high accuracy of the proposed approach. The comparison of the numerical results with the analytical results already reported in the literature, together with the error analysis for each example presented through graphs and tables, confirms the exponential convergence rate of the proposed method.
Keywords: nonlinear porous media equation, shifted Jacobi polynomials, operational matrix, spectral collocation method
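The abstract does not reproduce the discretised equations, so the following is only a minimal sketch of the general collocation-plus-Newton workflow it describes, applied to an assumed toy problem u'' = u^2 with Chebyshev (rather than shifted Jacobi) collocation; the test equation, node choice and function names are illustrative assumptions, not the paper's formulation.

```python
# Minimal sketch (assumed toy problem): solve u''(x) = u(x)^2 on [-1, 1],
# u(-1) = 1, u(1) = 0, by polynomial (Chebyshev) collocation.  The resulting
# nonlinear algebraic system in the expansion coefficients is solved with a
# Newton-type routine, mirroring the collocation -> Newton-Raphson workflow
# described in the abstract (the paper itself uses shifted Jacobi polynomials).
import numpy as np
from numpy.polynomial import chebyshev as C
from scipy.optimize import fsolve

N = 16                                              # number of expansion coefficients
x = np.cos(np.pi * np.arange(1, N - 1) / (N - 1))   # interior collocation nodes

def residual(c):
    u   = C.chebval(x, c)                 # u at collocation nodes
    upp = C.chebval(x, C.chebder(c, 2))   # u'' at collocation nodes
    r_pde = upp - u**2                    # PDE residual at interior nodes
    r_bc  = [C.chebval(-1.0, c) - 1.0,    # boundary conditions
             C.chebval(+1.0, c) - 0.0]
    return np.concatenate([r_pde, r_bc])

c0 = np.zeros(N)
c0[0], c0[1] = 0.5, -0.5                  # initial guess: the straight line (1 - x)/2
coeffs = fsolve(residual, c0)             # Newton-type solve of the algebraic system
print("u(0) ~", C.chebval(0.0, coeffs))
```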
Procedia PDF Downloads 437
19730 Spectrophotometric Determination of Photohydroxylated Products of Humic Acid in the Presence of Salicylate Probe
Authors: Julide Hizal Yucesoy, Batuhan Yardimci, Aysem Arda, Resat Apak
Abstract:
Humic substances produce reactive oxygen species such as hydroxyl, phenoxy and superoxide radicals when oxidized over a wide range of pH and reduction potential. Hydroxyl radicals, produced by reducing agents such as antioxidants and/or peroxides, attack the salicylate probe and form 2,3-dihydroxybenzoate, 2,4-dihydroxybenzoate and 2,5-dihydroxybenzoate species. These species are quantitatively determined using an HPLC method. Humic substances undergo photodegradation under UV radiation and, as a result of their antioxidant properties, produce hydroxyl radicals. In the presence of a salicylate probe, these hydroxyl radicals react with salicylate molecules to form hydroxylated products (dihydroxybenzoate isomers). In this study, humic acid was photodegraded in a photoreactor at 254 nm (400 W), and the hydroxyl radicals formed were captured by the salicylate probe. The total concentration of hydroxylated salicylate species was measured using the spectrophotometric CUPRAC method. In addition, using the results of time-dependent experiments, the kinetics of photohydroxylation were determined at different pH values. This method has been applied for the first time to measure the concentration of hydroxylated products, and it allows the results to be obtained more easily than with the HPLC method.
Keywords: CUPRAC method, humic acid, photohydroxylation, salicylate probe
Procedia PDF Downloads 206
19729 Suitable Die Shaping for a Rectangular Shape Bottle by Application of FEM and AI Technique
Authors: N. Ploysook, R. Rugsaj, C. Suvanjumrat
Abstract:
A characteristic requirement for producing rectangular bottles is a uniform wall thickness of the plastic bottle. Die shaping is a good technique for controlling the wall thickness of bottles. An advanced technique, the finite element method (FEM), was used to simulate blowing the parison into a rectangular bottle in order to reduce the plastic waste produced by trial-and-error die shaping and parison control. An artificial intelligence (AI) approach, comprising an artificial neural network and a genetic algorithm, was selected to optimize the die gap shape from the FEM results. The application of the AI technique could optimize a suitable die gap shape for parison blow molding that does not depend on a parison control method to produce rectangular bottles with uniform walls. In particular, this application can be used with inexpensive blow molding machines without a parison controller and will therefore reduce the production cost of the bottle blow molding process.
Keywords: AI, bottle, die shaping, FEM
Procedia PDF Downloads 236
19728 A Theoretical Approach of Tesla Pump
Authors: Cristian Sirbu-Dragomir, Stefan-Mihai Sofian, Adrian Predescu
Abstract:
This paper aims to study Tesla pumps for circulating biofluids; the goal is to make a small pump for biofluid circulation. This type of pump is studied because it has the following characteristics: it has no blades, which results in very small friction; reduced friction forces; low production cost; increased adaptability to different types of fluids; low cavitation (towards 0); low shocks due to the lack of blades; rare maintenance due to low cavitation; very small turbulence in the fluid; a low number of changes in the direction of the fluid (compared to bladed rotors); increased efficiency at low powers; fast acceleration; the need for a low torque; and no shocks on blades at sudden starts and stops. All these elements are necessary to be able to make a small pump that could be inserted into the thoracic cavity. The pump will be designed to combat myocardial infarction. Because the pump must be inserted in the thoracic cavity, elements such as low friction forces, shocks as low as possible, low cavitation and as little maintenance as possible are very important. The operation should be performed once, without having to change the rotor after a certain time. Given the very small size of the pump, the blades of a classic rotor would be very thin, and sudden starts and stops could cause considerable damage or require a very expensive material. At the same time, being used in a medical procedure, a low cost is important so that the pump is easily accessible to the population. The lack of turbulence or vortices caused by a classic rotor is again a key element, because when it comes to blood circulation the flow must be laminar and not turbulent; turbulent flow can even cause a heart attack. Due to these aspects, Tesla's model could be ideal for this work. Usually, the pump is considered to reach an efficiency of 40% when used at very high powers; however, the author of this type of pump claimed that the maximum efficiency the pump can achieve is 98%. The key element that could help to achieve this efficiency, or one as close to it as possible, is the fact that the pump will be used for low volumes and pressures. The key parameters for obtaining the best efficiency with this model are the number of discs placed in parallel and the distance between them. The distance between them must be small, which also helps to obtain a pump as small as possible. The principle of operation of such a rotor is to stack several parallel discs with central openings; the space between the discs creates a suction effect, pulling the liquid through the holes in the rotor and throwing it outwards. A very important element is also the viscosity of the liquid, since it dictates the distance between the discs needed to transmit power with minimal losses.
Keywords: lubrication, temperature, tesla-pump, viscosity
Procedia PDF Downloads 178
19727 Closed Form Solution for 4-D Potential Integrals for Arbitrary Coplanar Polygonal Surfaces
Authors: Damir Latypov
Abstract:
A closed-form solution for 4-D double surface integrals arising in boundary integral equations of a potential theory is obtained for arbitrary coplanar polygonal surfaces. The solution method is based on the construction of exact differential forms followed by the application of Stokes' theorem for each surface integral. As a result, the 4-D double surface integral is reduced to a 2-D double line integral. By an appropriate change of variables, the integrand is transformed into a separable function of the integration variables. The closed-form solutions to the corresponding 1-D integrals are readily available in the integration tables. Previously, closed-form solutions were known only for the case of coincident triangle surfaces and coplanar rectangles. Solutions for these cases were obtained by surface-specific ad-hoc methods, while the present method is general. The method also works for non-polygonal surfaces. As an example, we compute in closed form the 4-D integral for the case of coincident surfaces in the shape of a circular disk. For an arbitrarily shaped surface, the proposed method provides an efficient quadrature rule. Extensions of the method for non-coplanar surfaces and other than 1/R integral kernels are also discussed.
Keywords: boundary integral equations, differential forms, integration, stokes' theorem
Procedia PDF Downloads 309
19726 A Strategy of Direct Power Control for PWM Rectifier Reducing Ripple in Instantaneous Power
Authors: T. Mohammed Chikouche, K. Hartani
Abstract:
In order to solve the instantaneous power ripple and achieve better performance of direct power control (DPC) for a three-phase PWM rectifier, a control method is proposed in this paper. This control method is applied to overcome the instantaneous power ripple, to eliminate line current harmonics and thereby reduce the total harmonic distortion, and to improve the power factor. A switching table, based on the analysis of the change in instantaneous active and reactive power, is used to select the optimum switching state of the three-phase PWM rectifier. The simulation results show the feasibility of this control method.
Keywords: power quality, direct power control, power ripple, switching table, unity power factor
Procedia PDF Downloads 319
19725 The Cut-Off Value of TG/HDL Ratio of High Pericardial Adipose Tissue
Authors: Nam-Seok Joo, Da-Eun Jung, Beom-Hee Choi
Abstract:
Background and Objectives: Recently, the triglyceride/high-density lipoprotein cholesterol (TG/HDL) ratio and pericardial adipose tissue (PAT) have gained attention as indicators related to metabolic syndrome (MS). To date, there has been no research on the relationship between TG/HDL and PAT; therefore, we aimed to investigate the association between the TG/HDL ratio and PAT. Methods: In this cross-sectional study, we investigated 627 patients who underwent coronary multidetector computed tomography and assessment of metabolic parameters. We divided subjects into two groups according to the cut-off PAT volume associated with MS, which is 142.2 cm³, and compared metabolic parameters between those groups. We divided the TG/HDL ratio into tertiles according to Log(TG/HDL) and compared PAT-related parameters by analysis of variance. Finally, we applied logistic regression analysis to obtain the odds ratio of high PAT (PAT volume ≥ 142.2 cm³) in each tertile, and we performed receiver operating characteristic (ROC) analysis to obtain the cut-off of the TG/HDL ratio for high PAT. Results: The mean TG/HDL ratio of the high PAT volume group was 3.6, and the TG/HDL ratio had a strong positive correlation with various metabolic parameters. In addition, across the Log(TG/HDL) tertile groups, the higher tertiles had more metabolic derangements, including PAT, and showed higher odds ratios of having high PAT (OR = 4.10 in the second tertile group and OR = 5.06 in the third tertile group, respectively) after adjustment for age, sex and smoking. The cut-off of the TG/HDL ratio for increased PAT obtained from the ROC curve was 1.918 (p < 0.001). Conclusion: The TG/HDL ratio and high PAT volume have a significant positive correlation, and a higher TG/HDL ratio was associated with high PAT. The cut-off value of the TG/HDL ratio for high PAT was 1.918.
Keywords: triglyceride, high-density lipoprotein, pericardial adipose tissue, cut-off value
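The abstract reports an ROC-derived cut-off but not how it was selected; a common choice is the Youden index (sensitivity + specificity - 1). The sketch below, with synthetic data and assumed variable names, shows that generic procedure only; it is not the authors' actual analysis or cohort.

```python
# Minimal sketch of deriving a cut-off from an ROC curve with the Youden
# index (assumed criterion; synthetic data, not the study's real cohort).
import numpy as np
from sklearn.metrics import roc_curve, roc_auc_score

rng = np.random.default_rng(0)
high_pat = rng.integers(0, 2, 627)                        # 1 = PAT volume >= 142.2 cm^3
tg_hdl   = rng.lognormal(0.8, 0.5, 627) + 0.8 * high_pat  # fake TG/HDL values

fpr, tpr, thresholds = roc_curve(high_pat, tg_hdl)
youden = tpr - fpr                                         # sensitivity + specificity - 1
best = thresholds[np.argmax(youden)]
print("AUC:", roc_auc_score(high_pat, tg_hdl), "cut-off:", best)
```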
Procedia PDF Downloads 13
19724 Modification Encryption Time and Permutation in Advanced Encryption Standard Algorithm
Authors: Dalal N. Hammod, Ekhlas K. Gbashi
Abstract:
Today, cryptography is used in many applications to achieve high security in data transmission and in real-time communications. AES has long gained global acceptance and is used for securing sensitive data in various industries, but it suffers from slow processing and takes a long time to transfer data. This paper suggests a method to enhance the Advanced Encryption Standard (AES) algorithm based on time and permutation. The suggested method (MAES) is based on modifying the SubBytes and ShiftRows steps in the encryption part and modifying the InvSubBytes and InvShiftRows steps in the decryption part. After implementing the proposal and testing the results, the modified AES achieved good results in accomplishing the communication with high performance criteria in terms of randomness, encryption time, storage space, and avalanche effect. The proposed method has good ciphertext randomness because it passed the NIST statistical tests against attacks; also, MAES reduced the encryption time by 10% compared with the original AES, so the modified AES is faster than the original AES. The proposed method also showed good results in memory utilization, where the value is 54.36 for MAES versus 66.23 for the original AES. The avalanche effect used for measuring the diffusion property is 52.08% for the modified AES and 51.82% for the original AES.
Keywords: modified AES, randomness test, encryption time, avalanche effects
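The avalanche figures quoted above are the fraction of ciphertext bits that flip when a single plaintext bit is changed; the abstract does not show the calculation, so the sketch below illustrates that generic measurement using standard AES-128 from the third-party pycryptodome package (an assumption; it is not the authors' modified cipher).

```python
# Minimal sketch: avalanche effect = fraction of ciphertext bits that change
# when one plaintext bit is flipped.  Uses standard AES-128 (ECB) from
# pycryptodome as a stand-in; the paper's MAES modifies SubBytes/ShiftRows.
import os
from Crypto.Cipher import AES

def avalanche(key: bytes, block: bytes, bit: int = 0) -> float:
    cipher = AES.new(key, AES.MODE_ECB)
    flipped = bytearray(block)
    flipped[bit // 8] ^= 1 << (bit % 8)          # flip one plaintext bit
    c1 = cipher.encrypt(block)
    c2 = cipher.encrypt(bytes(flipped))
    diff = sum(bin(a ^ b).count("1") for a, b in zip(c1, c2))
    return diff / (len(c1) * 8)

key, block = os.urandom(16), os.urandom(16)
print(f"avalanche effect: {avalanche(key, block):.2%}")   # roughly 50% expected
```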
Procedia PDF Downloads 245
19723 Influence of Magnetic Field on Microstructure and Properties of Copper-Silver Composites
Authors: Engang Wang
Abstract:
Cu-alloy composites are a kind of high-strength, high-conductivity Cu-based alloy with excellent mechanical and electrical properties, widely used in the electronic, electrical and machinery industries. However, solidification microstructure features of the composites, such as the primary or secondary dendrite arm spacing, play an important role in their tensile strength and conductivity, and these are affected by the fabrication method. In this paper, two kinds of directional solidification methods, the exothermic powder method (EP method) and the liquid metal cooling method (LMC method), were used to fabricate Cu-alloy composites under different applied magnetic fields to investigate their influence on the solidification microstructure of the Cu alloy; the fabricated Cu-alloy composites were then drawn into wires to investigate the influence of the fabrication method and magnetic fields on the drawn microstructure of fiber-reinforced Cu-alloy composites and their properties. The experiments on Cu-Ag alloy under directional solidification and horizontal magnetic fields with different processing parameters show that: 1) for the Cu-Ag alloy produced by the EP method, the dendrites develop directionally in the cooled copper mould and the solidification microstructure is effectively refined by applying horizontal magnetic fields; 2) for the Cu-Ag alloy produced by the LMC method, the primary dendrite arm spacing decreases and the Ag content in the dendrites increases as the drawing (withdrawal) velocity of solidification increases; 3) the dendrites are refined and the Ag content in the dendrites increases with increasing magnetic flux intensity; meanwhile, the growth direction of the dendrites is also affected by the magnetic field. The results for the Cu-Ag in-situ composites after the drawing deformation process show that the micro-hardness of the alloy is higher for smaller dendrite arm spacing. When the dendrite growth orientation is consistent with the axis of the samples, the conductivity of the composites increases as the secondary dendrite arm spacing increases. However, the conductivity is reduced under applied magnetic fields owing to disruption of the dendrite growth orientation.
Keywords: Cu-Ag composite, magnetic field, microstructure, solidification
Procedia PDF Downloads 212
19722 A Study of Achievement and Attitude on Learning Science in English by Using Co – Teaching Method
Authors: Sakchai Rachniyom
Abstract:
Since the ASEAN community will formally come into being in a few months, Thais should realize the importance of the English language, as it is regarded as a working language of the community. To promote science student teachers' English proficiency, teachers should be able to teach in English appropriately and effectively. The purposes of this quasi-experimental research were (1) to measure learning achievement, (2) to evaluate students' satisfaction with the teaching and learning, and (3) to study the consequences of the co-teaching method in order to understand the learning achievement and improvement. The participants were 40 general science student teachers. Two types of research instruments were used: (1) an achievement test and (2) a questionnaire. The research was conducted for one semester. The statistics used in this research were the arithmetic mean and standard deviation. The findings of the study revealed that the students' achievement scores increased significantly at the .05 statistical level, and the students' satisfaction with the teaching and learning was at the highest level. Students' involvement and teachers' support were promoted. It was also reported that students' learning was improved by the co-teaching method.
Keywords: co-teaching method, learning science in English, teacher, education
Procedia PDF Downloads 477
19721 Long Term Examination of the Profitability Estimation Focused on Benefits
Authors: Stephan Printz, Kristina Lahl, René Vossen, Sabina Jeschke
Abstract:
Strategic investment decisions are characterized by high innovation potential and long-term effects on the competitiveness of enterprises. Due to the uncertainty and risks involved in this complex decision making process, the need arises for well-structured support activities. A method that considers cost and the long-term added value is the cost-benefit effectiveness estimation. One of those methods is the "profitability estimation focused on benefits" (PEFB) method developed at the Institute of Management Cybernetics at RWTH Aachen University. The method copes with the challenges associated with strategic investment decisions by integrating long-term non-monetary aspects whilst also mapping the chronological sequence of an investment within the organization's target system. Thus, this method is characterized as a holistic approach for the evaluation of the costs and benefits of an investment. This participation-oriented method was applied to business environments in many workshops. The results of the workshops are a library of more than 96 cost aspects, as well as 122 benefit aspects. These aspects are preprocessed and comparatively analyzed with regard to their alignment to a series of risk levels. For the first time, an accumulation and a distribution of cost and benefit aspects regarding their impact and probability of occurrence are given. The results give evidence that the PEFB method combines precise measures of financial accounting with the incorporation of benefits. Finally, the results constitute the basis for using information technology and data science for decision support when applying the PEFB method.
Keywords: cost-benefit analysis, multi-criteria decision, profitability estimation focused on benefits, risk and uncertainty analysis
Procedia PDF Downloads 443
19720 Acoustics Barrier Design to Reduce Railway Noise by Using Maekawa's Method
Authors: Malinda Sabrina, Khoerul Anwar
Abstract:
Railway noise generated by passing trains has been described as a form of environmental pollution, especially for residential areas near the railway. Many studies have shown that environmental noise, particularly transportation noise, has negative effects on people, resulting in annoyance and specific health problems such as cardiovascular disease, cognitive impairment and sleep disturbance. Therefore, various attempts are made to reduce the noise. One method of reducing such noise to acceptable levels is to build acoustic barrier walls. The objective of this study was to review this method of reducing railway noise and to obtain a preliminary design of an acoustic barrier at the edge of railway tracks close to a residential area. The design of the barrier uses Maekawa's method. Measurements were performed in residential areas around the railway in Karawang, Indonesia, in the absence of an acoustic barrier. From the observations, it was found that five trains passed on the railway within thirty minutes. Given the limited distance between the railway tracks and the residential area and street, the required reduction in sound pressure level was found to be 25 dBA. The maximum sound pressure level obtained was 86.9 dBA; by setting the barrier 4 m high at a distance of 2.5 m from the railway, the noise level received by residents in the settlement around the railway line becomes 61.9 dBA.
Keywords: acoustics barrier, Maekawa's method, noise attenuation, railway noise
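Maekawa's chart is commonly approximated by the barrier attenuation formula delta_L = 10*log10(3 + 20*N), where N = 2*delta/lambda is the Fresnel number and delta the path-length difference over the barrier edge. The abstract does not state which form the authors used, so the sketch below is only an illustrative calculation of that approximation under assumed source/receiver geometry and frequencies, not the paper's actual design computation.

```python
# Minimal sketch of Maekawa-type barrier attenuation (assumed approximation
# delta_L = 10*log10(3 + 20*N), N = 2*delta/lambda); geometry and frequencies
# are illustrative assumptions, not the paper's measured configuration.
import math

def maekawa_attenuation(src, rcv, barrier_top, freq_hz, c=343.0):
    """Insertion loss (dB) of a thin barrier for one source/receiver pair."""
    dist = lambda a, b: math.hypot(a[0] - b[0], a[1] - b[1])
    delta = dist(src, barrier_top) + dist(barrier_top, rcv) - dist(src, rcv)
    n = 2.0 * delta * freq_hz / c                    # Fresnel number
    return 10.0 * math.log10(3.0 + 20.0 * n) if n > 0 else 0.0

# Source 2.5 m in front of a 4 m high barrier, receiver 10 m behind it
# (coordinates are (horizontal distance, height) in metres, assumed values).
src, top, rcv = (0.0, 0.5), (2.5, 4.0), (12.5, 1.5)
for f in (125, 250, 500, 1000, 2000):
    print(f, "Hz ->", round(maekawa_attenuation(src, rcv, top, f), 1), "dB")
```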
Procedia PDF Downloads 198
19719 Maximum Power Point Tracking Based on Estimated Power for PV Energy Conversion System
Authors: Zainab Almukhtar, Adel Merabet
Abstract:
In this paper, a method for maximum power point tracking (MPPT) of a photovoltaic energy conversion system is presented. The method is based on using the difference between the power from the solar panel and an estimated power value to control the DC-DC converter of the photovoltaic system. The difference is continuously compared with a preset permitted error value. If the power difference is greater than the error, the estimated power is multiplied by a factor and the operation is repeated until the difference is less than or equal to the threshold error. The difference in power is used to trigger a DC-DC boost converter in order to raise the voltage to the point where the maximum power is achieved. The proposed method was experimentally verified on a PV energy conversion system driven by the OPAL-RT real-time controller. The method was tested under varying radiation conditions and load requirements, and the photovoltaic panel operated at its maximum power under different irradiation conditions.
Keywords: control system, error, solar panel, MPPT tracking
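The abstract describes the tracking loop only verbally; the sketch below is a schematic rendering of that loop (measure panel power, compare it with an estimated reference, scale the estimate and adjust the converter duty cycle until the error falls below the threshold). All function names, the scaling factor and the duty-cycle update rule are assumptions, not the authors' implementation.

```python
# Schematic sketch of the estimated-power MPPT loop described in the abstract.
# read_panel_power() and set_duty() stand for hardware interfaces (assumed);
# the scaling factor, duty-step and iteration cap are illustrative values.
def mppt_step(read_panel_power, set_duty, duty, p_est,
              err_max=1.0, scale=1.05, duty_step=0.01, max_iter=100):
    for _ in range(max_iter):
        p_meas = read_panel_power()
        if abs(p_meas - p_est) <= err_max:    # within threshold: stop adjusting
            break
        p_est *= scale                        # refine the power estimate
        # nudge the boost-converter duty cycle toward the maximum power point
        duty += duty_step if p_meas > p_est else -duty_step
        set_duty(duty)
    return duty, p_est
```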
Procedia PDF Downloads 281
19718 Determining the Mode II Intra Ply Energy Release Rate of Composites Made of Prepreg
Authors: Philip Rose, Markus Linke, David Busquets
Abstract:
The distinction between interlaminar and intralaminar fracture toughness has already been investigated by several authors. For loading mode I, double cantilever beam specimens are typically used for the interlaminar fracture toughness and compact tension specimens for the intralaminar fracture toughness. In order to minimize the influence of the different specimen geometries, a method was developed that allows the determination of both the interlaminar and the intralaminar fracture toughness on an almost identical specimen geometry. However, as this method is not applicable to prepreg semi-finished products, a further modification was developed that is also suitable for prepreg laminates. After the successful application of this method to loading mode I, its application to loading mode II is presented in this paper. The manufacturing differences, arising from an additional fiber ply in which the controlled crack growth takes place, and the adapted test procedure are also explained. By comparing the test results of standardized end-notched flexure (ENF) specimens with those of the modified ENF specimen, the difference between the interlaminar and intralaminar fracture toughness of the material Hexply 8552/IM7 is shown.
Keywords: ENF, fracture toughness, interlaminar, mode II
Procedia PDF Downloads 134
19717 A Hybrid Derivative-Free Optimization Method for Pass Schedule Calculation in Cold Rolling Mill
Authors: Mohammadhadi Mirmohammadi, Reza Safian, Hossein Haddad
Abstract:
This paper presents an innovative solution to a complex multi-objective optimization problem that is part of efforts toward maximizing rolling mill throughput and minimizing processing costs in tandem cold rolling. This computational-intelligence-based optimization has been applied to the rolling schedules of a tandem cold rolling mill. The method combines two derivative-free optimization procedures in the form of nested loops. The first optimization loop is based on the Improving Hit-and-Run method, which focuses on the balance of power, force and reduction distribution in the rolling schedules. The second loop is a real-coded genetic algorithm based optimization procedure that optimizes energy consumption and productivity. Experimental results of the application to a five-stand tandem cold rolling mill are presented.
Keywords: derivative-free optimization, Improving Hit and Run method, real-coded genetic algorithm, rolling schedules of tandem cold rolling mill
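The abstract names the two nested derivative-free procedures but gives no algorithmic detail. The skeleton below only illustrates the nesting idea (an outer population-based search over schedule-level variables, with an inner Improving Hit-and-Run-style search over the reduction/load balance for each candidate schedule), using scipy's differential evolution as a stand-in for the real-coded GA. The objective functions, bounds and variable meanings are placeholders, not the paper's mill model.

```python
# Skeleton of a nested derivative-free optimization: an outer real-coded
# GA-style search (scipy's differential_evolution as a stand-in) over the
# schedule variables, and an inner Improving Hit-and-Run-style search over
# the reduction distribution.  Objectives and bounds are placeholders.
import numpy as np
from scipy.optimize import differential_evolution

rng = np.random.default_rng(1)

def inner_ihr(cost, x0, lo, hi, iters=200):
    """Simplified Improving Hit-and-Run: pick a random direction and a random
    feasible step, keep the move only if it improves the cost."""
    x, fx = x0.copy(), cost(x0)
    for _ in range(iters):
        d = rng.normal(size=x.size)
        d /= np.linalg.norm(d)
        cand = np.clip(x + rng.uniform(0.0, 0.1) * d, lo, hi)
        fc = cost(cand)
        if fc < fx:
            x, fx = cand, fc
    return fx

def outer_objective(schedule):
    # Placeholder: an "energy/productivity" term plus the best force/reduction
    # balance found by the inner search for this candidate schedule.
    energy_term = np.sum(schedule ** 2)
    balance = lambda r: np.var(r) + 0.01 * np.sum((r - schedule) ** 2)
    return energy_term + inner_ihr(balance, schedule.copy(), 0.0, 1.0)

bounds = [(0.0, 1.0)] * 5                      # five stands (placeholder)
result = differential_evolution(outer_objective, bounds, maxiter=30, seed=1)
print(result.x, result.fun)
```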
Procedia PDF Downloads 695
19716 Software Verification of Systematic Resampling for Optimization of Particle Filters
Authors: Osiris Terry, Kenneth Hopkinson, Laura Humphrey
Abstract:
Systematic resampling is the most popularly used resampling method in particle filters. This paper seeks to further the understanding of systematic resampling by defining a formula made up of variables from the sampling equation and the particle weights. The formula is then verified via SPARK, a software verification language. The verified systematic resampling formula states that the minimum/maximum number of possible samples taken of a particle is equal to the floor/ceiling value of the particle weight divided by the sampling interval, respectively. This allows for the creation of a randomness spectrum within which each resampling method falls. Methods on the lower end, e.g., systematic resampling, have less randomness and are thus quicker to reach an estimate. Although lower randomness introduces error through a larger bias towards the size of the weight, this bias also creates vulnerabilities to noise in the environment, e.g., jamming. In conclusion, this is the first step in characterizing each resampling method. This will allow target-tracking engineers to pick the best resampling method for their environment instead of choosing the most popularly used one.
Keywords: SPARK, software verification, resampling, systematic resampling, particle filter, tracking
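The verified property (each particle with normalized weight w is drawn at least floor(w/Delta) and at most ceil(w/Delta) times, where Delta = 1/N is the sampling interval) can be illustrated with a few lines of ordinary code. The sketch below is a plain Python rendering of systematic resampling and an empirical check of that bound; it is not the SPARK-verified implementation from the paper.

```python
# Minimal sketch of systematic resampling plus a check of the verified bound:
# a particle with normalized weight w is sampled between floor(w/Delta) and
# ceil(w/Delta) times, Delta = 1/N.  (Illustration only, not the SPARK code.)
import math
import random

def systematic_resample(weights):
    n = len(weights)
    delta = 1.0 / n                         # sampling interval
    start = random.uniform(0.0, delta)      # single random offset
    positions = [start + i * delta for i in range(n)]
    counts, cum, j = [0] * n, weights[0], 0
    for p in positions:
        while p > cum:                      # advance to the particle owning p
            j += 1
            cum += weights[j]
        counts[j] += 1
    return counts

w = [0.05, 0.35, 0.1, 0.4, 0.1]
counts = systematic_resample(w)
for wi, ci in zip(w, counts):
    lo, hi = math.floor(wi * len(w)), math.ceil(wi * len(w))
    assert lo <= ci <= hi                   # the property verified in SPARK
print(counts)
```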
Procedia PDF Downloads 82
19715 Melnikov Analysis for the Chaos of the Nonlocal Nanobeam Resting on Fractional-Order Softening Nonlinear Viscoelastic Foundations
Authors: Guy Joseph Eyebe, Gambo Betchewe, Alidou Mohamadou, Timoleon Crepin Kofane
Abstract:
In the present study, the dynamics of a nanobeam resting on fractional-order softening nonlinear viscoelastic Pasternak foundations is studied. The Hamilton principle is used to derive the nonlinear equation of motion. An approximate analytical solution is obtained by applying the standard averaging method. The Melnikov method is used to investigate the chaotic behavior of the device, and the critical curve separating the chaotic and non-chaotic regions is found. It is shown that the appearance of chaos in the system depends strongly on the fractional-order parameter.
Keywords: chaos, fractional-order, Melnikov method, nanobeam
Procedia PDF Downloads 159
19714 Convective Brinkman-Forchheimer Extended Flow through Channel Filled with Porous Material: An Approximate Analytical Approach
Authors: Basant K. Jha, M. L. Kaurangini
Abstract:
An approximate analytical solution is presented for convective flow in a horizontal channel filled with porous material. The Brinkman-Forchheimer extension of the Darcy equation is used to model the fluid flow, while the energy equation is used to model the temperature distribution in the channel. The solutions were obtained using the newly suggested technique and compared with those obtained from an implicit finite-difference solution.
Keywords: approximate analytical, convective flow, porous material, Brinkman-Forchheimer
Procedia PDF Downloads 394
19713 Ill-Posed Inverse Problems in Molecular Imaging
Authors: Ranadhir Roy
Abstract:
Inverse problems arise in medical (molecular) imaging. These problems are characterized by their large size in three dimensions and by the diffusion equation, which models the physical phenomena within the media. The inverse problems are posed as a nonlinear optimization in which the unknown parameters are found by minimizing the difference between the predicted data and the measured data. To obtain a unique and stable solution to an ill-posed inverse problem, a priori information must be used. Mathematical conditions for obtaining stable solutions are established in Tikhonov's regularization method, where the a priori information is introduced via a stabilizing functional, which may be designed to incorporate some relevant information about the inverse problem. Effective determination of the Tikhonov regularization parameter requires knowledge of the true solution or, in the case of optical imaging, the true image. Yet, in clinically based imaging, the true image is not known. To alleviate these difficulties, we have applied the penalty/modified barrier function (PMBF) method instead of the Tikhonov regularization technique to make the inverse problems well-posed. Unlike the Tikhonov regularization method, the constrained optimization technique, which is based on simple bounds on the optical properties of the tissue, can easily be implemented in the PMBF method. Imposing the constraints on the optical properties of the tissue explicitly restricts the solution sets and can restore uniqueness. Like the Tikhonov regularization method, the PMBF method limits the size of the condition number of the Hessian matrix of the given objective function. The accuracy and rapid convergence of the PMBF method require a good initial guess of the Lagrange multipliers. To obtain the initial guess of the multipliers, we use a least-squares unconstrained minimization problem. Three-dimensional images of fluorescence absorption coefficients and lifetimes were reconstructed from contact and non-contact experimentally measured data.
Keywords: constrained minimization, ill-conditioned inverse problems, Tikhonov regularization method, penalty modified barrier function method
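For readers unfamiliar with the two approaches being contrasted, the sketch below shows them on a generic small linear test problem: Tikhonov regularization (minimize ||Ax - b||^2 + lambda*||x||^2) versus simple bound-constrained least squares, the kind of box constraint on the optical properties that the PMBF approach relies on. The toy problem, parameter bounds and use of scipy are assumptions; this is not the paper's 3-D fluorescence reconstruction.

```python
# Toy contrast between Tikhonov regularization and simple bound-constrained
# least squares on an ill-conditioned linear problem (illustration only).
import numpy as np
from scipy.optimize import lsq_linear

rng = np.random.default_rng(0)
A = np.vander(np.linspace(0, 1, 20), 8)          # ill-conditioned matrix
x_true = rng.uniform(0.0, 1.0, 8)                # "optical parameters" in [0, 1]
b = A @ x_true + 1e-3 * rng.normal(size=20)      # noisy measurements

# Tikhonov: augment the system so the normal equations become (A^T A + lam*I).
lam = 1e-3
A_aug = np.vstack([A, np.sqrt(lam) * np.eye(8)])
b_aug = np.concatenate([b, np.zeros(8)])
x_tik = np.linalg.lstsq(A_aug, b_aug, rcond=None)[0]

# Bound-constrained least squares: impose the physical box 0 <= x <= 1.
x_box = lsq_linear(A, b, bounds=(0.0, 1.0)).x

print("Tikhonov error       :", np.linalg.norm(x_tik - x_true))
print("Box-constrained error:", np.linalg.norm(x_box - x_true))
```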
Procedia PDF Downloads 269
19712 Numerical Implementation and Testing of Fractioning Estimator Method for the Box-Counting Dimension of Fractal Objects
Authors: Abraham Terán Salcedo, Didier Samayoa Ochoa
Abstract:
This work presents a numerical implementation of a method for estimating the box-counting dimension of self-avoiding curves on a planar space, fractal objects captured on digital images; this method is named the fractioning estimator. Classical methods of digital image processing, such as noise filtering, contrast manipulation, and thresholding, among others, are used in order to obtain binary images that are suitable for performing the necessary computations of the fractioning estimator. A user interface is developed for performing the image processing operations and testing the fractioning estimator on different captured images of real-life fractal objects. To analyze the results, the estimations obtained through the fractioning estimator are compared to the results obtained through other methods that are already implemented in different available software for computing and estimating the box-counting dimension.
Keywords: box-counting, digital image processing, fractal dimension, numerical method
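The abstract compares the fractioning estimator against standard box-counting implementations but does not reproduce either; the sketch below is the conventional box-counting estimate of the fractal dimension of a binary image (the baseline the authors compare against), not the fractioning estimator itself. The test image and the choice of box sizes are assumptions.

```python
# Conventional box-counting estimate of the fractal dimension of a binary
# image (the baseline method, not the paper's fractioning estimator).
import numpy as np

def box_counting_dimension(binary_img, sizes=(2, 4, 8, 16, 32, 64)):
    counts = []
    h, w = binary_img.shape
    for s in sizes:
        # count boxes of side s that contain at least one foreground pixel
        n = 0
        for i in range(0, h, s):
            for j in range(0, w, s):
                if binary_img[i:i + s, j:j + s].any():
                    n += 1
        counts.append(n)
    # slope of log(count) versus log(1/size) gives the dimension estimate
    slope, _ = np.polyfit(np.log(1.0 / np.array(sizes)), np.log(counts), 1)
    return slope

# Example: a diagonal line as a stand-in for a thresholded photograph.
img = np.zeros((256, 256), dtype=bool)
idx = np.arange(256)
img[idx, idx] = True                        # a straight line, dimension ~1
print(round(box_counting_dimension(img), 2))
```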
Procedia PDF Downloads 81
19711 Changes in Pulmonary Functions in Diabetes Mellitus Type 2
Authors: N. Anand, P. S. Nayyer, V. Rana, S. Verma
Abstract:
Background: Diabetes mellitus is a group of disorders characterized by hyperglycemia and associated with microvascular and macrovascular complications. Among the lesser known complications is the involvement of the respiratory system. Changes in pulmonary volume, diffusion and the elastic properties of the lungs, as well as in the performance of the respiratory muscles, lead to a restrictive pattern in lung function. The present study aimed to determine the changes in various pulmonary function test parameters among patients with type 2 diabetes mellitus and also to study the effect of the duration of diabetes mellitus on pulmonary function tests. Methods: This was a cross-sectional study performed at Dr Baba Saheb Ambedkar Hospital and Medical College, Delhi, a tertiary care referral centre, and included 200 subjects divided into two groups. The first group included diagnosed patients with diabetes and the second group included controls. Cases and controls symptomatic for any acute or chronic respiratory or cardiovascular illness, or with a history of smoking, were excluded. Both groups underwent spirometry to evaluate pulmonary function. Results: The mean forced vital capacity (FVC), forced expiratory volume in the first second (FEV1) and peak expiratory flow rate (PEFR) were found to be significantly decreased (p < 0.001) compared with controls, while the mean ratio of forced expiratory volume in the first second to forced vital capacity (FEV1/FVC) was not significantly decreased (p > 0.005). There was no correlation with the duration of the disease. Conclusion: FVC, FEV1 and PEFR were found to be significantly decreased in patients with diabetes mellitus, while the FEV1/FVC ratio was not significantly decreased. The duration of diabetes mellitus was not found to have any statistically significant effect on pulmonary function tests (p > 0.005).
Keywords: diabetes mellitus, pulmonary function tests, forced vital capacity, forced expiratory volume in first second
Procedia PDF Downloads 366
19710 OpenFOAM Based Simulation of High Reynolds Number Separated Flows Using Bridging Method of Turbulence
Authors: Sagar Saroha, Sawan S. Sinha, Sunil Lakshmipathy
Abstract:
The Reynolds-averaged Navier-Stokes (RANS) model is the popular computational tool for the prediction of turbulent flows. Being computationally less expensive than direct numerical simulation (DNS), RANS has received wide acceptance in industry as well as in the research community. However, for high Reynolds number flows, the traditional RANS approach based on the Boussinesq hypothesis is unable to capture all the essential flow characteristics, and thus its performance is restricted in high Reynolds number flows of practical interest. RANS performance turns out to be inadequate in regimes like flow over curved surfaces, flows with rapid changes in the mean strain rate, duct flows involving secondary streamlines, and three-dimensional separated flows. In the recent decade, the partially averaged Navier-Stokes (PANS) methodology has gained acceptability among seamless bridging methods of turbulence, placed between DNS and RANS. The PANS methodology, being a scale-resolving bridging method, is inherently more suitable than RANS for simulating turbulent flows. The superior ability of the PANS method has been demonstrated for some cases like swirling flows, high-speed mixing environments, and high Reynolds number turbulent flows. In our work, we intend to evaluate PANS for separated turbulent flows past bluff bodies, which are of broad aerodynamic research and industrial relevance. PANS equations, being derived from a base RANS model, continue to inherit the inadequacies of the parent RANS model based on the linear eddy-viscosity model (LEVM) closure. To enhance the capabilities of PANS for simulating separated flows, the shortcomings of the LEVM closure need to be addressed. The inabilities of LEVMs have inspired the development of non-linear eddy viscosity models (NLEVM). To explore the potential improvement in PANS performance, in our study we evaluate the behavior of PANS in conjunction with an NLEVM. Our work can be categorized into three significant steps: (i) extraction of the PANS version of the NLEVM from the RANS model, (ii) testing the model in a homogeneous turbulence environment, and (iii) application and evaluation of the model in the canonical case of separated non-homogeneous flow fields (flow past prismatic bodies and bodies of revolution at high Reynolds number). The PANS version of the NLEVM shall be derived and implemented in OpenFOAM, an open-source solver. The homogeneous flow evaluation will comprise the study of the influence of the PANS filter-width control parameter on the turbulent stresses, homogeneous analysis performed over typical velocity fields, and asymptotic analysis of the Reynolds stress tensor. The non-homogeneous flow case will include the study of mean integrated quantities and various instantaneous flow field features, including wake structures. The performance of PANS + NLEVM shall be compared against the LEVM-based PANS and the LEVM-based RANS. This assessment will contribute to a significant improvement of the predictive ability of computational fluid dynamics (CFD) tools in massively separated turbulent flows past bluff bodies.
Keywords: bridging methods of turbulence, high Re-CFD, non-linear PANS, separated turbulent flows
Procedia PDF Downloads 144
19709 Preparation of Li Ion Conductive Ceramics via Liquid Process
Authors: M. Kotobuki, M. Koishi
Abstract:
Li1.5Al0.5Ti1.5(PO4)3 (LATP) has received much attention as a solid electrolyte for lithium batteries. In this study, the LATP solid electrolyte is prepared by the co-precipitation method using Li3PO4 as the Li source. The LATP is successfully prepared, and the Li-ion conductivities of the bulk (inner crystal) and of the total (inner crystal and grain boundary) are 1.1 × 10⁻³ and 1.1 × 10⁻⁴ S cm⁻¹, respectively. These values are comparable to the reported values, for which Li2C2O4 was used as the Li source. It is concluded that the LATP solid electrolyte can be prepared by the co-precipitation method using Li3PO4 as the Li source, and this procedure has an advantage in mass production over the previous procedure using Li2C2O4, because Li3PO4 is a lower-priced reagent than Li2C2O4.
Keywords: co-precipitation method, lithium battery, NASICON-type electrolyte, solid electrolyte
Procedia PDF Downloads 350
19708 Non-Convex Multi Objective Economic Dispatch Using Ramp Rate Biogeography Based Optimization
Authors: Susanta Kumar Gachhayat, S. K. Dash
Abstract:
Multi-objective non-convex economic dispatch problems in a thermal power plant are of grave concern for deciding the cost of generation and for reducing the emission level in order to diminish global warming and the greenhouse effect. This paper deals with ramp rate constraints, used as additional inequality constraints, together with valve-point loading in the generation cost of a thermal power plant, solved through ramp rate biogeography based optimization involving mutation and migration. In 50 out of 100 trials, the cost and emission objective functions were found to outperform those of classical methods such as the lambda iteration method and quadratic programming, and of many heuristic methods such as particle swarm optimization, weight-improved particle swarm optimization, constriction-factor-based particle swarm optimization, and moderate random particle swarm optimization. Ramp rate biogeography based optimization proves quite advantageous in solving non-convex multi-objective economic dispatch problems subject to nonlinear loads that pollute the source, giving rise to third-harmonic distortions and other such disturbances.
Keywords: economic load dispatch, ELD, biogeography-based optimization, BBO, ramp rate biogeography-based optimization, RRBBO, valve-point loading, VPL
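The abstract mentions valve-point loading but not its usual mathematical form; a common formulation (an assumption here, since the paper does not quote its exact cost model) adds a rectified-sine term to each unit's quadratic fuel cost, F(P) = a + b*P + c*P^2 + |e*sin(f*(Pmin - P))|, with ramp rate limits tightening each unit's feasible output window around its previous dispatch. The sketch below evaluates such a cost and feasibility window with made-up coefficients.

```python
# Illustrative valve-point-loading cost and ramp-rate feasibility check for a
# single unit (coefficients are made up; not the paper's test system).
import math

def fuel_cost(p, a, b, c, e, f, p_min):
    """Quadratic fuel cost plus the rectified-sine valve-point term."""
    return a + b * p + c * p * p + abs(e * math.sin(f * (p_min - p)))

def ramp_limits(p_prev, p_min, p_max, up_rate, down_rate):
    """Feasible output window for the next dispatch interval."""
    return max(p_min, p_prev - down_rate), min(p_max, p_prev + up_rate)

lo, hi = ramp_limits(p_prev=300.0, p_min=100.0, p_max=500.0,
                     up_rate=50.0, down_rate=80.0)
p = min(max(380.0, lo), hi)                  # clamp a candidate output (MW)
print("feasible window:", (lo, hi))
print("cost:", fuel_cost(p, 500.0, 5.3, 0.004, 40.0, 0.08, 100.0))
```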
Procedia PDF Downloads 377
19707 An Application of Graph Theory to The Electrical Circuit Using Matrix Method
Authors: Samai'la Abdullahi
Abstract:
A graph is a pair of two sets, vertices and edges, so that a graph is a pictorial representation of a system using two basic elements: nodes and edges. A node is represented by a circle (either hollow or shaded), and an edge is represented by a line segment connecting two nodes. In this paper, we present a circuit network as an application of graph theory; circuit models are represented as graphs using the logical connection method, in which we formulate the adjacency and incidence matrices and apply truth tables.
Keywords: euler circuit and path, graph representation of circuit networks, representation of graph models, representation of circuit network using logical truth table
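The abstract refers to the adjacency and incidence matrices of a circuit graph without giving an example; the short sketch below builds both for a small assumed circuit graph (nodes standing for circuit junctions, edges for branches/elements). The example network is invented purely for illustration.

```python
# Adjacency and (oriented) incidence matrices of a small assumed circuit graph
# with 4 nodes and 5 branches; the network itself is only an illustration.
import numpy as np

nodes = [0, 1, 2, 3]
# each branch (edge) connects two junction nodes
branches = [(0, 1), (1, 2), (2, 3), (3, 0), (1, 3)]

adjacency = np.zeros((len(nodes), len(nodes)), dtype=int)
incidence = np.zeros((len(nodes), len(branches)), dtype=int)
for k, (i, j) in enumerate(branches):
    adjacency[i, j] = adjacency[j, i] = 1
    incidence[i, k] = 1          # branch k leaves node i
    incidence[j, k] = -1         # and enters node j

print("adjacency:\n", adjacency)
print("incidence:\n", incidence)
```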
Procedia PDF Downloads 560
19706 Numerical Study of 5kW Vertical Axis Wind Turbine Using DOE Method
Authors: Yan-Ting Lin, Wei-Nian Su
Abstract:
The purpose of this paper is to demonstrate the design of a 5 kW vertical axis wind turbine (VAWT) using the DOE method. The NACA0015 airfoil was implemented for the design and 3D simulation. The critical design parameters in this investigation are chord length, tip speed ratio (TSR), aspect ratio (AR) and pitch angle. The RNG k-ε turbulence model and the sliding mesh method are adopted in the CFD simulation. The results show that the model with zero pitch, 0.3 m chord length, TSR of 3, and AR of 10 demonstrated the optimum aerodynamic power under a uniform 10 m/s inlet velocity. The aerodynamic power is 3.61 kW and 3.89 kW at TSR of 3 and 4, respectively. The aerodynamic power decreased dramatically when the TSR increased to 5.
Keywords: vertical axis wind turbine, CFD, DOE, VAWT
Procedia PDF Downloads 439
19705 Numerical and Experimental Investigations of Cantilever Rectangular Plate Structure on Subsonic Flutter
Authors: Mevlüt Burak Dalmış, Kemal Yaman
Abstract:
In this study, the flutter characteristics of a cantilever rectangular plate structure under an incompressible flow regime are investigated by comparing the results of the commercial flutter analysis program ZAERO© with wind tunnel tests conducted in the Ankara Wind Tunnel (ART). A rectangular polycarbonate (PC) plate, 5 x 125 x 1000 mm in dimensions, is used for both the numerical and experimental investigations. The analysis and test results are very consistent with each other. A comparison between two different solution methods of ZAERO© (the g- and k-methods) is also made. It is seen that the k-method gives results closer to the test; however, the g-method results are on the conservative side, and it is preferable to use the conservative, g-method results. Even though the modal analysis results are sufficient for the flutter analysis of this simple structure, a modal test should be conducted to validate the modal analysis results in order to obtain accurate flutter analysis results for more complicated structures.
Keywords: flutter, plate, subsonic flow, wind tunnel
Procedia PDF Downloads 516
19704 A Method of Effective Planning and Control of Industrial Facility Energy Consumption
Authors: Aleksandra Aleksandrovna Filimonova, Lev Sergeevich Kazarinov, Tatyana Aleksandrovna Barbasova
Abstract:
A method of effective planning and control of industrial facility energy consumption is offered. The method makes it possible to optimally arrange the management and full control of complex production facilities in accordance with the criterion of minimal technical and economic losses under forecasting control. The method is based on the optimal construction of power efficiency characteristics with the prescribed accuracy. The problem of optimal design of the forecasting model is solved on the basis of three criteria: maximizing the weighted sum of the forecasting points with the prescribed accuracy; solving the problem by standard principles under incomplete statistical data on the basis of minimization of a regularized function; and minimizing the technical and economic losses due to forecasting errors.
Keywords: energy consumption, energy efficiency, energy management system, forecasting model, power efficiency characteristics
Procedia PDF Downloads 390
19703 The Development of Liquid Chromatography Tandem Mass Spectrometry Method for Citrinin Determination in Dry-Fermented Meat Products
Authors: Ana Vulic, Tina Lesic, Nina Kudumija, Maja Kis, Manuela Zadravec, Nada Vahcic, Tomaz Polak, Jelka Pleadin
Abstract:
Mycotoxins are toxic secondary metabolites produced by numerous types of molds. They can contaminate both food and feed, so they represent a serious public health concern. The production of dry-fermented meat products involves ripening, during which molds can overgrow the product surface, produce mycotoxins, and consequently contaminate the final product. Citrinin is a mycotoxin produced mainly by Penicillium citrinum. Data on citrinin occurrence in both food and feed are limited; therefore, there is a need for research on citrinin occurrence in these types of meat products. An LC-MS/MS method for citrinin determination was developed and validated. Sample preparation was performed using immunoaffinity columns, which resulted in clean sample extracts. Method validation included the determination of the limit of detection (LOD), the limit of quantification (LOQ), recovery, linearity, and the matrix effect in accordance with the latest validation guidance. The determined LOD and LOQ were 0.60 µg/kg and 1.98 µg/kg, respectively, showing good method sensitivity. The method was tested for linearity in the calibration range of 1 µg/L to 10 µg/L. The recovery was 100.9%, while the matrix effect was 0.7%. This method was employed in the analysis of 47 samples of dry-fermented sausages collected from local households. Citrinin was not detected in any of these samples, probably because of the short ripening period of the tested sausages, which is at most three months. The developed method shall be used to test other types of traditional dry-cured products, such as prosciuttos, whose surface is usually more heavily overgrown by surface molds due to the longer ripening period.
Keywords: citrinin, dry-fermented meat products, LC-MS/MS, mycotoxins
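The abstract reports LOD and LOQ values but not the estimation route; one common convention (an assumption here) derives them from the calibration curve as LOD = 3.3*sigma/S and LOQ = 10*sigma/S, where sigma is the residual standard deviation and S the slope. The sketch below applies that convention to a made-up calibration series in the stated 1-10 µg/L range; it is not the authors' validation data.

```python
# Illustrative LOD/LOQ estimate from a calibration curve using the common
# 3.3*sigma/S and 10*sigma/S convention (made-up signal data, not the paper's).
import numpy as np

conc   = np.array([1, 2, 4, 6, 8, 10], dtype=float)         # µg/L standards
signal = np.array([1030, 2110, 4050, 6200, 8090, 10120.0])   # peak areas (fake)

slope, intercept = np.polyfit(conc, signal, 1)
residual_sd = np.std(signal - (slope * conc + intercept), ddof=2)

lod = 3.3 * residual_sd / slope
loq = 10.0 * residual_sd / slope
print(f"LOD ~ {lod:.2f} µg/L, LOQ ~ {loq:.2f} µg/L")
```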
Procedia PDF Downloads 118