Search results for: regime equations
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 2475


225 Segmenting 3D Optical Coherence Tomography Images Using a Kalman Filter

Authors: Deniz Guven, Wil Ward, Jinming Duan, Li Bai

Abstract:

Over the past two decades or so, Optical Coherence Tomography (OCT) has been used to diagnose retina and optic nerve diseases. The retinal nerve fibre layer, for example, is a powerful diagnostic marker for detecting and staging glaucoma. With the advances in optical imaging hardware, the adoption of OCT is now commonplace in clinics. More and more OCT images are being generated, and for these OCT images to have clinical applicability, accurate automated OCT image segmentation software is needed. OCT image segmentation is still an active research area, as OCT images are inherently noisy with multiplicative speckle noise. Simple edge detection algorithms are unsuitable for detecting retinal layer boundaries in OCT images. Intensity fluctuations, motion artefacts, and the presence of blood vessels further degrade OCT image quality. In this paper, we introduce a new method for segmenting three-dimensional (3D) OCT images. This involves the use of a Kalman filter, which is commonly used in computer vision for object tracking. The Kalman filter is applied to the 3D OCT image volume to track the retinal layer boundaries through the slices within the volume and thus segment the 3D image. Specifically, after some pre-processing of the OCT images, points on the retinal layer boundaries in the first image are identified, and curve fitting is applied to them such that the layer boundaries can be represented by the coefficients of the curve equations. These coefficients then form the state space for the Kalman filter. The filter then produces an optimal estimate of the current state of the system by updating its previous state using the available measurements, in the form of a feedback control loop. The results show that the algorithm can be used to segment the retinal layers in OCT images. One of the limitations of the current algorithm is that the curve representation of the retinal layer boundary does not work well when the layer boundary splits into two, e.g., at the optic nerve head. This may be resolved by using a different approach to representing the boundaries, such as B-splines or level sets. The use of a Kalman filter shows promise for developing accurate and effective 3D OCT segmentation methods.
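The tracking step described above can be sketched as follows, treating the fitted curve coefficients as the Kalman state and the boundary points detected in each new slice as the measurement. The quadratic boundary model, the noise covariances and all helper names are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def vandermonde(x, degree=2):
    """Design matrix mapping curve coefficients to boundary heights at columns x."""
    return np.vander(x, degree + 1, increasing=True)  # columns: 1, x, x^2

def track_boundary(slices_points, x_cols, degree=2, q=1e-4, r=4.0):
    """Track retinal-layer boundary coefficients through an OCT volume.

    slices_points : list of arrays, detected boundary heights per slice (one per column)
    x_cols        : column positions, normalised to [0, 1] for conditioning
    Returns the filtered coefficient vector for every slice.
    """
    H = vandermonde(x_cols, degree)              # measurement model
    n = degree + 1
    # Initial state from a least-squares curve fit to the first slice
    x_est, *_ = np.linalg.lstsq(H, slices_points[0], rcond=None)
    P = np.eye(n)                                # state covariance
    F = np.eye(n)                                # boundaries drift slowly between slices
    Q = q * np.eye(n)                            # process noise (assumed)
    R = r * np.eye(len(x_cols))                  # measurement noise, pixels^2 (assumed)
    history = [x_est.copy()]
    for z in slices_points[1:]:
        # Predict
        x_est = F @ x_est
        P = F @ P @ F.T + Q
        # Update with the boundary points detected in this slice
        S = H @ P @ H.T + R
        K = P @ H.T @ np.linalg.inv(S)
        x_est = x_est + K @ (z - H @ x_est)
        P = (np.eye(n) - K @ H) @ P
        history.append(x_est.copy())
    return np.array(history)

# Synthetic demo: a slowly drifting parabolic boundary observed with pixel noise
xs = np.linspace(0.0, 1.0, 50)
slices = [120 + 5 * np.sin(s / 10) + 30 * xs - 20 * xs**2
          + np.random.normal(0, 2, xs.size) for s in range(60)]
print(track_boundary(slices, xs)[-1])
```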

Keywords: optical coherence tomography, image segmentation, Kalman filter, object tracking

Procedia PDF Downloads 465
224 Evaluation of the Effect of Lactose Derived Monosaccharide on Galactooligosaccharides Production by β-Galactosidase

Authors: Yenny Paola Morales Cortés, Fabián Rico Rodríguez, Juan Carlos Serrato Bermúdez, Carlos Arturo Martínez Riascos

Abstract:

Numerous benefits of galactooligosaccharides (GOS) as prebiotics have motivated the study of enzymatic processes for their production. These processes have special complexities due to several factors that make high productivity difficult, such as enzyme type, reaction medium pH, substrate concentrations and the presence of inhibitors, among others. In the present work, the production of galactooligosaccharides (with different degrees of polymerization: two, three and four) from lactose was studied. The study considers the formulation of a mathematical model that predicts the production of GOS from lactose using the enzyme β-galactosidase. The effect of pH on the reaction was studied. For that, phosphate buffer was used and three pH values (6.0, 6.5 and 7.0) were evaluated. It was observed that at pH 6.0 the enzymatic activity was insignificant. On the other hand, at pH 7.0 the enzymatic activity was approximately 27 times greater than at pH 6.5. The latter result differs from previously reported results. Therefore, pH 7.0 was chosen as the working pH. Additionally, the enzyme concentration was analyzed, which showed that its effect depends on the pH; the concentration was set at 0.272 mM for the following studies. Afterwards, experiments were performed varying the lactose concentration to evaluate its effect on the process and to generate the data for the adjustment of the mathematical model parameters. The mathematical model considers the reactions of lactose hydrolysis and transgalactosylation for the production of disaccharides and trisaccharides, with their inverse reactions. The production of tetrasaccharides was negligible and, because of that, it was not included in the model. The reaction was monitored by HPLC, and for the quantitative analysis of the experimental data the Matlab programming language was used, including solvers for integrating systems of differential equations (ode15s) and for nonlinear optimization (fminunc). The results confirm that the transgalactosylation and hydrolysis reactions are reversible; additionally, inhibition of GOS production by glucose and galactose is observed. In relation to the production process of galactooligosaccharides, the results show that high initial lactose concentrations are necessary, since they favor the transgalactosylation reaction, while low concentrations favor the hydrolysis reactions.
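A minimal sketch of this kind of kinetic model, written with SciPy in place of Matlab's ode15s: the simplified reaction network, the rate constants and the initial lactose concentration are placeholders, not the scheme or fitted values of the paper.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Simplified reversible reaction network (a stand-in for the paper's model):
#   hydrolysis:            L  <-> Glu + Gal     (k1, k1r)
#   transgalactosylation:  Gal + Gal <-> GOS2   (k2, k2r)  GOS of DP 2
#                          L + Gal   <-> GOS3   (k3, k3r)  GOS of DP 3
# Rate constants below are placeholders, not fitted values from the paper.
k1, k1r = 0.08, 0.005
k2, k2r = 0.02, 0.01
k3, k3r = 0.004, 0.01

def rhs(t, y):
    L, Glu, Gal, G2, G3 = y
    r1 = k1 * L - k1r * Glu * Gal
    r2 = k2 * Gal**2 - k2r * G2
    r3 = k3 * L * Gal - k3r * G3
    return [-r1 - r3,            # lactose
            r1,                  # glucose
            r1 - 2 * r2 - r3,    # galactose
            r2,                  # GOS (DP 2)
            r3]                  # GOS (DP 3)

y0 = [400.0, 0.0, 0.0, 0.0, 0.0]          # initial lactose, g/L (illustrative)
sol = solve_ivp(rhs, (0.0, 300.0), y0, method="LSODA")
print(dict(zip(["L", "Glu", "Gal", "GOS2", "GOS3"], np.round(sol.y[:, -1], 2))))
```

Fitting the rate constants to the HPLC data would then be a nonlinear least-squares problem, for which scipy.optimize.minimize can play the role of fminunc.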

Keywords: β-galactosidase, galactooligosaccharides, inhibition, lactose, Matlab, modeling

Procedia PDF Downloads 334
223 Predictions for the Anisotropy in Thermal Conductivity in Polymers Subjected to Model Flows by Combination of the eXtended Pom-Pom Model and the Stress-Thermal Rule

Authors: David Nieto Simavilla, Wilco M. H. Verbeeten

Abstract:

The viscoelastic behavior of polymeric flows under isothermal conditions has been extensively researched. However, most processing of polymeric materials occurs under non-isothermal conditions, and understanding the linkage between the thermo-physical properties and the process state variables remains a challenge. Furthermore, the cost and energy required to manufacture, recycle and dispose of polymers are strongly affected by the thermo-physical properties and their dependence on state variables such as temperature and stress. Experiments show that thermal conductivity in flowing polymers is anisotropic (i.e. direction dependent). This phenomenon has previously been omitted in the study and simulation of industrially relevant flows. Our work combines experimental evidence of a universal relationship between the thermal conductivity and stress tensors (i.e. the stress-thermal rule) with differential constitutive equations for the viscoelastic behavior of polymers to provide predictions for the anisotropy in thermal conductivity in uniaxial, planar, equibiaxial and shear flow in commercial polymers. A particular focus is placed on the eXtended Pom-Pom model, which is able to capture the non-linear behavior in both shear and elongational flows. The predictions provided by this approach are amenable to implementation in finite element packages, since the viscoelastic and thermal behavior can be described by a single equation. Our results include predictions for flow-induced anisotropy in thermal conductivity for low- and high-density polyethylene, as well as confirmation of our method through comparison with a number of thermoplastic systems for which measurements of anisotropy in thermal conductivity are available. Remarkably, this approach allows for universal predictions of anisotropy in thermal conductivity that can be used in simulations of complex flows in which only the most fundamental rheological behavior of the material has been previously characterized (i.e. there is no need for additional adjustable parameters other than those in the constitutive model). Accounting for polymer anisotropy in thermal conductivity in industrially relevant flows benefits the optimization of manufacturing processes as well as the mechanical and thermal performance of finished plastic products during use.
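One common statement of the stress-thermal rule is that the deviatoric part of the thermal conductivity tensor is proportional to the deviatoric extra-stress tensor delivered by the constitutive model. The sketch below applies that form to a hand-specified uniaxial stress state; the equilibrium conductivity, the stress values and the normalisation of the stress-thermal coefficient are assumptions, not the paper's data.

```python
import numpy as np

def anisotropic_conductivity(tau, k_eq, Ct):
    """Stress-thermal rule (one common form): the deviatoric part of the thermal
    conductivity tensor is proportional to the deviatoric extra-stress tensor,
        k = k_eq * I + Ct * k_eq * dev(tau).
    The normalisation of Ct varies between authors; treat this as a sketch."""
    I = np.eye(3)
    dev_tau = tau - np.trace(tau) / 3.0 * I
    return k_eq * I + Ct * k_eq * dev_tau

# Illustrative numbers (not from the paper): an LDPE-like equilibrium conductivity
# and a uniaxial extra-stress as might be predicted by a constitutive model.
k_eq = 0.33                              # W/(m K), assumed
Ct = 3.0e-8                              # 1/Pa, assumed stress-thermal coefficient
tau = np.diag([2.0e5, -1.0e5, -1.0e5])   # Pa, uniaxial stretch along x (assumed)

k = anisotropic_conductivity(tau, k_eq, Ct)
print("k_parallel / k_perpendicular =", round(k[0, 0] / k[1, 1], 4))
```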

Keywords: anisotropy, differential constitutive models, flow simulations in polymers, thermal conductivity

Procedia PDF Downloads 163
222 Development and Validation of Cylindrical Linear Oscillating Generator

Authors: Sungin Jeong

Abstract:

This paper presents a linear oscillating generator of cylindrical type for hybrid electric vehicle applications. The focus of the study is the suggestion of the optimal model and the design rule of the cylindrical linear oscillating generator with permanent magnets in the back-iron translator. The cylindrical topology is initially modeled using an equivalent magnetic circuit that considers leakage elements. This topology, with permanent magnets in the back-iron translator, is described by the number of phases and the displacement of the stroke. For a more accurate analysis of an oscillating machine, the thrust of the single-phase system and the three-phase system is compared by moving the translator just one pole pitch forward and backward. Through this analysis and comparison, a single-phase system of cylindrical topology is selected as the optimal topology. Finally, the detailed design of the optimal topology takes magnetic saturation effects into account by finite element analysis. In addition, the losses are examined to obtain more accurate results: copper loss in the conductors of the machine windings, eddy-current loss in the permanent magnets, and iron loss in the electrical steel. Considerations of thermal performance and mechanical robustness are essential, because the high temperatures generated by these losses in each region of the generator affect the overall efficiency and the insulation of the machine. Moreover, an electric machine with linear oscillating movement requires a support system that can resist dynamic forces and mechanical masses. As a result, the fatigue analysis of the shaft is carried out using the kinetic equations. Also, the thermal characteristics are analyzed with respect to the operating frequency in each region. The results of this study give a very important design rule for linear oscillating machines, enabling more accurate machine design and more accurate prediction of machine performance.

Keywords: equivalent magnetic circuit, finite element analysis, hybrid electric vehicle, linear oscillating generator

Procedia PDF Downloads 181
221 South-Mediterranean Oaks Forests Management in Changing Climate Case of the National Park of Tlemcen-Algeria

Authors: K. Bencherif, M. Bellifa

Abstract:

The expected climatic changes in North Africa are an increase in both the intensity and the frequency of summer droughts and a reduction in water availability during the growing season. The existing coppices and forest formations in the national park of Tlemcen are dominated by holm oak, zen oak and cork oak. These open, fragmented structures do not seem strong enough to provide durable protection against climate change. According to the observed climatic tendency, the objective is to analyze the climatic context and its evolution, taking into account the likely behavior of the oak species during the next 20-30 years on the one hand, and the landscape context in relation to the most suitable silvicultural models and, especially, to human activities on the other. The study methodology is based on climatic synthesis and on floristic and spatial analysis. Meteorological data of the decade 1989-2009 are used to characterize the current climate. Another approach, based on dendrochronological analysis of a 120-year-old Aleppo pine stem growing in the park, is used to analyze the climate evolution over one century. Results on climate evolution over the next 50 years, obtained through predictive climate models, are exploited to project the climate tendency in the park. Spatially, in each forest unit of the park, stratified sampling is carried out to reduce the degree of heterogeneity and to easily delineate the different stands using GPS. Results from a previous study are used to analyze the anthropogenic factor. According to the forecasts for the period 2025-2100, the number of warm days with a temperature over 25°C would increase from 30 to 70. The monthly mean maximum (M) and minimum (m) temperatures would rise from 30.5°C to 33°C and from 2.3°C to 4.8°C, respectively. With an average drop of 25%, precipitation will be reduced to 411.37 mm. These new data highlight the importance of the fire risk and the water stress which would affect the vegetation and the regeneration process. Spatial analysis highlights the forest and agricultural dimensions of the park compared to urban habitat and bare soils. Maps show both the fragmentation state and the regression of the forest surface (50% of the total surface). At the level of the park, fires have already affected all types of cover, creating low structures with various densities. Silviculturally, zen oak forms pure stands in some places, and this invasion must be considered a natural tendency in which zen oak becomes the structuring species. Climate-related changes have nothing to do with the real impact that South-Mediterranean forests are undergoing because of the human pressures they endure. Nevertheless, the hardwood oak stands in the national park of Tlemcen will have to face unexpected climatic changes such as a changing rainfall regime associated with a lengthening of the period of water stress, heavy rainfall and/or sudden cold snaps. Faced with these new conditions, management based on a mixed, uneven-aged high-forest method promoting the more dynamic species could be an appropriate measure.

Keywords: global warming, mediterranean forest, oak shrub-lands, Tlemcen

Procedia PDF Downloads 375
220 Numerical Analysis of Heat Transfer in Water Channels of the Opposed-Piston Diesel Engine

Authors: Michal Bialy, Marcin Szlachetka, Mateusz Paszko

Abstract:

This paper discusses the CFD results of heat transfer in the water channels in the engine body. The research engine was a newly designed Diesel combustion engine. The engine has three cylinders with three pairs of opposed pistons inside. The engine will be able to generate 100 kW mechanical power at a crankshaft speed of 3,800-4,000 rpm. The water channels are in the engine body along the axis of the three cylinders. These channels surround the three combustion chambers. The water channels transfer the combustion heat generated in the cylinders to the external radiator. This CFD research was based on the ANSYS Fluent software and aimed to optimize the geometry of the water channels so that they carry the maximum heat flow from the combustion chambers to the external radiator. Based on the parallel simulation research, the boundary and initial conditions enabled us to specify average values of key parameters for our numerical analysis. Our simulation used the averaged momentum equations and the two-equation k-epsilon turbulence model; a realizable k-epsilon model with a standard wall function was used. The turbulence intensity factor was 10%. The working fluid mass flow rate was calculated for a single typical value, specified in line with research into the flow rate of automotive engine cooling pumps used in engines of similar power. The research uses a series of geometric models which differ, for instance, in the shape of the cross-section of the channel along the axis of the cylinder. The results are presented as colour distribution maps of temperature, velocity fields and heat flow through the cylinder walls. Due to limitations of space, our paper presents the results for the most representative geometric model only. Acknowledgement: This work has been realized in cooperation with The Construction Office of WSK ‘PZL-KALISZ’ S.A. and is part of Grant Agreement No. POIR.01.02.00-00-0002/15 financed by the Polish National Centre for Research and Development.
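A quick coolant-side energy balance, Q = ṁ·c_p·ΔT, is a useful sanity check on such heat-rejection results; the flow rate, specific heat and temperature rise below are assumed values, not figures from the paper.

```python
# Simple coolant-side energy balance used to sanity-check CFD heat-rejection results.
# All numbers are illustrative assumptions, not values from the paper.
m_dot = 1.2          # kg/s, coolant mass flow rate (assumed, typical automotive pump)
cp = 3800.0          # J/(kg K), specific heat of a water/glycol mixture (assumed)
dT = 8.0             # K, coolant temperature rise across the engine body (assumed)

Q = m_dot * cp * dT  # W, heat absorbed by the coolant
print(f"Heat removed by coolant: {Q / 1000:.1f} kW")
```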

Keywords: ANSYS Fluent, combustion engine, computational fluid dynamics (CFD), cooling system

Procedia PDF Downloads 202
219 Problem Solving: Process or Product? A Mathematics Approach to Problem Solving in Knowledge Management

Authors: A. Giannakopoulos, S. B. Buckley

Abstract:

Problem solving in any field is recognised as a prerequisite for any advancement in knowledge. For example, in South Africa it is one of the seven critical outcomes of education, together with critical thinking. Since a systematic approach to problem solving was initiated in mathematics by the great mathematician George Polya (the father of problem solving), more detailed and comprehensive approaches to problem solving have been developed. This paper is based on the findings by the authors and subsequent recommendations for further research in problem solving and critical thinking. Although the study was done in mathematics, there is no doubt by now in almost anyone's mind that mathematics is involved to a greater or lesser extent in all fields, from symbols, to variables, to equations, to logic, to critical thinking. Therefore it stands to reason that mathematical principles and learning cannot be divorced from any field. In knowledge management situations, the types of problems are similar to mathematics problems, varying from simple to analogical to complex, and from well-structured to ill-structured problems. While simple problems can be solved by employees by adhering to prescribed sequential steps (the process), analogical and complex problems cannot be proceduralised, and that diminishes the organisation's capacity for knowledge creation and innovation. The low efficiency in some organisations and the low pass rates in mathematics prompted the authors to view problem solving as a product. The authors argue that using mathematical approaches to knowledge management problem solving and treating problem solving as a product will empower the employee, through further training, to tackle analogical and complex problems. The question the authors asked was: if it is true that problem solving and critical thinking are indeed basic skills necessary for the advancement of knowledge, why is there so little knowledge management (KM) literature about them, how they are connected, and how they advance KM? This paper concludes with a conceptual model which is based on generally accepted principles of knowledge acquisition (developing a learning organisation), knowledge creation, sharing, disseminating and storing thereof, the five pillars of knowledge management (KM). This model also expands on Gray's framework on KM practices and problem solving, and opens the door to a new approach to training employees in general and domain-specific problems, which can be adapted in any type of organisation.

Keywords: critical thinking, knowledge management, mathematics, problem solving

Procedia PDF Downloads 578
218 Mathematical Modeling and Simulation of Convective Heat Transfer System in Adjustable Flat Collector Orientation for Commercial Solar Dryers

Authors: Adeaga Ibiyemi Iyabo, Adeaga Oyetunde Adeoye

Abstract:

Interestingly, mechanical drying methods have played a major role in the commercialization of the agricultural and agriculture-allied sectors. Overall, drying enhances the storability and preservation of agricultural produce, which in turn promotes its producibility, marketability, salability, and profitability. Recent research has shown that solar drying is easier, more affordable, more controllable, and of course cleaner and purer than other drying methods. It is therefore necessary to persistently appraise solar dryers with a view to improving on the existing advantages. In this paper, mathematical equations were formulated for a solar dryer using the mass conservation law, the material balance law and the least-cost savings method. Computer codes were written in Visual Basic.Net. The developed computer software, which considered Ibadan, a strategic south-western geographical location in Nigeria, was used to investigate the relationship between the variable orientation angle of the flat-plate collector and the solar energy trapped, the derived monthly heat load, the available energy supplied by solar and the fraction of the load supplied by solar energy when 50,000 kg/month of produce was dried over a year. At collector tilt angles of 10°, 13°, 15°, 18° and 20°, the derived monthly heat load, the available energy supplied by solar and the solar fraction were 1211224.63 MJ, 102121.34 MJ, 0.111; 3299274.63 MJ, 10121.34 MJ, 0.132; 5999364.706 MJ, 171222.859 MJ, 0.286; 4211224.63 MJ, 132121.34 MJ, 0.121; and 2200224.63 MJ, 112121.34 MJ, 0.104, respectively. These results show that if the optimum collector angle is not reached, the conditions needed for efficient, cost-reducing drying are difficult to attain. The software therefore reveals that operating at an off-optimum collector angle in commercial solar drying is not worthwhile, hence its importance in deciding the optimum collector orientation angle.
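The sensitivity of the collected solar energy to the collector tilt can be sketched with the standard incidence-angle relation for an equator-facing flat-plate collector; the latitude, declination and beam irradiance below are assumptions, and this calculation is a geometric illustration only, not the least-cost savings method used in the paper.

```python
import numpy as np

# Sketch: beam irradiation intercepted by a south-facing flat-plate collector as a
# function of tilt, using the standard incidence-angle relation for a surface sloped
# toward the equator:
#   cos(theta) = cos(lat - tilt) cos(dec) cos(omega) + sin(lat - tilt) sin(dec)
# Latitude, declination and beam irradiance are assumed values, not the paper's data.
LAT = np.radians(7.4)            # approximate latitude of Ibadan, deg N
DEC = np.radians(-10.0)          # solar declination for an assumed design month
G_BEAM = 750.0                   # W/m^2, assumed beam irradiance

def daily_beam_on_tilt(tilt_deg, n_steps=13):
    """Crudely integrate G_beam*cos(theta) over hour angles -90..90 deg (6 am-6 pm)."""
    tilt = np.radians(tilt_deg)
    omega = np.radians(np.linspace(-90, 90, n_steps))
    cos_theta = (np.cos(LAT - tilt) * np.cos(DEC) * np.cos(omega)
                 + np.sin(LAT - tilt) * np.sin(DEC))
    cos_theta = np.clip(cos_theta, 0.0, None)         # sun behind the collector -> 0
    hours_per_step = 12.0 / (n_steps - 1)
    return G_BEAM * cos_theta.sum() * hours_per_step / 1000  # ~kWh/m^2 per day

for tilt in (10, 13, 15, 18, 20):
    print(f"tilt {tilt:2d} deg: ~{daily_beam_on_tilt(tilt):.2f} kWh/m^2/day")
```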

Keywords: energy, Ibadan, heat load, Visual Basic.NET

Procedia PDF Downloads 394
217 Confidence Envelopes for Parametric Model Selection Inference and Post-Model Selection Inference

Authors: I. M. L. Nadeesha Jayaweera, Adao Alex Trindade

Abstract:

In choosing a candidate model in likelihood-based modeling via an information criterion, the practitioner is often faced with the difficult task of deciding just how far up the ranked list to look. Motivated by this pragmatic necessity, we construct an uncertainty band for a generalized (model selection) information criterion (GIC), defined as a criterion for which the limit in probability is identical to that of the normalized log-likelihood. This includes common special cases such as AIC and BIC. The method starts from the asymptotic normality of the GIC for the joint distribution of the candidate models in an independent and identically distributed (IID) data framework and proceeds by deriving the (asymptotically) exact distribution of the minimum. The calculation of an upper quantile for its distribution then involves the computation of multivariate Gaussian integrals, which is amenable to efficient implementation via the R package "mvtnorm". The performance of the methodology is tested on simulated data by checking the coverage probability of nominal upper quantiles and compared to the bootstrap. Both methods give coverages close to nominal for large samples, but the bootstrap is two orders of magnitude slower. The methodology is subsequently extended to two other commonly used model structures: regression and time series. In the regression case, we derive the corresponding asymptotically exact distribution of the minimum GIC by invoking Lindeberg-Feller type conditions for triangular arrays and are thus able to similarly calculate upper quantiles for its distribution via multivariate Gaussian integration. The bootstrap once again provides a default competing procedure, and we find that similar comparative performance metrics hold as for the IID case. The time series case is complicated by a far more intricate asymptotic regime for the joint distribution of the model GIC statistics. Under a Gaussian likelihood, the default in most packages, one needs to derive the limiting distribution of a normalized quadratic form for a realization from a stationary series. Under conditions on the process satisfied by ARMA models, a multivariate normal limit is once again achieved. The bootstrap can, however, be employed for its computation, whence we are once again in the multivariate Gaussian integration paradigm for upper quantile evaluation. Comparisons of this bootstrap-aided semi-exact method with the full-blown bootstrap once again reveal similar performance but faster computation speeds. One of the most difficult problems in contemporary statistical methodological research is to account for the extra variability introduced by model selection uncertainty, the so-called post-model selection inference (PMSI). We explore ways in which the GIC uncertainty band can be inverted to make inferences on the parameters. This is attempted in the IID case by pivoting the CDF of the asymptotically exact distribution of the minimum GIC. For inference on one parameter at a time and a small number of candidate models, this works well, whence the attained PMSI confidence intervals are wider than the MLE-based Wald intervals, as expected.
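A sketch of the central computation: under joint asymptotic normality of the candidate-model GIC values, the CDF of their minimum reduces to a multivariate normal orthant probability, from which an upper quantile can be found by root search. SciPy is used here in place of the R package "mvtnorm"; the mean vector and covariance matrix are made-up illustrative values, not output of the method.

```python
import numpy as np
from scipy.stats import multivariate_normal
from scipy.optimize import brentq

# If the candidate models' GIC values are asymptotically jointly Gaussian with mean
# mu and covariance Sigma, then  P(min_i Z_i <= t) = 1 - P(Z_i > t for all i),
# and the survival probability is a multivariate normal orthant integral.
# mu and Sigma below are illustrative placeholders.
mu = np.array([10.0, 10.5, 12.0, 11.2])
Sigma = 0.5 * np.eye(4) + 0.5 * np.ones((4, 4))      # equicorrelated, illustrative

def cdf_min(t, mu, Sigma):
    # P(all Z_i > t) = P(-Z <= -t*1), with -Z ~ N(-mu, Sigma)
    surv = multivariate_normal(mean=-mu, cov=Sigma).cdf(np.full(len(mu), -t))
    return 1.0 - surv

def upper_quantile_min(alpha, mu, Sigma, lo=-50.0, hi=50.0):
    """Smallest t with P(min <= t) >= 1 - alpha (an upper quantile of the minimum)."""
    return brentq(lambda t: cdf_min(t, mu, Sigma) - (1.0 - alpha), lo, hi)

print("95% upper quantile of the minimum GIC:",
      round(upper_quantile_min(0.05, mu, Sigma), 3))
```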

Keywords: model selection inference, generalized information criteria, post-model selection, asymptotic theory

Procedia PDF Downloads 68
216 Transient Response of Elastic Structures Subjected to a Fluid Medium

Authors: Helnaz Soltani, J. N. Reddy

Abstract:

The presence of a fluid medium interacting with a structure can lead to failure of the structure. Since developing efficient computational models for fluid-structure interaction (FSI) problems has broad impact on realistic problems encountered in the aerospace, shipbuilding, and oil and gas industries, among others, there is an increasing need for methods to investigate the effect of the fluid domain on the structural response. A coupled finite element formulation of problems involving FSI is an accurate method to predict the response of structures in contact with a fluid medium. This study proposes a finite element approach to study the transient response of structures interacting with a fluid medium. Since beams and plates are considered to be the fundamental elements of almost any structure, the developed method is applied to beam and plate benchmark problems in order to demonstrate its efficiency. The formulation is a combination of various structural theories and the solid-fluid interface boundary condition, which is used to represent the interaction between the solid and fluid regimes. Here, three different beam theories as well as three different plate theories are considered to model the solid medium, and the Navier-Stokes equations are used as the governing equations of the fluid domain. For each theory, a coupled set of equations is derived, where the element matrices of both regimes are calculated by Gaussian quadrature integration. The main feature of the proposed methodology is to model the fluid domain as an added mass, i.e., an external distributed force due to the presence of the fluid. We validate the accuracy of this formulation by means of some numerical examples. Since the formulation presented in this study covers several theories in the literature, the applicability of our proposed approach is independent of any structure geometry. The effect of varying parameters such as the structure thickness ratio, fluid density and immersion depth is studied using numerical simulations. The results indicate that the maximum vertical deflection of the structure is affected considerably in the presence of a fluid medium.
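The added-mass idea can be illustrated on the simplest possible case, a simply supported Euler-Bernoulli beam, where the fluid enters only as an extra distributed mass and the wet natural frequencies follow in closed form. The geometry, material and added-mass estimate below are assumptions for illustration, not the benchmark problems of the paper.

```python
import numpy as np

# For a simply supported Euler-Bernoulli beam, the n-th dry natural frequency is
#   w_n = (n*pi/L)^2 * sqrt(E*I / m),
# and representing the surrounding fluid as an added distributed mass m_a simply
# replaces m by (m + m_a), so  w_wet = w_dry * sqrt(m / (m + m_a)).
# Geometry, material and added-mass values are illustrative assumptions.
E = 210e9                  # Pa, steel (assumed)
rho_s = 7850.0             # kg/m^3
L, b, h = 2.0, 0.10, 0.01  # m, beam length and rectangular cross-section (assumed)
I = b * h**3 / 12.0        # m^4, second moment of area
m = rho_s * b * h          # kg/m, structural mass per unit length

rho_f = 1000.0             # kg/m^3, water
m_added = rho_f * np.pi * (b / 2)**2   # kg/m, simple strip-theory-like estimate (assumed)

for n in (1, 2, 3):
    w_dry = (n * np.pi / L)**2 * np.sqrt(E * I / m)
    w_wet = (n * np.pi / L)**2 * np.sqrt(E * I / (m + m_added))
    print(f"mode {n}: dry {w_dry / 2 / np.pi:7.2f} Hz, wet {w_wet / 2 / np.pi:7.2f} Hz")
```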

Keywords: beam and plate, finite element analysis, fluid-structure interaction, transient response

Procedia PDF Downloads 548
215 Physics-Informed Neural Network for Predicting Strain Demand in Inelastic Pipes under Ground Movement with Geometric and Soil Resistance Nonlinearities

Authors: Pouya Taraghi, Yong Li, Nader Yoosef-Ghodsi, Muntaseer Kainat, Samer Adeeb

Abstract:

Buried pipelines play a crucial role in the transportation of energy products such as oil, gas, and various chemical fluids, ensuring their efficient and safe distribution. However, these pipelines are often susceptible to ground movements caused by geohazards like landslides, fault movements, lateral spreading, and more. Such ground movements can lead to strain-induced failures in pipes, resulting in leaks or explosions, leading to fires, financial losses, environmental contamination, and even loss of human life. Therefore, it is essential to study how buried pipelines respond when traversing geohazard-prone areas to assess the potential impact of ground movement on pipeline design. As such, this study introduces an approach called the Physics-Informed Neural Network (PINN) to predict the strain demand in inelastic pipes subjected to permanent ground displacement (PGD). This method uses a deep learning framework that does not require training data and makes it feasible to consider more realistic assumptions regarding existing nonlinearities. It leverages the underlying physics described by differential equations to approximate the solution. The study analyzes various scenarios involving different geohazard types, PGD values, and crossing angles, comparing the predictions with results obtained from finite element methods. The findings demonstrate a good agreement between the results of the proposed method and the finite element method, highlighting its potential as a simulation-free, data-free, and meshless alternative. This study paves the way for further advancements, such as the simulation-free reliability assessment of pipes subjected to PGD, as part of ongoing research that leverages the proposed method.
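As a minimal illustration of the physics-informed idea (not the paper's network or governing equations), the sketch below trains a small network on the residual of a linear axial pipe-soil interaction equation with an imposed ground-displacement profile, then differentiates the solution to obtain a strain estimate. It assumes PyTorch, and all stiffnesses and the displacement profile are placeholders.

```python
import torch

# Minimal physics-informed neural network (PINN) sketch. As a drastically simplified
# stand-in for pipe-ground interaction, it solves the linear axial pipe-soil equation
#   EA * u''(x) = k * (u(x) - d_g(x))   on [0, L],  with u(0) = u(L) = 0,
# where d_g is an imposed ground displacement profile and u the pipe displacement.
# Strain demand is then u'(x). All parameters are assumptions.
torch.manual_seed(0)
EA, k, L = 1.0e3, 50.0, 10.0                                   # normalised (assumed)
d_g = lambda x: 0.05 * torch.exp(-((x - L / 2) ** 2) / 2.0)    # assumed PGD profile

net = torch.nn.Sequential(
    torch.nn.Linear(1, 32), torch.nn.Tanh(),
    torch.nn.Linear(32, 32), torch.nn.Tanh(),
    torch.nn.Linear(32, 1),
)

def u_hat(x):
    # Hard-enforce u(0) = u(L) = 0 so the loss only needs the PDE residual.
    return x * (L - x) * net(x)

opt = torch.optim.Adam(net.parameters(), lr=1e-3)
for step in range(5000):
    x = (L * torch.rand(256, 1)).requires_grad_(True)          # collocation points
    u = u_hat(x)
    du = torch.autograd.grad(u, x, torch.ones_like(u), create_graph=True)[0]
    d2u = torch.autograd.grad(du, x, torch.ones_like(du), create_graph=True)[0]
    residual = EA * d2u - k * (u - d_g(x))
    loss = residual.pow(2).mean()
    opt.zero_grad(); loss.backward(); opt.step()

# Strain demand estimate along the pipe
x_plot = torch.linspace(0, L, 101).reshape(-1, 1).requires_grad_(True)
strain = torch.autograd.grad(u_hat(x_plot).sum(), x_plot)[0]
print("max |strain| (dimensionless sketch):", strain.abs().max().item())
```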

Keywords: strain demand, inelastic pipe, permanent ground displacement, machine learning, physics-informed neural network

Procedia PDF Downloads 45
214 Development of a Tilt-Rotor Aircraft Model Using System Identification Technique

Authors: Ferdinando Montemari, Antonio Vitale, Nicola Genito, Giovanni Cuciniello

Abstract:

The introduction of tilt-rotor aircraft into the existing civilian air transportation system will provide beneficial effects due to the tilt-rotor's capability to combine the characteristics of a helicopter and a fixed-wing aircraft into one vehicle. The availability of reliable tilt-rotor simulation models supports the development of such vehicles. Indeed, simulation models are required to design automatic control systems that increase safety, reduce the pilot's workload and stress, and ensure the optimal aircraft configuration with respect to flight envelope limits, especially during the most critical flight phases such as conversion from helicopter to aircraft mode and vice versa. This article presents a process to build a simplified tilt-rotor simulation model, derived from the analysis of flight data. The model aims to reproduce the complex dynamics of the tilt-rotor during the in-flight conversion phase. It uses a set of scheduled linear transfer functions to relate the autopilot reference inputs to the most relevant rigid-body state variables. The model also computes information about the rotor flapping dynamics, which is useful to evaluate the aircraft control margin in terms of rotor collective and cyclic commands. The rotor flapping model is derived through a mixed theoretical-empirical approach, which includes physical analytical equations (applicable to the helicopter configuration) and parametric corrective functions. The latter are introduced to best fit the actual rotor behavior and balance the differences existing between the helicopter and the tilt-rotor during flight. Time-domain system identification from flight data is exploited to optimize the model structure and to estimate the model parameters. The presented model-building process was applied to simulated flight data of the ERICA Tilt-Rotor, generated by using a high-fidelity simulation model implemented in the FlightLab environment. The validation of the obtained model was very satisfactory, confirming the validity of the proposed approach.

Keywords: flapping dynamics, flight dynamics, system identification, tilt-rotor modeling and simulation

Procedia PDF Downloads 181
213 CFD Simulation of the Pressure Distribution in the Upper Airway of an Obstructive Sleep Apnea Patient

Authors: Christina Hagen, Pragathi Kamale Gurmurthy, Thorsten M. Buzug

Abstract:

CFD simulations are performed in the upper airway of a patient suffering from obstructive sleep apnea (OSA), a sleep-related breathing disorder characterized by repetitive partial or complete closures of the upper airways. The simulations are aimed at getting a better understanding of the pathophysiological flow patterns in an OSA patient. The simulation is compared to medical data from a sleep endoscopic examination under sedation. A digital model consisting of surface triangles of the upper airway is extracted from the MR images by a region-growing segmentation process, followed by careful manual refinement. The computational domain includes the nasal cavity, with the nostrils as the inlet areas, and the pharyngeal volume, with an outlet underneath the larynx. At the nostrils, a flat inflow velocity profile is prescribed by choosing the velocity such that a volume flow rate of 150 ml/s is reached. Behind the larynx, at the outlet, a pressure of -10 Pa is prescribed. The stationary incompressible Navier-Stokes equations are numerically solved using finite elements. A grid convergence study has been performed. The results show an amplification of the maximal velocity to about 2.5 times the inlet velocity at a constriction of the pharyngeal volume in the area of the tongue. The same region also shows the highest pressure drop, of about 5 Pa. This is in agreement with the sleep endoscopic examinations of the same patient under sedation, which show complete contractions in the area of the tongue. CFD simulations can become a useful tool in the diagnosis and therapy of obstructive sleep apnea by giving insight into the patient's individual fluid-dynamical situation in the upper airways, leading to a better understanding of the disease where experimental measurements are not feasible. Within this study, it could be shown, on the one hand, that constriction areas within the upper airway lead to a significant pressure drop and, on the other hand, that the area of the pressure drop agrees well with the area of contraction.
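The reported velocity amplification at the tongue-base constriction can be cross-checked with a simple continuity/Bernoulli estimate; the cross-sectional areas and air density below are illustrative assumptions, not values measured from the patient geometry.

```python
# Rough continuity/Bernoulli cross-check of the reported constriction effect.
# Areas and density are illustrative assumptions, not the patient's values.
rho = 1.2            # kg/m^3, air
Q = 150e-6           # m^3/s, inspiratory volume flow rate (from the abstract)
A_inlet = 3.0e-4     # m^2, assumed pharyngeal cross-section upstream
A_constr = 1.2e-4    # m^2, assumed cross-section at the tongue-base constriction

v1 = Q / A_inlet
v2 = Q / A_constr                       # continuity: velocity scales with 1/A
dp = 0.5 * rho * (v2**2 - v1**2)        # inviscid Bernoulli pressure drop
print(f"v1 = {v1:.2f} m/s, v2 = {v2:.2f} m/s, "
      f"amplification = {v2 / v1:.1f}x, dp = {dp:.2f} Pa")
```

With these assumed areas the inviscid estimate reproduces a roughly 2.5-fold velocity amplification but gives a smaller pressure drop than the simulation, which also resolves viscous and geometric losses.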

Keywords: biomedical engineering, obstructive sleep apnea, pharynx, upper airways

Procedia PDF Downloads 289
212 Maintaining Energy Security in Natural Gas Pipeline Operations by Empowering Process Safety Principles Through Alarm Management Applications

Authors: Huseyin Sinan Gunesli

Abstract:

Process Safety Management is a disciplined framework for managing the integrity of systems and processes that handle hazardous substances. It relies on good design principles, well-implemented automation systems, and sound operating and maintenance practices. Alarm management systems play a critically important role in the safe and efficient operation of modern industrial plants. In that respect, alarm management is one of the critical factors feeding the safe operation of plants through the application of effective process safety principles. The Trans Anatolian Natural Gas Pipeline (TANAP) is part of the Southern Gas Corridor, which extends from the Caspian Sea to Italy. TANAP transports natural gas from the Shah Deniz gas field of Azerbaijan, and possibly from other neighboring countries, to Turkey and, through the Trans Adriatic Pipeline (TAP), to Europe. TANAP plays a crucial role in maintaining energy security for the region and Europe. In that respect, the application of process safety principles is vital to deliver natural gas safely, reliably and efficiently to shippers both in the region and in Europe. Effective alarm management is one of those process safety principles feeding the safe operation of the TANAP pipeline. The alarm philosophy was designed and implemented in the TANAP pipeline according to the relevant standards. However, it is essential to manage the alarms received in the control room effectively to maintain safe operations. In that respect, TANAP commenced an alarm management and rationalization program in February 2022, after transitioning to the plateau regime and reaching its design parameters. When alarm rationalization started, roughly 2,300 alarms per hour were being received from one of the compressor stations. After applying alarm management principles such as reviewing and removing bad actors and standing, stale, chattering and fleeting alarms, comprehensively reviewing and revising alarm set points through change management, and conducting alarm audits and design verification, the rate was reduced to roughly 40 alarms per hour. After the successful implementation of the alarm management principles specified above, the number of alarms was reduced to industry standards. That significantly improved operator vigilance, allowing operators to focus on the important and critical alarms and avoid any excursion beyond safe operating limits that could lead to potential process safety events. Following the "What Gets Measured, Gets Managed" principle, TANAP has identified Key Performance Indicators (KPIs) to manage process safety principles effectively, and alarm management forms one of the key parameters of those KPIs. However, review and analysis of the alarms were initially performed manually, and without alarm management software, achieving full compliance with international standards is almost infeasible. In that respect, TANAP has started using one of the industry's well-known alarm management applications to maintain full review and analysis of alarms and to define actions as required. That has significantly strengthened TANAP's process safety principles in terms of alarm management.

Keywords: process safety principles, energy security, natural gas pipeline operations, alarm rationalization, alarm management, alarm management application

Procedia PDF Downloads 78
211 Modeling and Simulation of Multiphase Evaporation in High Torque Low Speed Diesel Engine

Authors: Ali Raza, Rizwan Latif, Syed Adnan Qasim, Imran Shafi

Abstract:

Diesel engines are among the most efficient, reliable and adaptable engines. Most research and development to date has been directed towards high-speed diesel engines for commercial use. In these engines, the objective is to optimize acceleration while reducing exhaust emissions to meet international standards. In high-torque low-speed engines, the requirement is altogether different. These types of engines are mostly used in the maritime industry, in agriculture, and in static applications such as compressor engines. By contrast, high-torque low-speed engines are often neglected and are known for low efficiency and high soot emissions. One of the most effective ways to overcome these issues is efficient combustion in the engine cylinder. Fuel spray dynamics play a vital role in defining mixture formation, fuel consumption, combustion efficiency and soot emissions. Therefore, a comprehensive understanding of the fuel spray characteristics and atomization process in a high-torque low-speed diesel engine is of great importance. Evaporation in the combustion chamber has a strong effect on the efficiency of the engine. In this paper, the multiphase evaporation of fuel is modeled for a high-torque low-speed engine using computational fluid dynamics (CFD) codes. Two distinct phases of evaporation are modeled. The basic model equations are derived from the energy conservation equation and the Navier-Stokes equations. The O'Rourke model is used to model the evaporation phases. The results obtained show a considerable effect on the efficiency of the engine. The evaporation rate of the fuel droplets increases with the increase in vapor pressure. An appreciable reduction in droplet size is achieved by adding the convective heat effects in the combustion chamber. By and large, an overall increase in efficiency is observed by modeling the distinct evaporation phases. This increase in efficiency is due to the fact that the droplet size is reduced and the vapor pressure is increased in the engine cylinder.
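The classical d²-law gives a first-order picture of the droplet evaporation that the O'Rourke model refines: the droplet surface area shrinks linearly in time, with an evaporation constant that grows with vapor pressure (through the transfer number) and is further enhanced by convection. The constants below are assumptions for illustration, not engine data.

```python
# d^2-law sketch: d(t)^2 = d0^2 - K*t, a first-order picture of droplet evaporation.
# K grows with vapour pressure and is further enhanced by convection
# (a Sherwood-number correction). All numerical values are illustrative assumptions.
d0 = 50e-6                  # m, initial droplet diameter (assumed)
K_quiescent = 2.0e-7        # m^2/s, assumed evaporation constant without convection
conv_factor = 1.8           # assumed convective enhancement of K

def lifetime(d0, K):
    """Time for the droplet diameter to shrink to zero under the d^2-law."""
    return d0**2 / K

print(f"quiescent lifetime : {lifetime(d0, K_quiescent) * 1e3:.2f} ms")
print(f"convective lifetime: {lifetime(d0, K_quiescent * conv_factor) * 1e3:.2f} ms")
```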

Keywords: diesel fuel, CFD, evaporation, multiphase

Procedia PDF Downloads 321
210 Weakly Non-Linear Stability Analysis of Newtonian Liquids and Nanoliquids in Shallow, Square and Tall High-Porosity Enclosures

Authors: Pradeep G. Siddheshwar, K. M. Lakshmi

Abstract:

The present study deals with a weakly non-linear stability analysis of Rayleigh-Benard-Brinkman convection in nanoliquid-saturated porous enclosures. The modified Buongiorno-Brinkman model (MBBM) is used for the conservation of linear momentum in a nanoliquid-saturated porous medium under the Boussinesq approximation. Thermal equilibrium is imposed between the base liquid and the nanoparticles. The thermophysical properties of the nanoliquid are modeled using phenomenological laws and mixture theory. The fifth-order Lorenz model is derived for the problem and is then reduced to the first-order Ginzburg-Landau equation (GLE) using the multi-scale method. The analytical solution of the GLE for the amplitude is then used to quantify the heat transport in closed form, in terms of the Nusselt number. It is found that the addition of a dilute concentration of nanoparticles significantly enhances the heat transport, and the dominant reason is the high thermal conductivity of the nanoliquid in comparison to that of the base liquid. This aspect of nanoliquids helps in the speedy removal of heat. The porous medium serves the purpose of retaining energy in the system due to its low thermal conductivity. The present model enables a unified study covering the base liquid, the nanoliquid, the base-liquid-saturated porous medium and the nanoliquid-saturated porous medium. Three different types of enclosures are considered for the study by taking different values of the aspect ratio, and it is observed that heat transport in the tall porous enclosure is maximum while that in the shallow one is the least. A detailed discussion is also made on estimating heat transport for different volume fractions of nanoparticles. Results of the single-phase model are shown to be a limiting case of the present study. The study is made for three boundary combinations, viz., free-free, rigid-rigid and rigid-free.
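The generic structure of that reduction can be written compactly; the coefficients Q1, Q3 and c below stand for problem-specific expressions (depending on the Rayleigh number, the porous and nanoliquid parameters, and the boundary combination) that the abstract does not reproduce.

```latex
% Generic Landau/Ginzburg-Landau reduction for the convection amplitude A(\tau):
\begin{align}
  \frac{dA}{d\tau} &= Q_1 A - Q_3 A^3, \qquad
  A(\tau) = \frac{A_0\, e^{Q_1 \tau}}
                 {\sqrt{1 + \dfrac{Q_3 A_0^{2}}{Q_1}\left(e^{2 Q_1 \tau} - 1\right)}},
  \qquad A^{2}(\infty) = \frac{Q_1}{Q_3}, \\
  \mathrm{Nu}(\tau) &= 1 + c\, A^{2}(\tau),
\end{align}
```

so that the steady Nusselt number follows in closed form once Q1, Q3 and c are evaluated for the chosen boundary combination and nanoliquid properties.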

Keywords: Buongiorno model, Ginzburg-Landau equation, Lorenz equations, porous medium

Procedia PDF Downloads 305
209 Dependence of the Photoelectric Exponent on the Source Spectrum of the CT

Authors: Rezvan Ravanfar Haghighi, V. C. Vani, Suresh Perumal, Sabyasachi Chatterjee, Pratik Kumar

Abstract:

The X-ray attenuation coefficient μ(E) of any substance, at energy E, is a sum of the contributions from Compton scattering, μ_Com(E), and the photoelectric effect, μ_Ph(E). In terms of the electron density ρ_e and the effective atomic number Z_eff, μ_Com(E) is proportional to ρ_e f_KN(E), while μ_Ph(E) is proportional to ρ_e Z_eff^x / E^y, with f_KN(E) being the Klein-Nishina formula and x and y the photoelectric exponents. By taking the sample's HU at two different excitation voltages (V = V1, V2) of the CT machine, we can solve for X = ρ_e and Y = ρ_e Z_eff^x from these two independent equations, as is attempted in DECT inversion. Since μ_Com(E) and μ_Ph(E) are both energy dependent, the coefficients of inversion also depend on (a) the source spectrum S(E,V) and (b) the detector efficiency D(E) of the CT machine. In the present paper we tabulate these coefficients of inversion for different practical manifestations of S(E,V) and D(E). The HU(V) values from the CT follow <μ(V)> = <μ_w(V)>[1 + HU(V)/1000], where the subscript 'w' refers to water and the averaging <...> accounts for the source spectrum S(E,V) and the detector efficiency D(E). Linearity of μ(E) with respect to X and Y implies that (a) <μ(V)> is a linear combination of X and Y and (b) for inversion, X and Y can be written as linear combinations of two independent observations <μ(V1)> and <μ(V2)> with V1 ≠ V2. These coefficients of inversion naturally depend upon S(E,V) and D(E). We numerically investigate this dependence for some practical cases, taking V = 100 and 140 kVp, as used for cardiological investigations. The S(E,V) are generated by using the Boone-Seibert source spectrum, superposed on aluminium filters of different thickness l_Al with 7 mm ≤ l_Al ≤ 12 mm, and D(E) is taken to be that of a typical Si(Li) solid-state detector and of a GdOS scintillator detector. In the values of X and Y found by using the calculated inversion coefficients, errors are below 2% for data with solutions of glycerol, sucrose and glucose. For low-Z_eff materials like propionic acid, Z_eff^x is overestimated by 20% while X is within 1%. For high-Z_eff materials like KOH, the value of Z_eff^x is underestimated by 22% while the error in X is +15%. These results imply that the source may have additional filtering beyond the aluminium filter specified by the manufacturer. It is also found that the difference between the inversion coefficients for the two types of detectors is negligible; the detector type therefore does not affect the DECT inversion algorithm used to find the unknown chemical characteristics of the scanned materials, whereas the effect of the source is an important factor in calculating the coefficients of inversion.
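The two-voltage inversion itself is a 2x2 linear solve once the spectrum-averaged coefficients are known. In the sketch below, those coefficients, the water attenuation values and the HU pair are made-up placeholders; in the paper they are computed from the Boone-Seibert spectrum and the detector efficiency.

```python
import numpy as np

# Sketch of the two-equation DECT inversion described above:
#   <mu(V)> = a(V) * X + b(V) * Y,  with X = rho_e and Y = rho_e * Z_eff^x,
# so two scans at V1 != V2 give a 2x2 linear system for (X, Y).
# The coefficients and HU values below are illustrative placeholders.
a = {100: 0.170, 140: 0.160}       # Compton-term coefficients (assumed)
b = {100: 4.0e-5, 140: 1.6e-5}     # photoelectric-term coefficients (assumed)
mu_w = {100: 0.170, 140: 0.155}    # water attenuation at each kVp (assumed)

def invert(hu_100, hu_140):
    """Recover X = rho_e and Y = rho_e * Z_eff^x from HU at 100 and 140 kVp."""
    mu = np.array([mu_w[100] * (1 + hu_100 / 1000.0),
                   mu_w[140] * (1 + hu_140 / 1000.0)])
    A = np.array([[a[100], b[100]],
                  [a[140], b[140]]])
    return np.linalg.solve(A, mu)

X, Y = invert(hu_100=60.0, hu_140=45.0)   # illustrative HU pair
print(f"X (electron-density term) = {X:.4f}, Y = rho_e * Zeff^x = {Y:.4f}")
```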

Keywords: attenuation coefficient, computed tomography, photoelectric effect, source spectrum

Procedia PDF Downloads 383
208 Acceleration Techniques of DEM Simulation for Dynamics of Particle Damping

Authors: Masato Saeki

Abstract:

Presented herein is a novel algorithm for calculating the damping performance of particle dampers. The particle damper is a passive vibration control technique and has many practical applications due to its simple design. It consists of granular materials constrained to move between two ends in the cavity of a primary vibrating system. The damping effect results from the exchange of momentum during the impact of granular materials against the wall of the cavity. This damping has the advantage of being independent of the environment. Therefore, particle damping can be applied in extreme temperature environments, where most conventional dampers would fail. It has been shown experimentally in many papers that the efficiency of particle dampers is high in the case of resonant vibration. In order to use particle dampers effectively, it is necessary to solve the equations of motion for each particle, considering the granularity. The discrete element method (DEM) has been found to be effective for revealing the dynamics of particle damping. In this method, individual particles are assumed to be rigid bodies and interparticle collisions are modeled by mechanical elements such as springs and dashpots. However, the computational cost is significant since the equation of motion for each particle must be solved at each time step. In order to improve the computational efficiency of the DEM, new algorithms are needed. In this study, new algorithms are proposed for implementing a high-performance DEM. On the assumption that the granular particles behave identically in each divided area of the damper container, the contact force of the primary system with all particles can be taken as the product of the number of divided areas and the contact force of the primary system with the granular material in a single divided area. This makes it possible to reduce the calculation time considerably. The validity of this calculation method was investigated, and the calculated results were compared with experimental ones. This paper also presents the results of experimental studies of the performance of particle dampers. It is shown that the particle radius affects the noise level, and that the particle size and the particle material influence the damper performance.
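A minimal sketch of the acceleration idea: only one representative particle per divided region is integrated, and its wall-contact force is multiplied by the number of regions before being applied to the primary system. The spring-dashpot contact law follows the DEM description above, but all parameter values are illustrative assumptions.

```python
import numpy as np

# Minimal 1D DEM sketch: the granular bed is split into N_div notionally identical
# regions, one representative particle per region is integrated, and the wall-contact
# force is multiplied by N_div before being fed back to the primary system.
# Parameters are illustrative assumptions.
M, K, C = 1.0, 4.0e4, 2.0          # primary mass, stiffness, damping (assumed)
F0, OMEGA = 5.0, np.sqrt(K / M)    # harmonic forcing at resonance (assumed)
N_div = 20                         # number of divided regions of the damper cavity
m_p = 0.002                        # kg, particle mass per region (assumed)
gap = 0.01                         # m, cavity clearance (assumed)
k_c, c_c = 1.0e5, 1.0              # contact spring and dashpot (assumed)

def contact_force(rel_x, rel_v):
    """Spring-dashpot force on the particle from the cavity walls (0 when airborne)."""
    if rel_x > gap / 2:            # hitting the +x wall
        pen = rel_x - gap / 2
    elif rel_x < -gap / 2:         # hitting the -x wall
        pen = rel_x + gap / 2
    else:
        return 0.0
    return -(k_c * pen + c_c * rel_v)

dt, T = 1.0e-5, 2.0
x = v = 0.0                        # primary system state
xp = vp = 0.0                      # representative particle state (absolute)
amp = 0.0
for i in range(int(T / dt)):
    t = i * dt
    f_p = contact_force(xp - x, vp - v)             # force on the particle
    f_on_primary = -N_div * f_p                     # scaled reaction on the primary
    a = (F0 * np.sin(OMEGA * t) - K * x - C * v + f_on_primary) / M
    ap = f_p / m_p
    v += a * dt;  x += v * dt                       # semi-implicit Euler
    vp += ap * dt; xp += vp * dt
    if t > 0.75 * T:
        amp = max(amp, abs(x))                      # steady-state amplitude estimate
print(f"steady-state amplitude with particle damper: {amp * 1000:.2f} mm")
```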

Keywords: particle damping, discrete element method (DEM), granular materials, numerical analysis, equivalent noise level

Procedia PDF Downloads 443
207 Growth Mechanism and Sensing Behaviour of Sn Doped ZnO Nanoprisms Prepared by Thermal Evaporation Technique

Authors: Sudip Kumar Sinha, Saptarshi Ghosh

Abstract:

While zinc oxide (ZnO) superstructures attract continual attention for their unique optical features, this versatile material has also been widely used to obtain tailored electronic properties through distinct morphologies. Yet the use of novel 1D ZnO nanostructures (pristine or doped) for volatile sensing applications still has ample scope to accommodate new, unconventional morphologies. In the last two decades, solid-state sensors have attracted much interest for their relevance in identifying pollutant, toxic and other industrial gases. In particular, gas sensors based on semiconducting metal oxide (wide Eg) nanomaterials have recently attracted intensive attention owing to their high sensitivity and fast response and recovery times. When these materials are exposed to air, atmospheric O2 dissociates and is adsorbed on the sensor surface, trapping outermost-shell electrons. A depletion zone is thus formed at the sensor surface, which raises the potential barrier height at the grain boundaries. Once the sensor is exposed to a target gas, the chemical interaction between the chemisorbed oxygen and the specific gas liberates the trapped electrons. Altering the amount of adsorbate is therefore an effective approach to improving the sensitivity to any target gas or vapour molecule. This study presents the spontaneous, self-catalytic creation of Sn-doped ZnO hexagonal nanoprisms on Si(100) substrates through a thermal evaporation-condensation method, and their subsequent deployment for volatile sensing. In particular, the sensors were utilized to detect molecules of ethanol, acetone and ammonia below their permissible exposure limits, which returned sensitivities of around 85%, 80% and 50%, respectively. The influence of the Sn concentration on the growth, microstructural and optical properties of the nanoprisms, along with its role in augmenting the sensing parameters, is detailed. The single-crystalline nanostructures have a typical diameter ranging from 300 to 500 nm and a length that extends up to a few micrometers. HRTEM images confirmed the hexagonal crystallography of the nanoprisms, while the SAED pattern confirmed their single-crystalline nature. The growth habit is along the low-index <0001> directions. The growth mechanism of the as-deposited nanostructures is directly influenced by the varying supersaturation ratio, the fairly high substrate temperatures, and specific surface defects on certain crystallographic planes, all acting cooperatively to decide the final product morphology. Room-temperature photoluminescence (PL) spectra of these rod-like structures exhibit a weak ultraviolet (UV) emission peak at around 380 nm and a broad green emission peak in the 505 nm regime. An estimate of the sensing parameters against the dispensed target molecules highlights the potential of the nanoprisms as an effective volatile sensing material. Sn-doped ZnO nanostructures with this unique prismatic morphology may find important applications in various chemical sensors as well as other potential nanodevices.

Keywords: gas sensor, HRTEM, photoluminescence, ultraviolet, zinc oxide

Procedia PDF Downloads 223
206 Investigating the Application of Composting for Phosphorous Recovery from Alum Precipitated and Ferric Precipitated Sludge

Authors: Saba Vahedi, Qiuyan Yuan

Abstract:

A vast majority of small municipalities and First Nations communities in Manitoba operate facultative or aerated lagoons for wastewater treatment, and most of them use ferric chloride (FeCl3) or alum (usually in the form of Al2(SO4)3·18H2O) as coagulant for phosphorus removal. The insoluble particles that form during the coagulation process result in a massive volume of sludge, which is typically left in the lagoons. Therefore, phosphorus, which is a valuable nutrient, is lost in the process. In this project, the complete recovery of phosphorus from the sludge produced in the process of phosphorus removal from wastewater lagoons is investigated by using a controlled composting process. Objective: The main objective of this project is to compost the alum-precipitated sludge produced in the process of phosphorus removal in wastewater treatment lagoons in Manitoba. The ultimate goal is to have a product that meets the characteristics of Class A biosolids in Canada. A number of parameters will be evaluated, including the bioavailability of nutrients in the composted sludge and the toxicity of the sludge. The bioavailability of phosphorus in the final compost product will be investigated by using the compost as a source of P and comparing it to a commercial fertilizer (monoammonium phosphate, MAP). Experimental setup: Three batches of compost piles have been run using the alum sludge and the ferric sludge. The alum phosphate sludge was collected from an innovative phosphorus removal system at the RM of Taché. The collected sludge was sent to the ALS laboratory to analyze the C/N ratio, TP, TN, TC, TAl, moisture content, pH, and metal concentrations. Wood chips, used as the bulking agent, were collected at the RM of Taché landfill. The sludge in the three piles was mixed with three parts dry wood chips. The mixture was turned manually every week. The temperature, the moisture content, and the pH were monitored twice a week. The temperature of the mixtures remained above 55°C for two weeks. Each pile was kept for ten weeks to mature. The final products have been applied to two different plants to investigate the bioavailability of P in the compost product as well as its toxicity. The two plant species, canola and switchgrass, were selected based on their sensitivity, growth time, and compatibility with the Manitoba climate. The pots are weighed and watered every day to replenish moisture lost by evapotranspiration. A control experiment is also conducted using topsoil and chemical fertilizer (MAP). The experiment is carried out in a growth room maintained at a day/night temperature regime of 25/15°C, a relative humidity of 60%, and a corresponding photoperiod of 16 h. A total of three cropping (seeding to harvest) cycles need to be completed, with each cycle 50 d in duration. Harvested biomass must be weighed and oven-dried for 72 h at 60°C. In the first cycle, canola and switchgrass grown in the alum sludge compost were harvested at day 50, oven-dried, chopped and finely ground in a mill grinder (<0.2 mm), and digested using the wet oxidation method, in which plant tissue samples are digested with H2SO4 (99.7%) and H2O2 (30%) in an acid block digester. The digested plant samples need to be analyzed to measure the amount of total phosphorus.

Keywords: wastewater treatment, phosphorus removal, composting alum sludge, bioavailability of phosphorus

Procedia PDF Downloads 60
205 Gender Differences in Morphological Predictors of Running Ability: A Comprehensive Analysis of Male and Female Athletes in Cape Coast Metropolis, Ghana

Authors: Stephen Anim, Emmanuel O. Sarpong, Daniel Apaak

Abstract:

This study investigates the relationship between morphological predictors and running ability, emphasizing gender-specific variations among male and female athletes in Cape Coast Metropolis (CCM), Ghana. The dynamic interplay between an athlete's physique and their performance capabilities holds particular relevance in sports science, influencing training methodologies and talent identification processes. The research aims to contribute comprehensive insights into the morphological determinants of running proficiency, with a specific focus on the local athletic community in Cape Coast Metropolis. Utilizing a correlational research design, a thorough analysis of 22 morphological features was conducted against 50 m dash performance: body weight, 6 body-length measurements, 7 girth measurements and knee diameter, and 7 skinfold measurements, among male and female athletes. The study involved 420 athletes, both male (N=210) and female (N=210), aged 16-22, from 10 Senior High Schools (SHS) in the Cape Coast Metropolis, providing a representative sample of the local athletic community. The collected data were statistically analysed using means and standard deviations, and stepwise multiple regression was used to determine how the morphological variables contribute to and predict running proficiency outcomes. The investigation revealed that athletes from Senior High Schools (SHS) in Cape Coast Metropolis (CCM) exhibit well-developed physiques and sufficient fitness levels suitable for overall athletic performance, taking gender differences into account. Moreover, the findings suggested that approximately 77% of running ability could be attributed to morphological factors, leading to distinct predictive models for male and female athletes within SHS in CCM, Ghana. Consequently, the formulated equations hold promise for predicting running ability among young athletes, particularly in the context of SHS environments.
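Forward stepwise selection of this kind can be sketched in a few lines: at each step the candidate feature giving the largest reduction in residual sum of squares is added to the least-squares model. The data below are randomly generated placeholders, not the study's anthropometric measurements.

```python
import numpy as np

# Forward stepwise selection sketch: repeatedly add the morphological feature that
# most reduces the residual sum of squares of a least-squares fit to 50 m dash time.
# The data are randomly generated placeholders, not the study's measurements.
rng = np.random.default_rng(1)
n, p = 210, 22                                   # athletes per sex, candidate features
X = rng.normal(size=(n, p))
y = 8.0 - 0.4 * X[:, 0] + 0.3 * X[:, 5] + rng.normal(scale=0.3, size=n)

def rss(X_sub, y):
    A = np.column_stack([np.ones(len(y)), X_sub])
    beta, *_ = np.linalg.lstsq(A, y, rcond=None)
    resid = y - A @ beta
    return resid @ resid

selected, remaining = [], list(range(p))
current_rss = np.sum((y - y.mean())**2)
for _ in range(5):                               # keep at most five predictors
    scores = {j: rss(X[:, selected + [j]], y) for j in remaining}
    best = min(scores, key=scores.get)
    if current_rss - scores[best] < 1e-3 * current_rss:   # negligible improvement
        break
    selected.append(best); remaining.remove(best)
    current_rss = scores[best]

r2 = 1 - current_rss / np.sum((y - y.mean())**2)
print("selected feature indices:", selected, " R^2 =", round(r2, 3))
```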

Keywords: body fat, body girth, body length, morphological features, running ability, senior high school

Procedia PDF Downloads 40
204 The Role of Agroforestry Practices in Climate Change Mitigation in Western Kenya

Authors: Humphrey Agevi, Harrison Tsingalia, Richard Onwonga, Shem Kuyah

Abstract:

Most of the world's ecosystems have been affected by the effects of climate change. Efforts have been made to mitigate climate change effects. While most studies have been conducted in forest ecosystems and pure plantations, trees on farms, including agroforestry, have only received attention recently. Agroforestry systems and tree cover on agricultural lands make an important contribution to climate change mitigation but are not systematically accounted for in global carbon budgets. This study sought to: (i) determine tree diversity in different agroforestry practices; (ii) determine tree biomass in different agroforestry practices. The study area was determined according to the Land Degradation Surveillance Framework (LDSF). Two study sites were established. At each site, a 5 km x 10 km block was established on a map using Google Maps and satellite images. Waypoints were then uploaded to a GPS, which helped locate the blocks on the ground. In each block, eight (8) sentinel clusters measuring 1 km x 1 km were randomized. Randomization was done in a common spreadsheet program, and the points were later downloaded to a Global Positioning System (GPS) so that, during surveys, the researchers were able to navigate to the sampling points. In each sentinel cluster, two farm boundaries were randomly identified, for convenience and to avoid bias. This led to 16 farms in Kakamega South and 16 farms in Kakamega North, totalling 32 farms at the Kakamega site. Species diversity was determined using the Shannon-Wiener index. Tree biomass was determined using allometric equations. Two agroforestry practices were found: homegarden and hedgerow. Species diversity ranged from 0.25 to 2.7 with a mean of 1.8 ± 0.10. Species diversity in homegardens ranged from 1 to 2.7 with a mean of 1.98 ± 0.14. Hedgerow species diversity ranged from 0.25 to 2.52 with a mean of 1.74 ± 0.11. Total aboveground biomass (AGB) was 13.96 ± 0.37 Mg ha-1. Homegardens, which had the highest abundance of trees, had higher aboveground biomass (AGB) than hedgerow agroforestry. This study is timely, as carbon budgets from agroforestry can be incorporated into global carbon budgets and improve the accuracy of national reporting of greenhouse gases.
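
Neither the Shannon-Wiener computation nor the specific allometric model is spelled out in the abstract; the sketch below illustrates both steps in Python. The generic power-law allometry AGB = a·DBH^b and its coefficients are assumptions for illustration, not the equation actually used in the study.

```python
# Hedged sketch: Shannon-Wiener diversity and a generic allometric AGB estimate.
# Tree counts, DBH values, and allometric coefficients are illustrative only.
import math

def shannon_wiener(counts):
    """H' = -sum(p_i * ln p_i) over species proportions p_i."""
    total = sum(counts)
    return -sum((c / total) * math.log(c / total) for c in counts if c > 0)

def agb_kg(dbh_cm, a=0.091, b=2.472):
    """Generic power-law allometry AGB = a * DBH^b (kg per tree); a and b are placeholders."""
    return a * dbh_cm ** b

# Example farm: species abundances and stem diameters (cm) on one hypothetical homegarden plot.
species_counts = [12, 7, 3, 1]           # four species
diameters = [14.2, 9.8, 22.5, 31.0]      # DBH of sampled trees
h_prime = shannon_wiener(species_counts)
plot_agb_mg = sum(agb_kg(d) for d in diameters) / 1000.0  # Mg of biomass on the plot
print(f"H' = {h_prime:.2f}, plot AGB = {plot_agb_mg:.2f} Mg")
```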

Keywords: agroforestry, allometric equations, biomass, climate change

Procedia PDF Downloads 337
203 The Impact of Small-Scale Irrigation on the Income of Rural Households and Determinants of Its Adoption: Evidence from Dehana Woreda, Ethiopia

Authors: Wondmnew Derebe Yohannis

Abstract:

Irrigation plays a crucial role in rural development strategies, impacting both annual household income and livelihood. This research aims to evaluate the factors influencing irrigation participation and assess the impact of small-scale irrigation on rural households' annual income. The study collected data from 287 farmers in the Dehana district of northern Ethiopia. The research investigates the driving forces behind farmers' decisions to adopt small-scale irrigation and its effect on annual income gain. The findings reveal that several factors positively influence the probability of adoption, including access to credit, cultivated land size, livestock holding, extension contact, and the education level of the household head. Conversely, the distance to local markets and water schemes negatively affects the likelihood of adoption. To understand the differences in annual income between farm households that adopted irrigation and those that did not, a simultaneous equations model with endogenous switching regression is estimated. This accounts for the heterogeneity in the adoption decision and unobservable characteristics of farmers and their farms. The analysis compares the expected income gain under actual and counterfactual scenarios, considering whether the farm household adopted irrigation or not. The study reveals that the group of farm households that adopted irrigation has distinct characteristics compared to those that did not adopt it. Furthermore, the research demonstrates that the adoption of irrigation practices leads to an increase in annual income. Interestingly, the impact of small-scale irrigation on annual income is greater for the farm households that actually adopted irrigation than in the counterfactual scenario where they did not adopt. Based on the findings, the researcher concludes that small-scale irrigation is a practical solution for meeting household financial needs in the study area. It is recommended that investments in small-scale irrigation continue in order to further improve the livelihoods of rural farming communities by enhancing annual income gains.
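
The endogenous switching regression itself is not reproduced in the abstract; the following Python sketch shows a simplified two-step (Heckman-style) version of it, with a probit adoption equation, regime-specific income equations corrected by inverse Mills ratios, and a counterfactual prediction for adopters. Column names and data are hypothetical, and a full ESR would normally be estimated by full-information maximum likelihood rather than this two-step shortcut.

```python
# Hedged sketch: two-step endogenous switching regression for irrigation adoption.
# Data and variable names are synthetic; the study itself would typically estimate
# the ESR model by full-information maximum likelihood rather than this shortcut.
import numpy as np
import pandas as pd
import statsmodels.api as sm
from scipy.stats import norm

rng = np.random.default_rng(1)
n = 287
df = pd.DataFrame({
    "credit": rng.integers(0, 2, n),
    "land_ha": rng.gamma(2.0, 0.5, n),
    "education": rng.integers(0, 12, n),
    "dist_market_km": rng.uniform(1, 20, n),
})
latent = 0.8 * df["credit"] + 0.5 * df["land_ha"] - 0.1 * df["dist_market_km"] + rng.normal(0, 1, n)
df["adopt"] = (latent > 0).astype(int)
df["income"] = 2000 + 800 * df["adopt"] + 300 * df["land_ha"] + 50 * df["education"] + rng.normal(0, 200, n)

# Step 1: probit adoption (selection) equation and inverse Mills ratios.
Z = sm.add_constant(df[["credit", "land_ha", "education", "dist_market_km"]])
probit = sm.Probit(df["adopt"], Z).fit(disp=False)
zg = np.asarray(Z @ probit.params)             # linear index z'gamma
imr_adopt = norm.pdf(zg) / norm.cdf(zg)        # correction term for adopters
imr_non = -norm.pdf(zg) / (1 - norm.cdf(zg))   # correction term for non-adopters

# Step 2: regime-specific income equations with the selection-correction term.
X_cols = ["land_ha", "education"]
is_a = df["adopt"] == 1
X1 = sm.add_constant(df.loc[is_a, X_cols].assign(imr=imr_adopt[is_a.values]))
X0 = sm.add_constant(df.loc[~is_a, X_cols].assign(imr=imr_non[(~is_a).values]))
m1 = sm.OLS(df.loc[is_a, "income"], X1).fit()
m0 = sm.OLS(df.loc[~is_a, "income"], X0).fit()

# Counterfactual income of adopters had they not adopted, and the ATT.
actual = m1.predict(X1)
counterfactual = m0.predict(X1)  # same covariates, non-adopter regime coefficients
print("ATT (annual income gain from adoption):", round(float((actual - counterfactual).mean()), 1))
```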

Keywords: small-scale irrigation, income, rural farm households, endogenous switching regression, user, non-user

Procedia PDF Downloads 38
202 Virtual Approach to Simulating Geotechnical Problems under Both Static and Dynamic Conditions

Authors: Varvara Roubtsova, Mohamed Chekired

Abstract:

Recent studies on the numerical simulation of geotechnical problems show the importance of considering the soil micro-structure. At this scale, soil is a discrete particle medium in which the particles can interact with each other and with water flow under external forces, structural loads, or natural events. This paper presents research conducted in a virtual laboratory named SiGran, developed at IREQ (Institut de recherche d’Hydro-Quebec) for the purpose of investigating a broad range of problems encountered in geotechnics. Using the Discrete Element Method (DEM), SiGran simulates granular materials directly by applying Newton’s laws to each particle. The water flow is simulated using the Marker and Cell (MAC) method to solve the full form of the Navier-Stokes equations for an incompressible viscous liquid. In this paper, examples of numerical simulations and their comparisons with real experiments have been selected to show the complexity of geotechnical research at the micro level. These examples describe transient flows into a porous medium, the interaction of particles in a viscous flow, the compaction of saturated and unsaturated soils, and the phenomenon of liquefaction under seismic load. They also provide an opportunity to present SiGran’s capacity to compute the distribution and evolution of energy by type (particle kinetic energy, particle internal elastic energy, energy dissipated by friction or as a result of viscous interaction with the flow, and so on). This work also includes the first attempts to apply micro-scale discrete results at a macro continuum level, where the Smoothed Particle Hydrodynamics (SPH) method was used to solve the system of governing equations. The material behaviour equation is based on the results of simulations carried out at the micro level. The possibility of combining the three methods (DEM, MAC and SPH) is discussed.
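
A DEM step of the kind described (Newton's laws applied particle by particle) can be illustrated with a minimal sketch. The linear spring-dashpot contact law, the particle properties, and the explicit integration scheme below are generic assumptions for illustration, not the contact model actually implemented in SiGran.

```python
# Hedged sketch: one explicit DEM time step with a linear spring-dashpot normal contact.
# Particle properties and the contact law are illustrative, not SiGran's actual model.
import numpy as np

def dem_step(pos, vel, radius, mass, dt, k_n=1e5, c_n=5.0, g=np.array([0.0, -9.81])):
    """Advance particles by dt: gravity plus pairwise linear spring-dashpot contacts."""
    force = mass[:, None] * g                       # gravity on every particle
    n = len(pos)
    for i in range(n):
        for j in range(i + 1, n):
            rij = pos[j] - pos[i]
            dist = np.linalg.norm(rij)
            overlap = radius[i] + radius[j] - dist
            if overlap > 0.0:                       # particles in contact
                normal = rij / dist
                rel_vn = np.dot(vel[j] - vel[i], normal)
                fn = k_n * overlap - c_n * rel_vn   # spring + dashpot magnitude
                force[i] -= fn * normal
                force[j] += fn * normal
    vel = vel + dt * force / mass[:, None]          # Newton's second law
    pos = pos + dt * vel                            # semi-implicit Euler update
    return pos, vel

# Toy example: two slightly overlapping particles released under gravity.
pos = np.array([[0.0, 0.0], [0.0, 0.019]])
vel = np.zeros((2, 2))
radius, mass = np.array([0.01, 0.01]), np.array([2e-3, 2e-3])
for _ in range(200):
    pos, vel = dem_step(pos, vel, radius, mass, dt=1e-5)
print(pos)
```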

Keywords: discrete element method, marker and cell method, numerical simulation, multi-scale simulations, smoothed particle hydrodynamics

Procedia PDF Downloads 287
201 Numerical Simulation of the Production of Ceramic Pigments Using Microwave Radiation: An Energy Efficiency Study Towards the Decarbonization of the Pigment Sector

Authors: Pedro A. V. Ramos, Duarte M. S. Albuquerque, José C. F. Pereira

Abstract:

Global warming mitigation is one of the main challenges of this century, with the net balance of greenhouse gas (GHG) emissions having to be null or negative by 2050. Industry electrification is one of the main paths to achieving carbon neutrality within the goals of the Paris Agreement. Microwave heating is becoming a popular industrial heating mechanism due to the absence of direct GHG emissions, but also due to its rapid, volumetric, and efficient heating. In the present study, a mathematical model is used to simulate the production of two ceramic pigments by microwave heating at high temperatures (above 1200 °C). The two pigments studied were the yellow (Pr, Zr)SiO₂ and the brown (Ti, Sb, Cr)O₂. The chemical conversion of reactants into products was included in the model by using the kinetic triplet obtained with the model-fitting method and experimental data available in the literature. The coupling between the electromagnetic, thermal, and chemical interfaces was also included. The simulations were computed in COMSOL Multiphysics. The geometry includes a moving plunger to allow for cavity impedance matching and thus maximize the electromagnetic efficiency. To accomplish this goal, a MATLAB controller was developed to automatically search for the position of the moving plunger that guarantees the maximum efficiency. The power is automatically and permanently adjusted during the transient simulation to impose a stationary regime and total conversion, the two requisites of every converged solution. Both 2D and 3D geometries were used, and a parametric study regarding the axial bed velocity and the heat transfer coefficient at the boundaries was performed. Moreover, a Verification and Validation study was carried out by comparing the conversion profiles obtained numerically with the experimental data available in the literature; the numerical uncertainty was also estimated to attest to the reliability of the results. The results show that the model-fitting method employed in this work is a suitable tool to predict the chemical conversion of reactants into the pigment, showing excellent agreement between the numerical results and the experimental data. Moreover, it was demonstrated that higher velocities lead to higher thermal efficiencies and thus lower energy consumption during the process. This work concludes that the electromagnetic heating of materials having a high loss tangent and low thermal conductivity, like ceramic materials, may be a challenge due to the presence of hot spots, which may jeopardize the product quality or even the experimental apparatus. The MATLAB controller increased the electromagnetic efficiency by 25%, and a global efficiency of 54% was obtained for the titanate brown pigment. This work shows that electromagnetic heating will be a key technology in the decarbonization of the ceramic sector, as reductions of up to 98% in specific GHG emissions were obtained when compared to the conventional process. Furthermore, numerical simulation appears to be a suitable technique for the design and optimization of microwave applicators, showing high agreement with experimental data.
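
The kinetic triplet mentioned above (pre-exponential factor, activation energy, and reaction model f(α)) enters such a simulation through a conversion rate law of the form dα/dt = A·exp(-Ea/RT)·f(α). The sketch below integrates a law of this form for an assumed first-order model, assumed kinetic parameters, and an assumed heating profile; it is an illustration of the idea, not the actual triplet or temperature history used for either pigment.

```python
# Hedged sketch: integrating a solid-state conversion law dα/dt = A exp(-Ea/RT) f(α)
# under a prescribed heating profile. A, Ea, f(α), and the profile are placeholders.
import numpy as np
from scipy.integrate import solve_ivp

R = 8.314          # J/(mol K)
A = 1.0e9          # 1/s, assumed pre-exponential factor
Ea = 250e3         # J/mol, assumed activation energy
f = lambda alpha: 1.0 - alpha          # assumed first-order reaction model

def temperature(t):
    """Simple ramp-and-hold profile up to ~1500 K (placeholder for the microwave heating history)."""
    return min(300.0 + 2.0 * t, 1500.0)

def rate(t, y):
    alpha = min(max(y[0], 0.0), 1.0)   # keep conversion within [0, 1]
    T = temperature(t)
    return [A * np.exp(-Ea / (R * T)) * f(alpha)]

sol = solve_ivp(rate, (0.0, 1200.0), [0.0], max_step=1.0)
print(f"Conversion after {sol.t[-1]:.0f} s: {sol.y[0][-1]:.3f}")
```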

Keywords: automatic impedance matching, ceramic pigments, efficiency maximization, high-temperature microwave heating, input power control, numerical simulation

Procedia PDF Downloads 126
200 Study of the Design and Simulation Work for an Artificial Heart

Authors: Mohammed Eltayeb Salih Elamin

Abstract:

This study discusses the concept of an artificial heart using engineering principles of fluid mechanics and the characteristics of non-Newtonian fluids. The purpose is to serve heart patients and improve aspects of their lives, since statistics reviewed from the World Health Organization (WHO) indicate that diseases of the heart and blood vessels are the leading cause of death in the world: about 30% of deaths worldwide are attributed to heart disease, making heart failure the number one cause of death globally. Since heart transplantation is very difficult and not always available, the idea of the artificial heart becomes essential. It is therefore important to participate in developing this idea by identifying the weaknesses of earlier designs and improving on them for the benefit of humanity. In this study, a pump was designed to pump blood to the human body, taking into account all the factors that would allow it to replace the human heart and to operate with the same characteristics and efficiency as the human heart. The pump was designed on the principle of the diaphragm pump. Three blood models were obtained from real blood characteristics, and all of these models were simulated in order to study the effect of the pumping work on the fluid. After that, the properties of the pump were studied using ANSYS 15 software to simulate blood flow inside the pump and the stresses it will experience. The 3D geometry was modelled in SOLIDWORKS and then imported into the ANSYS Design Modeler, which is used during the pre-processing procedure. The solver used throughout the study is ANSYS FLUENT, a tool used to analyse fluid flow problems; the well-known term for this branch of science is Computational Fluid Dynamics (CFD). Design Modeler is used during pre-processing, a crucial step before solving the fluid flow problem. Some of the key operations are geometry creation, which specifies the domain of the fluid flow problem; mesh generation, which discretizes the domain so that the governing equations can be solved in each cell; and specification of the boundary zones where the boundary conditions of the problem are applied. Finally, the pre-processed work is saved in the ANSYS Workbench for future continuation of the work.
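
The abstract mentions three blood models based on real blood characteristics but does not name them; common non-Newtonian descriptions used for blood in CFD are the power-law and Carreau models, sketched below. The parameter values are typical literature-style numbers and are assumptions here, not the values used in this study.

```python
# Hedged sketch: two non-Newtonian viscosity models commonly used for blood in CFD.
# Parameter values are typical textbook-style numbers, not those of the study.
import numpy as np

def power_law_viscosity(shear_rate, k=0.017, n=0.708):
    """Apparent viscosity (Pa*s) of a power-law fluid: mu = k * gamma_dot^(n-1)."""
    return k * np.power(shear_rate, n - 1.0)

def carreau_viscosity(shear_rate, mu_inf=0.0035, mu_0=0.056, lam=3.313, n=0.3568):
    """Carreau model: mu = mu_inf + (mu_0 - mu_inf) * (1 + (lam*gamma_dot)^2)^((n-1)/2)."""
    return mu_inf + (mu_0 - mu_inf) * (1.0 + (lam * shear_rate) ** 2) ** ((n - 1.0) / 2.0)

shear_rates = np.array([1.0, 10.0, 100.0, 1000.0])   # 1/s, spanning diastole to peak systole
for g in shear_rates:
    print(f"gamma_dot = {g:7.1f} 1/s | power-law mu = {power_law_viscosity(g):.4f} Pa*s"
          f" | Carreau mu = {carreau_viscosity(g):.4f} Pa*s")
```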

Keywords: artificial heart, computational fluid dynamics, heart chamber, design, pump

Procedia PDF Downloads 442
199 Simulation of Elastic Bodies through Discrete Element Method, Coupled with a Nested Overlapping Grid Fluid Flow Solver

Authors: Paolo Sassi, Jorge Freiria, Gabriel Usera

Abstract:

In this work, a finite volume fluid flow solver is coupled with a discrete element method module for the simulation of the dynamics of free and elastic bodies in interaction with the fluid and with each other. The open-source fluid flow solver, caffa3d.MBRi, includes the capability to work with nested overlapping grids in order to easily refine the grid in the region where the bodies are moving. To do so, it is necessary to implement a recognition function able to identify the specific mesh block in which each body is moving. The set of overlapping finer grids can be displaced along with the set of bodies being simulated. The interaction between the bodies and the fluid is computed through a two-way coupling. The velocity field of the fluid is first interpolated to determine the drag force on each object. After solving the objects' displacements, subject to the elastic bonding among them, the force is applied back onto the fluid through a Gaussian smoothing over the cells near the position of each object. The fishnet is represented as lumped masses connected by elastic lines. The internal forces are derived from the elasticity of these lines, and the external forces are due to drag, gravity, buoyancy, and the load acting on each element of the system. When solving the system of ordinary differential equations that represents the motion of the elastic and flexible bodies, it was found that the fourth-order Runge-Kutta solver is the best tool in terms of performance, but it requires a finer grid than the fluid solver to make the system converge, which demands greater computing power. The coupled solver is demonstrated by simulating the interaction between the fluid, an elastic fishnet, and a set of free bodies being captured by the net as they are dragged by the fluid. The deformation of the net, as well as the wake produced in the fluid stream, is well captured by the method, without requiring the fluid solver mesh to adapt to the evolving geometry. Application of the same strategy to the simulation of elastic structures subject to the action of wind is also possible with the method presented, and one such application is currently under development.
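
A minimal version of the lumped-mass, elastic-line model integrated with fourth-order Runge-Kutta is sketched below; the spring constants, drag coefficient, and two-node configuration are illustrative assumptions, not the actual fishnet parameters or the coupling used in the solver described above.

```python
# Hedged sketch: RK4 integration of lumped masses connected by an elastic line,
# with a simple drag force from an ambient fluid velocity. Parameters are illustrative.
import numpy as np

K, L0, M, C_DRAG = 50.0, 0.1, 0.05, 0.2        # spring stiffness, rest length, mass, drag coeff.
U_FLUID = np.array([0.3, 0.0])                 # ambient fluid velocity (m/s)
G = np.array([0.0, -9.81])

def derivatives(state):
    """state = [x0, y0, x1, y1, vx0, vy0, vx1, vy1] for two lumped masses; node 0 is anchored."""
    pos, vel = state[:4].reshape(2, 2), state[4:].reshape(2, 2)
    d = pos[1] - pos[0]
    dist = np.linalg.norm(d)
    spring = K * (dist - L0) * d / dist        # elastic line force pulling node 1 toward node 0
    drag = C_DRAG * (U_FLUID - vel[1])         # linear drag toward the local fluid velocity
    acc1 = (-spring + drag) / M + G
    acc = np.vstack([np.zeros(2), acc1])       # node 0 anchored: zero acceleration
    vel = vel.copy()
    vel[0] = 0.0                               # anchored node does not move
    return np.concatenate([vel.ravel(), acc.ravel()])

def rk4_step(state, dt):
    k1 = derivatives(state)
    k2 = derivatives(state + 0.5 * dt * k1)
    k3 = derivatives(state + 0.5 * dt * k2)
    k4 = derivatives(state + dt * k3)
    return state + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)

state = np.array([0.0, 0.0, 0.1, 0.0, 0.0, 0.0, 0.0, 0.0])  # node 1 starts at rest length
for _ in range(1000):
    state = rk4_step(state, dt=1e-3)
print("node 1 position:", state[2:4])
```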

Keywords: computational fluid dynamics, discrete element method, fishnets, nested overlapping grids

Procedia PDF Downloads 398
198 Ecosystem Carbon Stocks Vary in Reference to the Models Used, Socioecological Factors and Agroforestry Practices in Central Ethiopia

Authors: Gadisa Demie, Mesele Negash, Zerihun Asrat, Lojka Bohdan

Abstract:

Deforestation and forest degradation in the tropics have led to significant carbon (C) emissions. Agroforestry (AF) is a suitable land-use option for tackling such declines in ecosystem services, including climate change mitigation. However, it is unclear how biomass models, AF practices, and socio-ecological factors determine these roles, which hinders the implementation of climate change mitigation initiatives. This study aimed to estimate the ecosystem C stocks of the studied AF practices in relation to socio-ecological variables in central Ethiopia. Out of 243 AF farms inventoried, 108 were chosen at random from three AF practices to estimate their biomass and soil organic carbon. A total of 432 soil samples were collected from the 0–30 and 30–60 cm soil depths; 216 samples each were taken for the soil organic carbon fraction (%C) and bulk density determinations. The study found that the currently developed allometric equations were the most accurate for estimating biomass C of trees growing in the landscape when compared to previous models. The study found higher overall biomass C in woodlots (165.62 Mg ha-¹) than in homegardens (134.07 Mg ha-¹) and parklands (19.98 Mg ha-¹). Conversely, overall SOC was higher for homegardens (143.88 Mg ha-¹) and lower for parklands (53.42 Mg ha-¹). The ecosystem C stock was comparable between homegardens (277.95 Mg ha-¹) and woodlots (275.44 Mg ha-¹). The study found that elevation, wealth level, AF farm age, and farm size have a positive and significant (P < 0.05) effect on overall biomass and ecosystem C stocks, while the effect of slope was non-significant (P > 0.05). Similarly, SOC increased with increasing elevation, AF farm age, and wealth status, but decreased with slope and was not significantly related to AF farm size. The study also showed that species diversity had a positive (P < 0.05) effect on overall biomass C stocks in homegardens. Overall, the study highlights that AF practices have a great potential to lock up carbon in biomass and soils; however, this potential is determined by socioecological variables. Thus, these factors should be considered in management strategies that preserve trees in agricultural landscapes in order to mitigate climate change and support the livelihoods of farmers.
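
The abstract references bulk density and the carbon fraction (%C) as the inputs for soil organic carbon stocks; the standard conversion to a per-hectare stock for a given depth layer is sketched below. The sample values are illustrative only, not measurements from the study.

```python
# Hedged sketch: converting %C and bulk density into a soil organic carbon stock (Mg/ha)
# for a depth layer, then summing layers. Sample values are illustrative only.
def soc_stock_mg_ha(carbon_percent, bulk_density_g_cm3, depth_cm):
    """SOC (Mg/ha) = BD (g/cm3) * depth (cm) * (%C / 100) * 100."""
    return bulk_density_g_cm3 * depth_cm * (carbon_percent / 100.0) * 100.0

# Two layers sampled at 0-30 cm and 30-60 cm on a hypothetical homegarden plot.
layers = [
    {"depth_cm": 30.0, "carbon_percent": 2.4, "bulk_density_g_cm3": 1.10},
    {"depth_cm": 30.0, "carbon_percent": 1.3, "bulk_density_g_cm3": 1.25},
]
total_soc = sum(soc_stock_mg_ha(l["carbon_percent"], l["bulk_density_g_cm3"], l["depth_cm"])
                for l in layers)
print(f"Profile SOC stock (0-60 cm): {total_soc:.1f} Mg/ha")
```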

Keywords: agricultural landscape, biomass, climate change, soil organic carbon

Procedia PDF Downloads 33
197 Predicting Child Attachment Style Based on Positive and Safe Parenting Components and Mediating Maternal Attachment Style in Children With ADHD

Authors: Alireza Monzavi Chaleshtari, Maryam Aliakbari

Abstract:

Objective: The aim of this study was to investigate the prediction of child attachment style based on a positive and safe combined parenting method, mediated by maternal attachment style, in children with attention deficit hyperactivity disorder (ADHD). Method: The design of the present study was descriptive-correlational, using structural equation modeling, and applied in terms of purpose. The population of this study includes all children with attention deficit hyperactivity disorder living in Chaharmahal and Bakhtiari province and their mothers. The sample includes 165 children with attention deficit hyperactivity disorder in Chaharmahal and Bakhtiari province together with their mothers, who were selected by purposive sampling based on the inclusion criteria. The obtained data were analyzed in two sections, descriptive and inferential statistics. In the descriptive statistics section, the statistical indices of mean, standard deviation, and frequency distribution tables and graphs were used. In the inferential section, according to the nature of the hypotheses and objectives of the research, the data were analyzed using Pearson correlation coefficients, the bootstrap test, and a structural equation model. Findings: The results of structural equation modeling showed that the research model fits the data and that a positive and safe combined parenting style, mediated by the mother's attachment style, has an indirect effect on the child's attachment style. A positive and safe combined parenting style also has a direct relationship with the child's attachment style and with the mother's attachment style. Conclusion: The results and findings of the present study show that there is a significant relationship between positive and safe combined parenting methods and the attachment styles of children with attention deficit hyperactivity disorder, with maternal attachment style as a mediator. Therefore, it can be expected that parents using a positive and safe combined parenting method can effectively foster secure attachment in children with attention deficit hyperactivity disorder.
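
The bootstrap test of the indirect (mediated) effect mentioned in the Method section can be illustrated with a small sketch: regress the mediator on the predictor (path a), the outcome on both (path b), and bootstrap the product a·b. The synthetic data and variable names below are placeholders, not the study's measures or software.

```python
# Hedged sketch: bootstrap confidence interval for an indirect effect a*b
# (parenting -> maternal attachment -> child attachment). Synthetic data only.
import numpy as np

rng = np.random.default_rng(7)
n = 165
parenting = rng.normal(0, 1, n)                                   # predictor (composite score)
maternal = 0.5 * parenting + rng.normal(0, 1, n)                  # mediator
child = 0.4 * maternal + 0.2 * parenting + rng.normal(0, 1, n)    # outcome

def indirect_effect(x, m, y):
    """Product of path a (x -> m) and path b (m -> y controlling for x), via least squares."""
    a = np.polyfit(x, m, 1)[0]
    Xb = np.column_stack([np.ones_like(x), m, x])
    b = np.linalg.lstsq(Xb, y, rcond=None)[0][1]
    return a * b

boot = []
for _ in range(5000):
    idx = rng.integers(0, n, n)                                   # resample cases with replacement
    boot.append(indirect_effect(parenting[idx], maternal[idx], child[idx]))
lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"indirect effect = {indirect_effect(parenting, maternal, child):.3f}, "
      f"95% bootstrap CI = [{lo:.3f}, {hi:.3f}]")
```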

Keywords: child attachment style, positive and safe parenting, maternal attachment style, ADHD

Procedia PDF Downloads 43
196 Examining the Influence of Ultrasonic Power and Frequency on Microbubble Dynamics Using Real-Time Visualization of Synchrotron X-Ray Imaging: Application to Membrane Fouling Control

Authors: Masoume Ehsani, Ning Zhu, Huu Doan, Ali Lohi, Amira Abdelrasoul

Abstract:

Membrane fouling poses severe challenges in membrane-based wastewater treatment applications. Ultrasound (US) has been considered an effective fouling remediation technique in filtration processes. Bubble cavitation in the liquid medium results from the alternating rarefaction and compression cycles during US irradiation at sufficiently high acoustic pressure. Cavitation microbubbles generated under US irradiation can cause eddy currents and turbulent flow within the medium by either oscillating or discharging energy into the system through microbubble explosion. The turbulent flow regime and shear forces created close to the membrane surface disturb the cake layer and dislodge the foulants, which in turn improves the cleaning efficiency and filtration performance. Therefore, the number, size, velocity, and oscillation pattern of the microbubbles created in the liquid medium play a crucial role in foulant detachment and permeate flux recovery. The goal of the current study is to gain an in-depth understanding of the influence of US power intensity and frequency on the dynamics and characteristics of the microbubbles generated under US irradiation. In comparison with other imaging techniques, the synchrotron in-line phase contrast imaging technique at the Canadian Light Source (CLS) allows in-situ observation and real-time visualization of microbubble dynamics. At the CLS BioMedical Imaging and Therapy (BMIT) polychromatic beamline, the effective parameters were optimized to enhance the contrast at the gas/liquid interface for accurate qualitative and quantitative analysis of bubble cavitation within the system. With the high photon flux and the high-speed camera, a high projection speed was achieved, and each projection of microbubbles in water was captured in 0.5 ms. ImageJ software was used for post-processing the raw images for detailed quantitative analyses of the microbubbles. The imaging was performed at US power levels of 50 W, 60 W, and 100 W and at US frequencies of 20 kHz, 28 kHz, and 40 kHz. Over an imaging duration of 2 seconds, the effects of US power and frequency on the average number and size of the bubbles and on the fraction of the area occupied by bubbles were analyzed. Microbubble dynamics, in terms of bubble velocity in water, was also investigated. For an increase in US power from 50 W to 100 W, the average bubble number increased from 746 to 880 and the average bubble diameter increased from 36.7 µm to 48.4 µm. In terms of the influence of US frequency, fewer bubbles were created at 20 kHz (an average of 176 bubbles rather than the 808 bubbles at 40 kHz), while the average bubble size was significantly larger than that at 40 kHz (almost seven times). The majority of bubbles were captured close to the membrane surface in the filtration unit. According to the study observations, membrane cleaning efficiency is expected to improve at higher US power and lower US frequency, due to the higher energy released to the system by increasing the number of bubbles or growing their size during oscillation (the optimum condition is expected to be 20 kHz and 100 W).
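
The ImageJ post-processing step (counting bubbles and measuring their sizes in each projection) can also be reproduced in a scripted way; the sketch below uses scikit-image thresholding and connected-component labelling on a synthetic frame. The pixel size, threshold choice, and synthetic image are assumptions for illustration, not the actual CLS processing pipeline.

```python
# Hedged sketch: counting bubbles and measuring equivalent diameters in one projection,
# analogous to the ImageJ post-processing step. The frame below is synthetic.
import numpy as np
from skimage import draw, filters, measure

# Build a synthetic 512x512 projection with a few dark circular "bubbles".
frame = np.full((512, 512), 200, dtype=np.uint8)
for (r, c, rad) in [(100, 120, 12), (300, 250, 20), (400, 400, 8)]:
    rr, cc = draw.disk((r, c), rad)
    frame[rr, cc] = 60

pixel_size_um = 6.5                      # assumed effective pixel size of the detector
thresh = filters.threshold_otsu(frame)   # automatic global threshold
bubbles = frame < thresh                 # bubbles appear darker than the water background
labels = measure.label(bubbles)
props = measure.regionprops(labels)

diameters_um = [p.equivalent_diameter * pixel_size_um for p in props]
area_fraction = bubbles.sum() / bubbles.size
print(f"bubble count: {len(props)}")
print(f"mean equivalent diameter: {np.mean(diameters_um):.1f} um")
print(f"area fraction occupied by bubbles: {area_fraction:.4f}")
```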

Keywords: bubble dynamics, cavitational bubbles, membrane fouling, ultrasonic cleaning

Procedia PDF Downloads 130