Search results for: flow rate measurement
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 13951


11311 Dissolution of Zeolite as a Sorbent in Flue Gas Desulphurization Process Using a pH Stat Apparatus

Authors: Hilary Rutto, John Kabuba

Abstract:

Sulphur dioxide is a harmful gaseous product whose release into the atmosphere needs to be minimized. This research investigates the use of zeolite as a possible additive to improve sulphur dioxide capture in the wet flue gas desulphurisation dissolution process. The work determines the effect of temperature, solid-to-liquid ratio, acid concentration and stirring speed on the leaching of zeolite using a pH stat apparatus. An atomic absorption spectrometer was used to measure the calcium ions in solution. It was found that the dissolution rate of zeolite decreased with increasing solid-to-liquid ratio and increased with increasing temperature, stirring speed and acid concentration. The activation energy for the dissolution of zeolite in hydrochloric acid was found to be 9.29 kJ/mol, indicating that product layer diffusion was the rate-limiting step.
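The activation energy reported above comes from Arrhenius analysis of dissolution rate constants measured at different temperatures. A minimal sketch of that calculation is below; the rate constants and temperatures are illustrative placeholders, not the paper's measurements:

```python
import math

def activation_energy(k1, k2, T1, T2):
    """Two-point Arrhenius fit: ln(k2/k1) = -(Ea/R)(1/T2 - 1/T1).
    Temperatures in kelvin; returns Ea in kJ/mol."""
    R = 8.314  # gas constant, J/(mol*K)
    Ea = -R * math.log(k2 / k1) / (1 / T2 - 1 / T1)
    return Ea / 1000.0

# Hypothetical dissolution rate constants at 30 and 60 degrees C (illustrative)
Ea = activation_energy(k1=0.012, k2=0.019, T1=303.0, T2=333.0)
print(round(Ea, 1))
```

A low activation energy of this order (well below ~40 kJ/mol) is the usual indicator that diffusion through the product layer, rather than surface reaction, limits the rate.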

Keywords: calcium ion, pH stat apparatus, wet flue gas desulphurization, zeolite

Procedia PDF Downloads 284
11310 Enhancing the Performance of Automatic Logistic Centers by Optimizing the Assignment of Material Flows to Workstations and Flow Racks

Authors: Sharon Hovav, Ilya Levner, Oren Nahum, Istvan Szabo

Abstract:

In modern large-scale logistic centers (e.g., big automated warehouses), complex logistic operations performed by human staff (pickers) need to be coordinated with the operations of automated facilities (robots, conveyors, cranes, lifts, flow racks, etc.). The efficiency of advanced logistic centers strongly depends on optimizing picking technologies in sync with the facility/product layout, as well as on the optimal distribution of material flows (products) in the system. The challenge is to develop a mathematical operations research (OR) tool that optimizes system cost-effectiveness. In this work, we propose a model that describes an automatic logistic center consisting of a set of workstations located on several galleries (floors), with each station containing a known number of flow racks. The requirements of each product and the working capacity of stations served by a given set of workers (pickers) are assumed to be predetermined. The goal of the model is to maximize system efficiency. The proposed model includes two echelons. The first sets the (optimal) number of workstations needed to create the total processing/logistic system, subject to picker capacities. The second echelon deals with the assignment of products to workstations and flow racks, aiming to achieve maximal throughput of picked products over the entire system given picker capacities and budget constraints. The solutions to the problems at the two echelons interact to balance the overall load in the flow racks and maximize overall efficiency. We have developed an operations research model within each echelon. In the first echelon, the problem of calculating the optimal number of workstations is formulated as a non-standard bin-packing problem with capacity constraints for each bin. The problem arising in the second echelon is presented as a constrained product-workstation-flow rack assignment problem with a non-standard min-max criterion, in which the workload maximum is taken across all workstations in the center and the exterior minimum across all possible product-workstation-flow rack assignments. The OR problems arising in each echelon are proved to be NP-hard. Consequently, we develop heuristic and approximation solution algorithms based on exploiting and improving local optima. The logistic center model considered in this work is highly dynamic and is recalculated periodically based on updated demand forecasts that reflect market trends, technological changes, seasonality, and the introduction of new items. The suggested two-echelon approach and the min-max balancing scheme are shown to work effectively on illustrative examples and real-life logistic data.
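The first-echelon problem has the flavor of classical capacitated bin packing, for which simple greedy heuristics give good approximations. The sketch below is a generic first-fit-decreasing heuristic, not the authors' algorithm; the workloads and capacity are invented for illustration:

```python
def first_fit_decreasing(loads, capacity):
    """Greedy FFD heuristic for bin packing: place each load (largest first)
    into the first bin with room, opening a new bin when none fits.
    Each bin stands in for one workstation."""
    bins = []  # each bin is a list of assigned loads
    for load in sorted(loads, reverse=True):
        for b in bins:
            if sum(b) + load <= capacity:
                b.append(load)
                break
        else:
            bins.append([load])  # open a new workstation
    return bins

# Illustrative picker workloads and a per-station capacity (not the paper's data)
stations = first_fit_decreasing([4, 8, 1, 4, 2, 1], capacity=10)
print(len(stations))  # number of workstations opened
```

FFD is a standard baseline: it never uses more than roughly 11/9 of the optimal number of bins, which makes it a reasonable cross-check for more specialized heuristics.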

Keywords: logistics center, product-workstation, assignment, maximum performance, load balancing, fast algorithm

Procedia PDF Downloads 228
11309 Regularized Euler Equations for Incompressible Two-Phase Flow Simulations

Authors: Teng Li, Kamran Mohseni

Abstract:

This paper presents an inviscid regularization technique for incompressible two-phase flow simulations. The technique is known as the observable method, reflecting the notion of observability: any feature smaller than the actual resolution (physical or numerical), e.g., the wire size in hotwire anemometry or the grid size in numerical simulations, cannot be captured or observed. Unlike most regularization techniques, which are applied at the level of the numerical discretization, the observable method is employed at the PDE level during the derivation of the equations. Difficulties in the simulation and analysis of realistic fluid flows often result from discontinuities (or near-discontinuities) in the calculated fluid properties or state. Accurately capturing these discontinuities is especially crucial when simulating flows involving shocks, turbulence or sharp interfaces. Over the past several years, the properties of this regularization technique have been investigated and shown capable of simultaneously regularizing shocks and turbulence. The observable method has been applied to direct numerical simulations of shocks and turbulence, where the discontinuities are successfully regularized and flow features are well captured. In the current paper, the observable method is extended to two-phase interfacial flows. Multiphase flows share with shocks and turbulence the nonlinear irregularity caused by the nonlinear terms in the governing equations, namely the Euler equations. In direct numerical simulations of two-phase flows, the interfaces are usually treated as a smooth transition of properties from one fluid phase to the other. However, in high Reynolds number or low viscosity flows, the nonlinear terms generate smaller scales which sharpen the interface, causing discontinuities. Many numerical methods for two-phase flows fail at high Reynolds numbers, while others depend on the numerical diffusion introduced by the spatial discretization. The observable method regularizes this nonlinear mechanism by filtering the convective terms, and this process is inviscid. The filtering effect is controlled by an observable scale, usually about one grid length. A single rising bubble and the Rayleigh-Taylor instability are studied, in particular, to examine the performance of the observable method. A pseudo-spectral method, which introduces no numerical diffusion, is used for spatial discretization, and a Total Variation Diminishing (TVD) Runge-Kutta method is applied for time integration. The observable incompressible Euler equations are solved for these two problems. In the rising bubble problem, the terminal velocity and shape of the bubble are examined and compared with experiments and other numerical results. In the Rayleigh-Taylor instability, the shape of the interface is studied for different observable scales, and the spike and bubble velocities, as well as positions (under a proper observable scale), are compared with other simulation results. The results indicate that this regularization technique can regularize the sharp interface in two-phase flow simulations.
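The core operation, filtering the convective velocity at an observable scale before forming the nonlinear term, can be illustrated in one dimension. The Helmholtz-type low-pass filter used here is an assumption for the sketch (Leray-type regularizations commonly use it); the abstract does not specify the filter kernel:

```python
import numpy as np

def observable_convective_term(u, dx, alpha):
    """Form a regularized convective term u_bar * du/dx on a periodic 1-D grid.
    The velocity is filtered in Fourier space with a Helmholtz low-pass,
    u_bar_hat = u_hat / (1 + alpha^2 k^2), where alpha is the observable scale."""
    n = u.size
    k = 2 * np.pi * np.fft.fftfreq(n, d=dx)            # wavenumbers
    u_bar = np.real(np.fft.ifft(np.fft.fft(u) / (1 + (alpha * k) ** 2)))
    dudx = np.real(np.fft.ifft(1j * k * np.fft.fft(u)))  # spectral derivative
    return u_bar * dudx

x = np.linspace(0, 2 * np.pi, 128, endpoint=False)
u = np.sin(x) + 0.3 * np.sin(20 * x)   # a smooth mode plus small-scale content
term = observable_convective_term(u, x[1] - x[0], alpha=0.2)
print(term.shape)
```

Because the filter acts on the advecting velocity rather than adding a viscous term, the regularization is inviscid: sub-observable scales are prevented from steepening the solution without dissipating the resolved field.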

Keywords: Euler equations, incompressible flow simulation, inviscid regularization technique, two-phase flow

Procedia PDF Downloads 502
11308 Lamb Waves Wireless Communication in Healthy Plates Using Coherent Demodulation

Authors: Rudy Bahouth, Farouk Benmeddour, Emmanuel Moulin, Jamal Assaad

Abstract:

Guided ultrasonic waves are used in Non-Destructive Testing (NDT) and Structural Health Monitoring (SHM) for inspection and damage detection. Recently, wireless data transmission using ultrasonic waves in solid metallic channels has gained popularity in industrial applications such as nuclear, aerospace and smart vehicles. The idea is to find a good substitute for electromagnetic waves, since they are highly attenuated near metallic components due to Faraday shielding. The proposed solution is to use ultrasonic guided waves such as Lamb waves as an information carrier, due to their capability of propagating over long distances. In addition, valuable information about the health of the structure can be extracted simultaneously. In this work, the reliable frequency bandwidth for communication is first extracted experimentally from dispersion curves. Then, an experimental platform for wireless communication using Lamb waves is described and built. The coherent demodulation algorithm used in telecommunications is then tested for Amplitude Shift Keying, On-Off Keying and Binary Phase Shift Keying modulation techniques. Signal processing parameters such as the threshold choice, the number of cycles per bit and the bit rate are optimized. Experimental results are compared based on the average Bit Error Rate. The results show high sensitivity to threshold selection for the Amplitude Shift Keying and On-Off Keying techniques, resulting in a bit rate decrease. The Binary Phase Shift Keying technique shows the highest stability and data rate among all tested modulation techniques.
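Coherent BPSK demodulation correlates each bit interval of the received waveform with a phase-locked local carrier; the sign of the correlation decides the bit, which is why BPSK needs no amplitude threshold. A minimal sketch, with carrier frequency, sample rate and noise level chosen for illustration rather than taken from the ultrasonic setup:

```python
import numpy as np

def bpsk_coherent_demod(rx, carrier, samples_per_bit):
    """Correlate each bit interval with the local carrier; sign gives the bit."""
    n_bits = rx.size // samples_per_bit
    bits = []
    for i in range(n_bits):
        s = slice(i * samples_per_bit, (i + 1) * samples_per_bit)
        corr = np.sum(rx[s] * carrier[s])  # coherent correlation
        bits.append(1 if corr > 0 else 0)
    return bits

# Illustrative parameters: 50 kHz carrier, 1 MHz sampling, 10 cycles per bit
fs, f0, spb = 1_000_000, 50_000, 200
t = np.arange(5 * spb) / fs
carrier = np.cos(2 * np.pi * f0 * t)
tx_bits = [1, 0, 1, 1, 0]
phases = np.repeat([0 if b else np.pi for b in tx_bits], spb)  # BPSK: 0 or pi
rx = np.cos(2 * np.pi * f0 * t + phases) \
     + 0.2 * np.random.default_rng(0).standard_normal(t.size)  # additive noise
print(bpsk_coherent_demod(rx, carrier, spb))
```

The sign decision explains the stability reported above: ASK and OOK must compare correlation magnitude against a tuned threshold, whereas the BPSK decision boundary sits at zero regardless of channel attenuation.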

Keywords: lamb waves communication, wireless communication, coherent demodulation, bit error rate

Procedia PDF Downloads 260
11307 A Case Study Approach to Rate the Eco-Sensitivity of Green Infrastructure Solutions

Authors: S. Saroop, D. Allopi

Abstract:

In the area of civil infrastructure, there is an urgent need to apply technologies that deliver infrastructure sustainably and cost-effectively. Civil engineering projects can have a significant impact on ecological and social systems if not correctly planned, designed and implemented, and they bear on climate change through the issues of flooding and sustainability. Poor design choices made now can leave future generations living in a climate with depleted resources and without green spaces. The objective of the research was to rate the eco-sensitivity of various greener infrastructure technologies that can be used in township infrastructure at the various stages of a project. This paper discusses the Green Township Infrastructure Design Toolkit, which is used to rate the sustainability of infrastructure service projects. Case studies were undertaken on a range of infrastructure projects to test the sensitivity of various design solutions against sustainability criteria. The green reporting tools ensure the efficient, economical and sustainable provision of infrastructure services.

Keywords: eco-efficiency, green infrastructure, green technology, infrastructure design, sustainable development

Procedia PDF Downloads 382
11306 Profit Share in Income: An Analysis of Its Influence on Macroeconomic Performance

Authors: Alain Villemeur

Abstract:

The relationships between the profit share in income on the one hand and the growth rates of output and employment on the other have been studied for 17 advanced economies since 1961. The vast majority (98%) of annual values for the profit share fall between 20% and 40%, with an average value of 33.9%. For the 17 advanced economies, Gross Domestic Product and productivity growth rates tend to fall as the profit share in income rises. For employment growth rates, the relationships are complex; nevertheless, over long periods (1961-2000), the most job-creating economies appear to be Australia, Canada, and the United States, all of which experienced a profit share close to 1/3. This raises a number of questions, not least the significance of the value 1/3 for the profit share and its role in macroeconomic fundamentals. To explain these facts, an endogenous growth model is developed. This growth and distribution model reconciles the great ideas of Kaldor (economic growth as a chain reaction), Keynes (effective demand and the marginal efficiency of capital) and Ricardo (the importance of the wage-profit distribution) in an economy facing creative destruction. A production function is obtained, depending mainly on the growth of employment, the rate of net investment and the profit share in income. In theory, we show the existence of incentives: an incentive for job creation when the profit share is less than 1/3, and an incentive for job destruction in the opposite case. Thus, increasing the profit share can boost the employment growth rate until the share reaches 1/3; beyond that point, it lowers the employment growth rate. Three key findings can be drawn from these considerations. The first is that the best GDP and productivity growth rates are obtained with a profit share of less than 1/3. The second is that maximum job growth is associated with a profit share of 1/3, given the incentives to create more jobs when the profit share is below 1/3 and to destroy more jobs otherwise. The third is the decline in performance (GDP and productivity growth rates) as the profit share increases. In conclusion, increasing the profit share in income weakens GDP growth or productivity growth as a long-term trend, contrary to the trickle-down hypothesis. The employment growth rate is maximized for a profit share in income of 1/3. All these lessons suggest macroeconomic policies that take the profit share in income into account.

Keywords: advanced countries, GDP growth, employment growth, profit share, economic policies

Procedia PDF Downloads 64
11305 Storm-Runoff Simulation Approaches for External Natural Catchments of Urban Sewer Systems

Authors: Joachim F. Sartor

Abstract:

According to German guidelines, external natural catchments are larger sub-catchments without significant portions of impervious area that possess a surface drainage system and empty into a sewer network. Basically, such catchments should be disconnected from sewer networks, particularly from combined systems. If this is not possible due to local conditions, their flow hydrographs have to be considered in the design of sewer systems, because their impact may be significant. Since there is a lack of sufficient storm-runoff measurements for such catchments, and hence of verified simulation methods to analyze their design flows, German standards give only general advice and demand special consideration of such cases. Compared to urban sub-catchments, external natural catchments exhibit greatly different flow characteristics. With increasing area, their hydrological behavior approximates that of rural catchments; e.g., sub-surface flow may prevail and lag times are comparably long. The literature offers few observed peak flow values and only simple (mostly empirical) approaches for Central Europe, though most of them are at least helpful to cross-check results achieved by simulation lacking calibration. Using storm-runoff data from five monitored rural watersheds in the west of Germany with catchment areas between 0.33 and 1.07 km², the author used multiple-event simulation to investigate three different approaches to determining the rainfall excess: the modified SCS variable runoff coefficient methods of Lutz and Zaiß, and the soil moisture model of Ostrowski. Selection criteria for storm events from continuous precipitation data were taken from the recommendations of guideline M 165, and the runoff concentration method (parallel cascades of linear reservoirs) from a DWA working report to which the author had contributed. In general, the two runoff coefficient methods showed results of sufficient accuracy for most practical purposes. The soil moisture model showed no significantly better results, at least not to a degree that would justify the additional data collection its parameter determination requires. In particular, the typical convective summer events after long dry periods, which are often decisive for sewer networks (less so for rivers), showed discrepancies between simulated and measured flow hydrographs.
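The original (unmodified) SCS curve-number relation that the Lutz and Zaiß variants build on computes rainfall excess from storm depth and a retention parameter. A minimal sketch, with an illustrative storm depth and curve number rather than values from the five monitored watersheds:

```python
def scs_runoff(P, CN, ia_ratio=0.2):
    """SCS curve-number rainfall excess (all depths in mm).
    S is the potential maximum retention, Ia the initial abstraction."""
    S = 25400.0 / CN - 254.0       # metric form of S = 1000/CN - 10 (inches)
    Ia = ia_ratio * S              # conventionally Ia = 0.2 * S
    if P <= Ia:
        return 0.0                 # all rainfall abstracted, no runoff
    return (P - Ia) ** 2 / (P - Ia + S)

# Illustrative 60 mm storm on a rural catchment with CN = 75 (assumed values)
Q = scs_runoff(P=60.0, CN=75)
print(round(Q, 1))
```

The "variable runoff coefficient" modifications cited in the abstract essentially make the effective CN respond to antecedent wetness, which is exactly where the fixed-parameter form struggles with convective events after long dry spells.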

Keywords: external natural catchments, sewer network design, storm-runoff modelling, urban drainage

Procedia PDF Downloads 151
11304 CO2 Methanation over Ru-Ni/CeO2 Catalysts

Authors: Nathalie Elia, Samer Aouad, Jane Estephane, Christophe Poupin, Bilal Nsouli, Edmond Abi Aad

Abstract:

Carbon dioxide is one of the main contributors to the greenhouse effect and hence to climate change. As a result, the methanation reaction CO2(g) + 4H2(g) → CH4(g) + 2H2O (ΔH°298 = -165 kJ/mol), also known as the Sabatier reaction, has received great interest as a process for the valorization of the greenhouse gas CO2 into methane, a hydrogen-carrier gas. The methanation of CO2 is an exothermic reaction favored at low temperature and high pressure. However, the reaction requires a high energy input to activate the very stable CO2 molecule and exhibits serious kinetic limitations. Consequently, the development of active and stable catalysts is essential to overcome these difficulties. Catalytic methanation of CO2 has been studied using catalysts containing Rh, Pd, Ru, Co and Ni on various supports. Among them, Ni-based catalysts have been extensively investigated under various conditions for their comparable methanation activity at highly improved cost-efficiency. The addition of promoters is a common strategy to increase the performance and stability of Ni catalysts. In this work, a small amount of Ru was used as a promoter for Ni catalysts supported on ceria and tested in the CO2 methanation reaction. The nickel loading was 5 wt.% and the ruthenium loading 0.5 wt.%. The catalysts were prepared by a successive impregnation method using Ni(NO3)2.6H2O and Ru(NO)(NO3)3 as precursors. The calcined support was impregnated with Ni(NO3)2.6H2O, dried, calcined at 600°C for 4 h, and afterwards impregnated with Ru(NO)(NO3)3. The resulting solid was dried and calcined at 600°C for 4 h. Supported monometallic catalysts were prepared likewise. The prepared solids Ru(0.5%)/CeO2, Ni(5%)/CeO2 and Ru(0.5%)-Ni(5%)/CeO2 were then reduced prior to the catalytic test under a flow of 50% H2/Ar (50 mL/min) for 4 h at 500°C. Finally, their catalytic performances were evaluated in the CO2 methanation reaction in the temperature range 100-350°C, using a gaseous mixture of CO2 (10%) and H2 (40%) balanced in Ar at a total flow rate of 100 mL/min. The effect of pressure on CO2 methanation was studied by varying the pressure between 1 and 10 bar. The various catalysts showed negligible CO2 conversion at temperatures below 250°C, and conversion increased with increasing reaction temperature. The addition of Ru as a promoter to Ni/CeO2 improved the CO2 methanation: the CO2 conversion at 350°C and 1 bar increased from 15% to 70%. Increasing the pressure from 1 to 5 bar raised the CO2 conversion from 70% to 87%, and from 5 to 10 bar raised it further from 87% to 91%. The Ru-Ni catalyst thus showed excellent catalytic performance in the methanation of carbon dioxide with respect to the Ni catalyst: the addition of Ru remarkably improved the catalytic activity, and pressure plays an important role in improving CO2 methanation.
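The conversion figures quoted above follow from standard definitions based on inlet and outlet molar flows. A short sketch with invented flow values (the abstract reports only the resulting percentages):

```python
def co2_conversion(co2_in, co2_out):
    """Fractional CO2 conversion from inlet/outlet molar flow rates."""
    return (co2_in - co2_out) / co2_in

def ch4_selectivity(ch4_out, co2_in, co2_out):
    """CH4 selectivity: moles of CH4 formed per mole of CO2 converted."""
    return ch4_out / (co2_in - co2_out)

# Illustrative molar flows (mmol/min); not the paper's measurements
X = co2_conversion(co2_in=4.46, co2_out=1.34)
S = ch4_selectivity(ch4_out=2.98, co2_in=4.46, co2_out=1.34)
print(round(X * 100), round(S * 100))  # conversion %, selectivity %
```

Selectivity matters alongside conversion because reverse water-gas shift can divert converted CO2 to CO rather than CH4, especially at the higher end of the temperature range.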

Keywords: CO2, methanation, nickel, ruthenium

Procedia PDF Downloads 222
11303 Influence of Intra-Yarn Permeability on Mesoscale Permeability of Plain Weave and 3D Fabrics

Authors: Debabrata Adhikari, Mikhail Matveev, Louise Brown, Andy Long, Jan Kočí

Abstract:

A good understanding of the mesoscale permeability of complex architectures in fibrous porous preforms is of particular interest in order to achieve efficient and cost-effective resin impregnation in liquid composite molding (LCM). Fabrics used in structural reinforcements are typically woven or stitched. 3D fabric reinforcement is of particular interest because of the versatility of the weaving pattern, with binder yarn and in-plane yarn arrangements, to manufacture thick composite parts, overcome delamination limitations, improve toughness, etc. To predict the permeability based on the available pore space between the inter-yarn channels, unit-cell-based computational fluid dynamics models have used the Stokes-Darcy formulation. Typically, the preform consists of an arrangement of yarns with spacing on the order of millimetres, wherein each yarn consists of thousands of filaments with spacing on the order of micrometres. The fluid flow during infusion exchanges mass between the intra- and inter-yarn channels, meaning there is no dead end of flow between the mesopores in the inter-yarn space and the micropores within the yarns. Several studies have employed the Brinkman equation to account for flow through dual-scale porosity reinforcements when estimating their permeability. Furthermore, to reduce the computational effort of dual-scale flow, a scale separation criterion based on the ratio of yarn permeability to yarn spacing has been proposed to delimit the dual-scale and negligible micro-scale flow regimes for the prediction of mesoscale permeability. In the present work, the influence of intra-yarn permeability on the mesoscale permeability has been investigated through a systematic study of weft and warp yarn spacing in plain weaves, as well as of the binder yarn position and the number of in-plane yarn layers in 3D woven fabrics. The permeability tensor has been estimated using an OpenFOAM-based model for the various weave patterns, with idealized yarn geometry implemented using the open-source software TexGen. Additionally, a scale separation criterion has been established based on various configurations of yarn permeability for the 3D fabric, with both isotropic and anisotropic yarn permeabilities from Gebart's model. It was observed that the mesoscale permeability Kxx varies within 30% when isotropic porous yarns are considered for a 3D fabric with binder yarn. The permeability model developed in this study will be used for multi-objective optimization of the preform mesoscale geometry in terms of yarn spacing, binder pattern, and number of layers, with the aim of obtaining improved permeability and reduced void content during the LCM process.
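Gebart's model, mentioned above as the source of the intra-yarn permeabilities, gives closed-form expressions for flow along and across an idealized regular fibre array. The sketch below uses the hexagonal-packing constants as commonly stated in the literature (treat the exact constants as an assumption), with an invented filament radius:

```python
import math

def gebart_permeability_hex(R, Vf):
    """Gebart's model for hexagonally packed fibres of radius R (m) at fibre
    volume fraction Vf; returns (K_parallel, K_perpendicular) in m^2."""
    Vf_max = math.pi / (2 * math.sqrt(3))          # max hexagonal packing
    # Along the fibres: Kozeny-like form with c = 57 for hexagonal packing
    K_par = (8 * R**2 / 57.0) * (1 - Vf) ** 3 / Vf**2
    # Across the fibres: channel-flow form, vanishes as Vf -> Vf_max
    C1 = 16.0 / (9.0 * math.pi * math.sqrt(6))
    K_perp = C1 * (math.sqrt(Vf_max / Vf) - 1) ** 2.5 * R**2
    return K_par, K_perp

# Illustrative filament radius (3.5 um) and intra-yarn fibre fraction of 0.6
K_par, K_perp = gebart_permeability_hex(R=3.5e-6, Vf=0.6)
print(K_par > K_perp)  # along-fibre permeability exceeds the transverse one
```

The strong anisotropy between the two components is why the study distinguishes isotropic from anisotropic yarn permeability when testing the scale separation criterion.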

Keywords: permeability, 3D fabric, dual-scale flow, liquid composite molding

Procedia PDF Downloads 96
11302 River Offtake Management Using Mathematical Modelling Tool: A Case Study of the Gorai River, Bangladesh

Authors: Sarwat Jahan, Asker Rajin Rahman

Abstract:

Management of the offtake of any fluvial river is very sensitive in terms of long-term sustainability, since water flow and sediment transport vary widely throughout a hydrological year. The Gorai River is a major distributary of the Ganges River in Bangladesh and is considered a primary source of fresh water for the south-west part of the country. Every year, significant siltation at the Gorai offtake disconnects it from the Ganges during the dry season. As a result, the socio-economic and environmental conditions of the downstream areas have been deteriorating for decades. To improve the overall situation of the Gorai offtake and the areas that depend on it, a study was conducted by the Institute of Water Modelling, Bangladesh, in 2022. Simulations with the mathematical morphological modelling tool MIKE 21C of DHI Water & Environment, Denmark, revealed the need for dredging/river training structures at the Gorai offtake to ensure significant dry-season flow downstream. The dry-season flow is found to increase significantly with the proposed river interventions, which also improves environmental conditions in terms of salinity in the south-west zone of the country. This paper summarizes the primary findings from the developed mathematical model for improving the existing condition of the Gorai River.

Keywords: Gorai river, mathematical modelling, offtake, siltation, salinity

Procedia PDF Downloads 97
11301 Possible Reasons for and Consequences of Generalizing Subgroup-Based Measurement Results to Populations: Based on Research Studies Conducted by Elementary Teachers in South Korea

Authors: Jaejun Jong

Abstract:

Many teachers in South Korea conduct research to improve the quality of their instruction. Unfortunately, many generalize measurement results based on one subgroup to other students or to the entire population, which can cause problems. This study aims to identify examples of problems that result from generalizing measurements based on one subgroup to an entire population or to another group. Such a study is needed because teachers' instruction and class quality significantly affect the overall quality of education, yet overgeneralization can make the quality of teacher-conducted research questionable. The data in this study were gathered from 145 sixth-grade elementary school students in South Korea. The results showed that students in different classes can differ significantly in various ways; thus, generalizing subgroup results to an entire population can produce erroneous student predictions and evaluations, which in turn lead to inappropriate instruction plans. This suggests that identifying the reasons for such overgeneralization can significantly improve the quality of education.
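The hazard can be made concrete with a toy simulation: when classes have different underlying effects, the mean of any one class is a biased estimate of the population mean. The class sizes match the study's sample of 145 pupils, but the class effects and score distribution are invented for illustration:

```python
import random

random.seed(1)

# Simulate 145 pupils in five classes of 29 whose class effects differ
# (the effect sizes are an assumption, not the study's data)
classes = []
for effect in [-6, -3, 0, 3, 6]:
    classes.append([random.gauss(70 + effect, 8) for _ in range(29)])

population = [score for c in classes for score in c]
pop_mean = sum(population) / len(population)
class_means = [sum(c) / len(c) for c in classes]

# A single class mean used as a "population" estimate can be far off
print(round(pop_mean, 1), [round(m, 1) for m in class_means])
```

A teacher measuring only the first or last class would mis-estimate the population mean by several points, which is the overgeneralization problem the abstract describes.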

Keywords: generalization, measurement, research methodology, teacher education

Procedia PDF Downloads 93
11300 Air Flows along Perforated Metal Plates with the Heat Transfer

Authors: Karel Frana, Sylvio Simon

Abstract:

The objective of this paper is a numerical study of heat transfer between perforated metal plates and the surrounding air flow. Different perforation structures can nowadays be found in various industrial products. Besides improving mechanical properties, perforations can intensify heat transfer as well. The heat transfer coefficient depends on a wide range of parameters, such as the type of perforation, its size and shape, the flow properties of the surrounding air, etc. The paper focuses on three different perforation structures that were investigated from the production point of view in previous studies. To determine the heat transfer coefficients and Nusselt numbers, a numerical simulation approach was adopted. The calculations were performed using the OpenFOAM software, considering three-dimensional, unsteady, turbulent and incompressible air flow around the perforated metal plate.
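Post-processing a simulation into the quantities named above reduces to two standard definitions. The wall heat flux, temperatures and plate length below are assumed placeholder values, not results from the study:

```python
def heat_transfer_coefficient(q_wall, T_wall, T_air):
    """Convective coefficient h = q'' / (T_wall - T_air) in W/(m^2 K),
    from wall heat flux q'' (W/m^2) and wall/free-stream temperatures (K)."""
    return q_wall / (T_wall - T_air)

def nusselt(h, L, k_air=0.026):
    """Nusselt number Nu = h L / k for characteristic length L (m);
    k_air ~ 0.026 W/(m K) near room temperature."""
    return h * L / k_air

# Illustrative post-processed values from a plate simulation (assumed numbers)
h = heat_transfer_coefficient(q_wall=450.0, T_wall=330.0, T_air=300.0)
print(round(h, 1), round(nusselt(h, L=0.05), 1))
```

Reporting Nu rather than h lets the three perforation structures be compared independently of plate size and air properties.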

Keywords: perforations, convective heat transfers, turbulent flows, numerical simulations

Procedia PDF Downloads 580
11299 UWB Channel Estimation Using an Efficient Sub-Nyquist Sampling Scheme

Authors: Yaacoub Tina, Youssef Roua, Radoi Emanuel, Burel Gilles

Abstract:

Recently, low-complexity sub-Nyquist sampling schemes based on Finite Rate of Innovation (FRI) theory have been introduced to sample parametric signals at minimum rates. The multichannel modulating waveforms (MCMW) scheme is one such efficient scheme, in which the received signal is mixed with an appropriate set of arbitrary waveforms, integrated, and sampled at rates far below the Nyquist rate. In this paper, the MCMW scheme is adapted to the special case of ultra-wideband (UWB) channel estimation, characterized by dense multipath. First, an appropriate structure, which accounts for the bandpass spectrum of UWB signals, is defined. Then, a novel approach to decrease the number of processing channels and reduce the complexity of the sampling scheme is presented. Finally, the proposed concepts are validated by simulation results, obtained with real filters, in the framework of a coherent Rake receiver.
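The mix-integrate-sample front end of MCMW can be sketched as a bank of inner products: each channel multiplies the received signal by its modulating waveform and integrates over the observation window, yielding one low-rate measurement per channel. The waveform choice (random +/-1 chips) and sizes below are assumptions for illustration, not the paper's design:

```python
import numpy as np

def mcmw_measurements(rx, waveforms, dt):
    """One measurement per channel: mix rx with each modulating waveform and
    integrate over the observation window (rectangle rule)."""
    return np.array([np.sum(rx * w) * dt for w in waveforms])

rng = np.random.default_rng(0)
n = 4096                        # dense Nyquist-rate grid (simulation only)
dt = 1e-10                      # 10 GS/s grid spacing, typical UWB scale
rx = rng.standard_normal(n)     # stand-in for a received UWB multipath signal
waveforms = rng.choice([-1.0, 1.0], size=(8, n))   # 8 channels of +/-1 chips
y = mcmw_measurements(rx, waveforms, dt)
print(y.shape)
```

Each channel thus contributes a single sample per window regardless of the signal bandwidth; FRI recovery then estimates the multipath delays and amplitudes from these few measurements, which is what makes reducing the channel count the key complexity lever.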

Keywords: coherent rake receiver, finite rate of innovation, sub-nyquist sampling, ultra wideband

Procedia PDF Downloads 256
11298 Robust Numerical Solution for Flow Problems

Authors: Gregor Kosec

Abstract:

A simple and robust numerical approach for solving flow problems is presented, in which the physical fields involved are represented through local approximation functions, i.e., the considered field is approximated over a local support domain. The approximation functions are then used to evaluate the partial differential operators. The type of approximation, the size of the support domain, and the type and number of basis functions can be general. The solution procedure is formulated entirely through local computational operations. Besides the local numerical method itself, the pressure-velocity coupling is also performed locally, retaining the correct temporal transient. The complete locality of the introduced numerical scheme has several beneficial effects. One of the most attractive is its simplicity, since it can be understood as a generalized, but much more powerful, Finite Difference Method. The presented methodology offers many possibilities for treating challenging cases, e.g., nodal adaptivity to address regions with sharp discontinuities, or p-adaptivity to treat obscure anomalies in the physical field. The trade-off between stability, computational complexity and accuracy can be regulated by changing the number of support nodes, etc. All these features can be controlled on the fly during the simulation. The methodology is relatively simple to understand and implement, which makes it a potentially powerful tool for engineering simulations. Besides simplicity and straightforward implementation, there are many opportunities to fully exploit modern computer architectures through different parallel computing strategies. The performance of the method is demonstrated on the lid-driven cavity problem, the backward-facing step problem, and the de Vahl Davis natural convection test, extended also to a low-Prandtl-number fluid and Darcy porous flow. Results are presented in terms of velocity profiles, convergence plots, and stability analyses, and all cases are compared against published data.
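The core idea of evaluating differential operators from a local approximation over scattered support nodes can be shown in one dimension: fit a low-order polynomial to the support by least squares and differentiate the fit. This is a generic meshless-style sketch (monomial basis, no weighting), not the paper's specific scheme:

```python
import numpy as np

def local_derivative(x_nodes, u_nodes, x0, degree=2):
    """Approximate du/dx at x0 from scattered support nodes: fit a polynomial
    in (x - x0) by least squares, then read off the linear coefficient."""
    A = np.vander(x_nodes - x0, degree + 1, increasing=True)  # 1, dx, dx^2, ...
    coeffs, *_ = np.linalg.lstsq(A, u_nodes, rcond=None)
    return coeffs[1]  # derivative of the local fit at x0

# Five scattered support nodes around x0 = 0.5, sampling u(x) = x^2
x = np.array([0.30, 0.42, 0.55, 0.61, 0.70])
u = x**2
print(round(local_derivative(x, u, 0.5), 6))  # exact du/dx = 2 * 0.5 = 1.0
```

On a uniform support this construction reproduces classical finite-difference stencils, which is the sense in which the method generalizes finite differences: the support size, basis degree, and node placement all become free parameters.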

Keywords: fluid flow, meshless, low Pr problem, natural convection

Procedia PDF Downloads 233
11297 A Case Comparative Study of Infant Mortality Rate in North-West Nigeria

Authors: G. I. Onwuka, A. Danbaba, S. U. Gulumbe

Abstract:

This study investigated the infant mortality rate observed at a general hospital in Kaduna South, Kaduna State, North-West Nigeria, and examined the causes of infant mortality. The data used for the analysis were collected at the statistics unit of the hospital. The analysis was carried out using the multiple linear regression technique and showed a linear relationship between the dependent variable (deaths) and the independent variables (malaria, measles, anaemia, and coronary heart disease). The resulting model also revealed that a unit increase in each of these diseases would result in a unit increase in recorded deaths, and 98.7% of the total variation in mortality is explained by the model. The highest mortality was recorded in July 2005 and the lowest in October 2009. Recommendations were made based on the results of the study.
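The regression and the quoted R² can be reproduced in form with ordinary least squares; the disease counts below are synthetic stand-ins, not the hospital's records:

```python
import numpy as np

def fit_mlr(X, y):
    """Ordinary least squares with an intercept; returns (coefficients, R^2)."""
    A = np.column_stack([np.ones(len(y)), X])  # prepend intercept column
    beta, *_ = np.linalg.lstsq(A, y, rcond=None)
    resid = y - A @ beta
    r2 = 1 - (resid @ resid) / ((y - y.mean()) @ (y - y.mean()))
    return beta, r2

# Synthetic monthly case counts (malaria, measles, anaemia, CHD) and deaths,
# generated so each disease contributes roughly one death per case
rng = np.random.default_rng(42)
X = rng.poisson([30, 10, 15, 5], size=(24, 4)).astype(float)
y = X @ np.array([1.0, 1.0, 1.0, 1.0]) + rng.normal(0, 1.5, 24)
beta, r2 = fit_mlr(X, y)
print(beta.shape, round(r2, 2))
```

With counted monthly deaths, a Poisson or negative-binomial regression would usually be preferred over ordinary least squares, and the mention of serial correlation in the keywords suggests the residuals should also be checked with a Durbin-Watson-type test.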

Keywords: infant mortality rate, multiple linear regression, diseases, serial correlation

Procedia PDF Downloads 331
11296 Computational Fluid Dynamics Simulation of Reservoir for Dwell Time Prediction

Authors: Nitin Dewangan, Nitin Kattula, Megha Anawat

Abstract:

The hydraulic reservoir is a key component of mobile construction vehicles; most off-road earth-moving construction machinery requires large side-mounted hydraulic reservoirs. Reservoir geometry is typically highly non-uniform, because designers exploit the space available under the vehicle. Short of physical testing, virtual simulation is the only way to assess the space utilization of the reservoir by oil and the validity of the design. Computational fluid dynamics (CFD) helps predict reservoir space utilization through vortex mapping, path-line plots and dwell time prediction, to make sure the design is valid and efficient for the vehicle. The dwell time acceptance criterion for an effective reservoir design is 15 seconds. This paper describes a hydraulic reservoir simulation carried out with the CFD tool AcuSolve using an automated meshing strategy. Free-surface flow with a moving reference mesh is used to define the oil level inside the reservoir. The first baseline design did not meet the acceptance criterion, its dwell time falling below 15 seconds, because the oil entry and exit ports were very close together. CFD was used to relocate the ports so that the oil dwell time in the reservoir increases, and also guided a baffle design for effective space utilization. The final design proposed through CFD analysis was used for physical validation on the machine.

Keywords: reservoir, turbulence model, transient model, level set, free-surface flow, moving frame of reference

Procedia PDF Downloads 152
11295 Steady and Oscillatory States of Swirling Flows under an Axial Magnetic Field

Authors: Brahim Mahfoud, Rachid Bessaïh

Abstract:

In this paper, steady and oscillatory flows with heat transfer subjected to an axial magnetic field are studied numerically. The governing Navier-Stokes, energy, and potential equations, along with appropriate boundary conditions, are solved using the finite-volume method. The flow and temperature fields are presented by stream function and isotherms, respectively. The flow between counter-rotating end disks is very unstable and reveals a great richness of structures. Results are presented for various values of the Hartmann number, Ha = 5, 10, 20, and 30, and of the Richardson number, Ri = 0, 0.5, 1, 2, and 4, in order to see their effects on the value of the critical Reynolds number, Recr. Stability diagrams are established from the numerical results of this investigation. These diagrams show how Recr depends on Ha for the various values of Ri.

Keywords: swirling, counter-rotating end disks, magnetic field, oscillatory, cylinder

Procedia PDF Downloads 324
11294 Energy Saving and Performance Evaluation of an Air Handling Unit Integrated with a Membrane Energy Exchanger for Cold Climates

Authors: Peng Liu, Maria Justo Alonso, Hans Martin Mathisen

Abstract:

A theoretical model is developed to evaluate the performance and energy-saving potential of an air handling unit (AHU) integrated with a membrane energy exchanger (MEE) in cold climates. The recovered sensible and latent heat, the fan preheating used for frost prevention, and the energy consumed by the heating coil downstream of the ventilator are compared for the AHU combined with a heat exchanger and with an energy exchanger, respectively. A coefficient of performance of the air handling unit is introduced and applied to assess the energy use of the AHU in cold climates. The analytical results indicate that the preheating coil before the exchanger and the heating coil after it can be downsized, since the power required to preheat and condition the air is lower when the MEE is integrated with the AHU than when a heat exchanger is used. At the same time, a superior ratio of energy recovered (RER) is obtained from an AHU built with a counter-flow MEE. The AHU with a sensible-only heat exchanger has a noticeably low RER, around 1, at low outdoor air temperatures, where the maximum energy rate is needed to condition the severely cold and dry air.

Keywords: membrane energy exchanger, cold climate, energy efficient building, HVAC

Procedia PDF Downloads 326
11293 Simple Ecofriendly Cyclodextrin-Surfactant Modified UHPLC Method for Quantification of Multivitamins in Pharmaceutical and Food Samples

Authors: Hassan M. Albishri, Abdullah Almalawi, Deia Abd El-Hady

Abstract:

A simple and eco-friendly cyclodextrin-surfactant-modified UHPLC (CDS-UHPLC) method for the rapid, sensitive and simultaneous determination of water-soluble vitamins such as ascorbic acid, pyridoxine hydrochloride and thiamine hydrochloride in commercial pharmaceuticals and milk samples has been developed for the first time. Several effective chromatographic parameters were varied in a systematic way. Adequate results were achieved with a mixture of β-cyclodextrin (β-CD) and a cationic surfactant under acidic conditions as an eco-friendly isocratic mobile phase at a 0.02 mL/min flow rate. The proposed CDS-UHPLC method has been validated for the quantitative determination of multivitamins within 8 min in food and pharmaceutical samples. The method showed excellent linearity for the analytes over a wide range of 10-1000 ng/µL. The repeatability and reproducibility of the data were about 2.14 and 4.69 RSD%, respectively. The limits of detection (LODs) of the analytes ranged between 0.86 and 5.6 ng/µL, with recoveries of 81.8-115.8% in tablet and milk samples. This first CDS-UHPLC method could find wide application in the precise analysis of multivitamins in complicated matrices.
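LODs of the kind reported above are commonly estimated from the calibration curve as LOD = 3.3 σ / S, where σ is the residual standard deviation of the calibration and S its slope. The abstract does not give the calibration data, so the points below are invented to show the computation only.

```python
# LOD estimate from a calibration line (LOD = 3.3 * sigma / slope).
# Concentrations and peak areas are hypothetical.
import statistics

conc = [10, 50, 100, 500, 1000]          # ng/uL
area = [105, 510, 1020, 5050, 10100]     # detector response (made up)

xm, ym = statistics.fmean(conc), statistics.fmean(area)
slope = (sum((x - xm) * (y - ym) for x, y in zip(conc, area))
         / sum((x - xm) ** 2 for x in conc))
intercept = ym - slope * xm
resid = [y - (slope * x + intercept) for x, y in zip(conc, area)]
sigma = statistics.stdev(resid)          # residual standard deviation
lod = 3.3 * sigma / slope                # ng/uL
print(round(lod, 2))
```

Replacing 3.3 with 10 in the same formula gives the corresponding limit of quantification (LOQ).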

Keywords: ecofriendly, cyclodextrin-surfactant, multivitamins, UHPLC

Procedia PDF Downloads 273
11292 Study and Analysis of the Factors Affecting Road Safety Using Decision Tree Algorithms

Authors: Naina Mahajan, Bikram Pal Kaur

Abstract:

The purpose of traffic accident analysis is to find the possible causes of an accident. Road accidents cannot be totally prevented, but suitable traffic engineering and management can reduce the accident rate to a certain extent. This paper discusses the classification techniques C4.5 and ID3 using the WEKA data mining tool, applied to an NH (national highway) dataset. The C4.5 and ID3 techniques give the best results, with high accuracy, low computation time and a low error rate.
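At the core of both ID3 and C4.5 is the choice of the split attribute by information gain (C4.5 normalizes it to a gain ratio). A minimal illustration on a toy accident dataset follows; the attribute and class values are invented, not taken from the NH dataset.

```python
# Information gain, the splitting criterion behind ID3/C4.5.
# Toy records: (road_surface, accident_severity) -- values invented.
from collections import Counter
from math import log2

def entropy(labels):
    n = len(labels)
    return -sum((c / n) * log2(c / n) for c in Counter(labels).values())

def information_gain(records, attr_index):
    labels = [r[-1] for r in records]
    n = len(records)
    by_value = {}
    for r in records:
        by_value.setdefault(r[attr_index], []).append(r[-1])
    remainder = sum(len(v) / n * entropy(v) for v in by_value.values())
    return entropy(labels) - remainder

data = [
    ("wet", "severe"), ("wet", "severe"), ("wet", "minor"),
    ("dry", "minor"), ("dry", "minor"), ("dry", "minor"),
]
print(round(information_gain(data, 0), 3))  # gain from splitting on surface
```

A tree builder applies this recursively: at each node it splits on the attribute with the highest gain (or gain ratio, for C4.5) until the records are pure.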

Keywords: C4.5, ID3, NH(National highway), WEKA data mining tool

Procedia PDF Downloads 338
11291 EcoLife and Greed Index Measurement: An Alternative Tool to Promote Sustainable Communities and Eco-Justice

Authors: Louk Aourelien Andrianos, Edward Dommen, Athena Peralta

Abstract:

Greed, as epitomized by the overconsumption of natural resources, is at the root of the ecological destruction and unsustainability of modern societies. Present economies rely on unrestricted structural greed, which fuels unlimited economic growth, overconsumption, and individualistic competitive behavior. Structural greed undermines the life-support system on earth and threatens ecological integrity, social justice and peace. The World Council of Churches (WCC) has developed a program on ecological and economic justice (EEJ) with the aim of promoting an economy of life, where the economy is embedded in society and society in ecology. This paper analyzes and assesses the economy of life (EcoLife) by offering an empirical tool to measure and monitor the root causes and effects of unsustainability resulting from human greed at the global, national, institutional and individual levels. This holistic approach is based on the integrity of ecology and economy in a society founded on justice. The paper discusses critical questions such as 'what is an economy of life' and 'how can it be measured and protected from the effects of greed'. A model called GLIMS, which stands for Greed Lines and Indices Measurement System, is used to clarify the concept of greed and to help measure the economy-of-life index by fuzzy logic reasoning. The inputs of the model are statistical indicators of natural resource consumption, financial realities, economic performance, social welfare, and ethical and political facts. The outputs are concrete measures of three primary greed indices, ecological, economic and socio-political (ECOL-GI, ECON-GI, SOCI-GI), and one overall multidimensional economy-of-life index (EcoLife-I). EcoLife measurement aims to build awareness of an economy of life and to address the effects of greed in their systemic and structural aspects. It is a tool for ethical diagnosis and policy making.
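The fuzzy-logic aggregation GLIMS describes can be sketched as mapping normalized indicators onto [0, 1] "greed" memberships and combining them into dimension indices and one overall index. The indicator names, thresholds and values below are hypothetical placeholders, not the WCC's actual specification.

```python
# Minimal fuzzy aggregation sketch for a GLIMS-style index.
# Indicators, thresholds and weights are invented for illustration.

def membership(value, lower, upper):
    """Piecewise-linear fuzzy membership: 0 below lower, 1 above upper."""
    if value <= lower:
        return 0.0
    if value >= upper:
        return 1.0
    return (value - lower) / (upper - lower)

# (indicator value, benign threshold, greedy threshold) per dimension
ecological = [(2.8, 1.0, 5.0)]    # e.g. an ecological-footprint ratio
economic   = [(0.45, 0.25, 0.6)]  # e.g. an inequality measure
social     = [(0.3, 0.0, 1.0)]    # e.g. a governance indicator

def greed_index(indicators):
    return sum(membership(*ind) for ind in indicators) / len(indicators)

ecol_gi, econ_gi, soci_gi = map(greed_index, (ecological, economic, social))
ecolife_i = (ecol_gi + econ_gi + soci_gi) / 3  # overall index
print(round(ecol_gi, 2), round(econ_gi, 2), round(soci_gi, 2), round(ecolife_i, 2))
```

Real fuzzy-logic reasoning would add rule bases and defuzzification on top of these memberships; this sketch only shows the indicator-to-index direction of the computation.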

Keywords: greed line, sustainability indicators, fuzzy logic, eco-justice, World Council of Churches (WCC)

Procedia PDF Downloads 320
11290 High Level Synthesis of Canny Edge Detection Algorithm on Zynq Platform

Authors: Hanaa M. Abdelgawad, Mona Safar, Ayman M. Wahba

Abstract:

Real-time image and video processing is in demand in many computer vision applications, e.g., video surveillance, traffic management and medical imaging. These video applications require high computational power, so the optimal solution is collaboration between the CPU and hardware accelerators. In this paper, a Canny edge detection hardware accelerator is proposed. Canny edge detection is one of the common blocks in the pre-processing phase of image and video processing pipelines. The presented approach offloads the Canny edge detection algorithm from the processing system (PS) to the programmable logic (PL), taking advantage of the High-Level Synthesis (HLS) tool flow to accelerate the implementation on the Zynq platform. The resulting implementation enables up to a 100x performance improvement through hardware acceleration. CPU utilization drops and the frame rate reaches 60 fps on a 1080p full-HD input video stream.
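As a software reference for what the accelerator offloads, two of the Canny stages, Sobel gradient computation and thresholding, can be sketched in a few lines. A full Canny pipeline adds Gaussian smoothing, non-maximum suppression and hysteresis; this pure-Python sketch shows only the gradient-magnitude core, not the paper's HLS implementation.

```python
# Sobel gradients + magnitude threshold: the core of an edge detector.

SOBEL_X = [[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]]
SOBEL_Y = [[-1, -2, -1], [0, 0, 0], [1, 2, 1]]

def convolve_at(img, ker, r, c):
    return sum(ker[i][j] * img[r + i - 1][c + j - 1]
               for i in range(3) for j in range(3))

def edge_map(img, threshold):
    """Binary edge map from gradient magnitude (interior pixels only)."""
    h, w = len(img), len(img[0])
    out = [[0] * w for _ in range(h)]
    for r in range(1, h - 1):
        for c in range(1, w - 1):
            gx = convolve_at(img, SOBEL_X, r, c)
            gy = convolve_at(img, SOBEL_Y, r, c)
            if (gx * gx + gy * gy) ** 0.5 >= threshold:
                out[r][c] = 1
    return out

# A vertical step edge: left half dark, right half bright.
img = [[0, 0, 0, 255, 255, 255] for _ in range(6)]
edges = edge_map(img, threshold=100)
print(edges[3])  # edge pixels straddle the step in columns 2 and 3
```

The per-pixel independence of this loop nest is what makes the algorithm a good HLS candidate: the rows can be pipelined and the two convolutions unrolled in programmable logic.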

Keywords: high level synthesis, canny edge detection, hardware accelerators, computer vision

Procedia PDF Downloads 478
11289 Parametric Influence and Optimization of Wire-EDM on Oil Hardened Non-Shrinking Steel

Authors: Nixon Kuruvila, H. V. Ravindra

Abstract:

Wire-cut electro discharge machining (WEDM) is a special form of the conventional EDM process in which the electrode is a continuously moving conductive wire. The present study aims at determining the parametric influence and optimum process parameters of wire-EDM using Taguchi's technique and a genetic algorithm. The variation of the performance parameters with the machining parameters was mathematically modeled by the regression analysis method. The objective functions are dimensional accuracy (DA) and material removal rate (MRR). Experiments were designed as per Taguchi's L16 orthogonal array (OA), in which pulse-on duration, pulse-off duration, current, bed speed and flushing rate were considered the important input parameters. The matrix experiments were conducted on oil-hardened non-shrinking steel (OHNS) of 40 mm thickness. The results of the study reveal that, among the machining parameters, a lower pulse-off duration is preferable for good overall performance. Regarding MRR, OHNS is best eroded with a medium pulse-off duration and a higher flushing rate. Finally, a validation exercise was performed with the optimum levels of the process parameters. The results confirm the efficiency of the approach employed for the optimization of process parameters in this study.
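In a Taguchi analysis, the runs of an L16 array are typically ranked by their signal-to-noise (S/N) ratio; for a response to be maximized, such as MRR, the "larger-the-better" form applies. The replicate values below are hypothetical, not the study's measurements.

```python
# Taguchi "larger-the-better" S/N ratio: -10 log10( (1/n) * sum(1/y^2) ).
# Replicate MRR values per run are made up for illustration.
from math import log10

def sn_larger_the_better(values):
    """S/N ratio in dB for a response to be maximized."""
    n = len(values)
    return -10.0 * log10(sum(1.0 / (y * y) for y in values) / n)

run_a = [4.2, 4.5, 4.1]   # hypothetical MRR replicates, run A
run_b = [5.8, 6.1, 5.9]   # hypothetical MRR replicates, run B

print(round(sn_larger_the_better(run_a), 2),
      round(sn_larger_the_better(run_b), 2))
```

Averaging these S/N ratios per factor level across the orthogonal array gives the level effects from which the optimum parameter combination is read off; "smaller-the-better" and "nominal-the-best" variants serve responses like dimensional error.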

Keywords: dimensional accuracy (DA), regression analysis (RA), Taguchi method (TM), volumetric material removal rate (VMRR)

Procedia PDF Downloads 409
11288 Comparing the Apparent Error Rate of Gender Specifying from Human Skeletal Remains by Using Classification and Cluster Methods

Authors: Jularat Chumnaul

Abstract:

In forensic science, corpses from homicides vary widely; they may be complete or incomplete, depending on the cause of death or the form of homicide. For example, some corpses are cut into pieces, some are camouflaged by dumping into a river, some are buried, and some are burned to destroy the evidence. If a corpse is incomplete, personal identification becomes difficult because some tissues and bones are destroyed. To determine the gender of a corpse from skeletal remains, the most precise method is DNA identification. However, this method is costly and time-consuming, so other identification techniques are used instead. The first widely used technique is examination of bone features. In general, evidence from a corpse, such as pieces of bone, especially the skull and pelvis, can be used to identify its gender. To use this technique, forensic scientists require observational skill in order to distinguish male from female bones. Although this technique is uncomplicated, saves time and cost, and allows gender to be determined fairly accurately (with an apparent accuracy rate of 90% or more), its crucial disadvantage is that only certain parts of the skeleton can be used to specify gender, such as the supraorbital ridge, nuchal crest, temporal lobe, mandible, and chin. The skeletal remains to be examined therefore have to be complete. The other technique widely used for gender specification in forensic science and archaeology is skeletal measurement. Its advantage is that several positions on one bone can be used, and it can be applied even if the bones are incomplete. In this study, classification and cluster analyses are applied to this technique, including the kth nearest neighbor classification, classification tree, Ward linkage cluster, k-means cluster, and two-step cluster methods.
The data contain 507 individuals and 9 skeletal (diameter) measurements, and the performance of the five methods is investigated by considering the apparent error rate (APER). The results of this study indicate that the two-step cluster and kth nearest neighbor methods seem suitable for specifying gender from human skeletal remains, because they yield small apparent error rates of 0.20% and 4.14%, respectively. On the other hand, the classification tree, Ward linkage cluster, and k-means cluster methods are not appropriate, since they yield large apparent error rates of 10.65%, 10.65%, and 16.37%, respectively. However, there are other ways to evaluate classification performance, such as estimating the error rate using the holdout procedure or misclassification costs, and different methods can lead to different conclusions.
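The apparent error rate is simply the fraction of the training observations that the fitted rule misclassifies. A minimal sketch with a 1-nearest-neighbour rule on made-up two-measurement "skeletal" data follows; these eight records are invented and stand in for the study's 507 individuals and 9 measurements.

```python
# APER for a 1-nearest-neighbour rule on toy data (values invented).

def nearest_neighbor_label(point, data):
    """Label of the closest training point, excluding the point itself."""
    best = min((d for d in data if d[0] != point),
               key=lambda d: sum((a - b) ** 2 for a, b in zip(d[0], point)))
    return best[1]

# ((measurement_1, measurement_2), recorded_sex) -- hypothetical records
train = [((43.0, 30.1), "M"), ((44.2, 31.0), "M"), ((45.1, 30.5), "M"),
         ((39.0, 27.2), "F"), ((38.4, 26.9), "F"), ((40.1, 27.8), "F"),
         ((42.5, 28.0), "M"), ((41.5, 28.3), "F")]

errors = sum(1 for x, label in train
             if nearest_neighbor_label(x, train) != label)
aper = errors / len(train)
print(round(aper, 3))  # misclassified fraction of the training set
```

Because APER reuses the training data for evaluation, it is optimistically biased, which is exactly why the abstract notes that a holdout estimate may lead to different conclusions.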

Keywords: skeletal measurements, classification, cluster, apparent error rate

Procedia PDF Downloads 252
11287 Main Control Factors of Fluid Loss in Drilling and Completion in Shunbei Oilfield by Unmanned Intervention Algorithm

Authors: Peng Zhang, Lihui Zheng, Xiangchun Wang, Xiaopan Kou

Abstract:

Quantitative research on the main control factors of lost circulation has received little attention and usually relies on a single data source. Using an unmanned intervention algorithm to find the main control factors of lost circulation allows all measurable parameters to be adopted. The degree of lost circulation is characterized by the loss rate as the objective function. Geological, engineering and fluid data are used as layers, and 27 factors such as wellhead coordinates and WOB are used as dimensions. Data classification is implemented to determine the independent variables of the function. The mathematical equation relating the loss rate to the 27 influencing factors is established by the multiple regression method, and the undetermined coefficients of the equation are solved by the undetermined coefficient method. Only three factors have t-test values greater than the test value of 40, and the F-test value is 96.557%, indicating that the correlation of the model is good. The funnel viscosity, final shear force and drilling time were selected as the main control factors by the elimination method, the contribution rate method and the functional method. The calculated values for the two wells used for verification differ from the actual values by -3.036 m3/h and -2.374 m3/h, errors of 7.21% and 6.35%. The influence of the engineering factor on the loss rate is greater than that of funnel viscosity and final shear force, and the influence of all three factors is less than that of the geological factors. The best combination of funnel viscosity, final shear force and drilling time is calculated quantitatively; the minimum loss rate of lost-circulation wells in the Shunbei area is 10 m3/h. It can be seen that the man-made main control factors can only slow the leakage down, but cannot fundamentally eliminate it. This is consistent with the characteristics of karst caves and fractures in the Shunbei fault-solution oil and gas reservoir.

Keywords: drilling and completion, drilling fluid, lost circulation, loss rate, main controlling factors, unmanned intervention algorithm

Procedia PDF Downloads 112
11286 Landfill Failure Mobility Analysis: A Probabilistic Approach

Authors: Ali Jahanfar, Brajesh Dubey, Bahram Gharabaghi, Saber Bayat Movahed

Abstract:

The ever-increasing population growth of major urban centers and the environmental challenges of siting new landfills have resulted in a growing trend toward mega-landfills, some with extraordinary heights and dangerously steep slopes. Landfill failure mobility risk analysis is one of the most uncertain types of dynamic rheology modeling, owing to the very large inherent variability in the shear strength properties of heterogeneous solid waste. The waste flows of three historic dumpsite failures and two landfill failures were back-analyzed using run-out modeling with the DAN-W model. The travel distances of the waste flow during landfill failures were calculated with an approach that takes into account the variability in material shear strength properties. The probability distribution functions for the shear strength properties of the waste were grouped into four major classes based on waste compaction (landfills versus dumpsites) and on the quantity (high versus low) of high-shear-strength materials such as wood, metal, plastic, paper and cardboard in the waste. This paper presents a probabilistic method for estimating the spatial extent of waste avalanches after a potential landfill failure, so as to create maps of vulnerability scores that inform property owners and residents of the level of risk.
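The probabilistic idea can be sketched by sampling a shear-strength parameter from a class-specific distribution and turning each sample into a run-out distance. For transparency the sketch below uses a simple sliding-block (Coulomb friction) energy balance, L = H / mu, rather than the DAN-W rheology itself; the distribution parameters and geometry are assumptions for illustration.

```python
# Monte Carlo run-out sketch: sample a friction coefficient per waste class,
# convert each sample to a travel distance, and report percentiles.
# All parameters below are hypothetical.
import random
import statistics

def runout_distance(height_m, friction_coeff):
    """Sliding-block estimate: travel stops when friction work equals
    the released potential energy, i.e. L = H / mu."""
    return height_m / friction_coeff

def simulate(height_m, mu_mean, mu_sd, n=10_000, seed=42):
    rng = random.Random(seed)
    samples = []
    for _ in range(n):
        mu = max(0.05, rng.gauss(mu_mean, mu_sd))  # keep mu physical
        samples.append(runout_distance(height_m, mu))
    return sorted(samples)

# Hypothetical "compacted landfill, high-strength waste" class
runouts = simulate(height_m=50.0, mu_mean=0.4, mu_sd=0.08)
median = statistics.median(runouts)
p95 = runouts[int(0.95 * len(runouts))]
print(round(median, 1), round(p95, 1))  # typical and near-worst-case distance
```

A vulnerability map then follows by scoring each parcel of land against the exceedance probability of the sampled run-out distances for the relevant waste class.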

Keywords: landfill failure, waste flow, Voellmy rheology, friction coefficient, waste compaction and type

Procedia PDF Downloads 290
11285 Analysis of Backward Supply Chain in Beverages Industry of Pakistan

Authors: Faisal Mehmood

Abstract:

In this era of globalization, supply chain management has acquired strategic importance in diverse business environments. In today's highly competitive business environment, the success of any business depends considerably on the efficiency of its supply chain. Management has now realized that the inefficiency of any member of the supply chain affects the profitability of the business. This paper proposes an analysis of the backward supply chain in the beverages industry of Pakistan. Although the reuse of products and materials is a common phenomenon, companies have long ignored this important part of the supply chain, known as the backward supply chain or reverse logistics. The beverage industry is among the pioneers of the backward supply chain, or reverse logistics, in Pakistan: empty glass bottles are returned from the point of consumption to the warehouse for refilling and reuse. Due to the lack of information on the reverse flow of logistics and the greater attention paid to forward distribution, the beverages industry in Pakistan faces a high rate of inefficiency and ineffectiveness. The backward or reverse logistics practiced in the beverages industry is the subject of this study, in which a framework reflecting the current needs of the market will be developed.

Keywords: backward supply chain, reverse logistics, refilling, re-usability

Procedia PDF Downloads 348
11284 Simulation and Optimization of an Annular Methanol Reformer

Authors: Shu-Bo Yang, Wei Wu, Yuan-Heng Liu

Abstract:

This research aims to design a heat-exchanger type of methanol reformer coupled with a preheating design in the gPROMS® environment. The endothermic methanol steam reforming (MSR) reaction and the exothermic preferential oxidation (PROX) reaction occur in the inner and outer tubes of the reformer, respectively. The most effective heat transfer arrangement between the inner and outer tubes is investigated, and it is verified that a countercurrent-flow reformer provides a higher hydrogen yield than a cocurrent-flow one. Since the hot-spot temperature appears in the outer tube, an improved scheme is proposed to suppress it by splitting the excess air flow into two injection sites. Finally, an optimization algorithm for maximizing the hydrogen yield is employed to determine the optimal operating conditions.

Keywords: methanol reformer, methanol steam reforming, optimization, simulation

Procedia PDF Downloads 332
11283 Biocompatibility and Electrochemical Assessment of Biomedical Ti-24Nb-4Zr-8Sn Produced by Spark Plasma Sintering

Authors: Jerman Madonsela, Wallace Matizamhuka, Akiko Yamamoto, Ronald Machaka, Brendon Shongwe

Abstract:

In this study, the biocompatibility of a nanostructured near-beta Ti-24Nb-4Zr-8Sn (Ti2448) alloy with non-toxic elements, produced by spark plasma sintering (SPS) of very fine micro-sized powders obtained through mechanical alloying, was evaluated. The results were compared with pure titanium and the Ti-6Al-4V (Ti64) alloy. A cell proliferation test was performed using murine osteoblastic cells (MC3T3-E1) at two cell densities, 400 and 4000 cells/mL, over 7 days of incubation. Pure titanium took the lead under both conditions, suggesting that the presence of other oxide layers influences cell proliferation. No significant difference in cell proliferation was observed between Ti64 and Ti2448. Potentiodynamic measurements in Hanks' solution, 0.9% NaCl and cell culture medium showed no distinct difference between the anodic polarization curves of the three alloys, indicating that the same anodic reaction occurred on their surfaces, but at different rates. However, Ti2448 showed better corrosion resistance in cell culture medium, with a slightly lower corrosion rate of 2.96 nA/cm2 compared to 4.86 nA/cm2 for Ti and 5.62 nA/cm2 for Ti64. Ti2448 adsorbed less protein than Ti and Ti64, though no notable difference in surface wettability was observed.

Keywords: biocompatibility, osteoblast, corrosion, surface wettability, protein adsorption

Procedia PDF Downloads 222
11282 Unveiling Special Policy Regime, Judgment, and Taylor Rules in Tunisia

Authors: Yosra Baaziz, Moez Labidi

Abstract:

Given the limited research on monetary policy rules in countries undergoing revolution, this paper challenges the suitability of the Taylor rule for characterizing the monetary policy behavior of the Tunisian Central Bank (BCT), especially in turbulent times. More specifically, we investigate the possibility that the Taylor rule should be formulated as a threshold process and examine the validity of such a nonlinear Taylor rule as a robust rule for conducting monetary policy in Tunisia. Using quarterly data from 1998:Q4 to 2013:Q4 to analyze the movement of the BCT's nominal short-term interest rate, we find that the nonlinear Taylor rule performs better in the presence of special events, thus providing a better description of Tunisian interest rate setting. In particular, our results show that adopting an appropriate nonlinear approach reduces the errors by 150 basis points in 1999 and 2009, and by 60 basis points in 2011, relative to the linear approach.
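A threshold (regime-switching) Taylor rule of the kind the paper estimates can be sketched as a linear rule whose coefficients change when a state variable, here inflation, crosses a threshold. All coefficients, the target, and the threshold below are illustrative textbook-style values, not the estimated BCT parameters.

```python
# Linear vs. threshold Taylor rule (all coefficients hypothetical).

def taylor_rate(inflation, output_gap, r_star=2.0, pi_target=2.0):
    """Classic linear Taylor rule: i = r* + pi + 0.5(pi - pi*) + 0.5*gap."""
    return r_star + inflation + 0.5 * (inflation - pi_target) + 0.5 * output_gap

def threshold_taylor_rate(inflation, output_gap, threshold=4.0):
    """Below the inflation threshold, behave linearly; above it, switch to a
    regime that reacts more aggressively to inflation deviations."""
    if inflation <= threshold:
        return taylor_rate(inflation, output_gap)
    # high-inflation regime: stronger inflation response, weaker gap response
    return 2.0 + inflation + 1.0 * (inflation - 2.0) + 0.25 * output_gap

for pi in (2.0, 3.0, 5.0):
    print(pi, round(taylor_rate(pi, 1.0), 2),
          round(threshold_taylor_rate(pi, 1.0), 2))
```

Estimation then amounts to choosing the threshold and the two coefficient sets that minimize the interest-rate fitting errors, which is how the basis-point error reductions reported above are obtained.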

Keywords: policy rule, central bank, exchange rate, taylor rule, nonlinearity

Procedia PDF Downloads 296