Search results for: broken rate
1870 A Perceptually Optimized Wavelet Embedded Zero Tree Image Coder
Authors: A. Bajit, M. Nahid, A. Tamtaoui, E. H. Bouyakhf
Abstract:
In this paper, we propose a Perceptually Optimized Embedded ZeroTree Image Coder (POEZIC) that introduces a perceptual weighting of the wavelet transform coefficients prior to controlling the SPIHT encoding algorithm, in order to reach a targeted bit rate with an improvement in perceptual quality with respect to the coding quality obtained using the SPIHT algorithm alone. The paper also introduces a new objective quality metric based on a psychovisual model that integrates the properties of the HVS, which plays an important role in our POEZIC quality assessment. Our POEZIC coder is based on a vision model that incorporates various masking effects of human visual system (HVS) perception. Thus, our coder weights the wavelet coefficients based on that model and attempts to increase the perceptual quality for a given bit rate and observation distance. The perceptual weights for all wavelet subbands are computed based on 1) luminance masking and contrast masking, 2) the contrast sensitivity function (CSF), to achieve the perceptual decomposition weighting, and 3) the wavelet error sensitivity (WES), used to reduce the perceptual quantization errors. The new perceptually optimized codec has the same complexity as the original SPIHT technique. However, the experimental results show that our coder demonstrates very good performance in terms of quality measurement.
Keywords: DWT, linear-phase 9/7 filter, 9/7 Wavelets Error Sensitivity WES, CSF implementation approaches, JND Just Noticeable Difference, Luminance masking, Contrast masking, standard SPIHT, Objective Quality Measure, Probability Score PS.
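The per-subband weighting idea can be illustrated with a short sketch. As a stand-in for the coder's own CSF/WES model (which is not reproduced in the abstract), the classic Mannos–Sakrison contrast sensitivity function is evaluated at each decomposition level's centre spatial frequency; the viewing-distance-dependent Nyquist frequency is an assumed input.

```python
import numpy as np

def csf_mannos_sakrison(f):
    """Contrast sensitivity at spatial frequency f (cycles/degree); classic approximation."""
    return 2.6 * (0.0192 + 0.114 * f) * np.exp(-(0.114 * f) ** 1.1)

def subband_weights(n_levels, nyquist_cpd):
    """Illustrative per-level CSF weights for a dyadic wavelet decomposition.

    nyquist_cpd: maximum spatial frequency in cycles/degree, which depends on the
    viewing distance and display resolution (assumed known).
    """
    weights = {}
    for level in range(1, n_levels + 1):
        centre = nyquist_cpd / (2 ** level)      # approximate centre frequency of the level
        weights[level] = csf_mannos_sakrison(centre)
    return weights

# Example: 5-level DWT, 32 cycles/degree at Nyquist (hypothetical viewing setup)
print(subband_weights(5, 32.0))
```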
1869 Hybrid Equity Warrants Pricing Formulation under Stochastic Dynamics
Authors: Teh Raihana Nazirah Roslan, Siti Zulaiha Ibrahim, Sharmila Karim
Abstract:
A warrant is a financial contract that confers the right, but not the obligation, to buy or sell a security at a certain price before expiration. The standard procedure of valuing equity warrants using call option pricing models such as the Black–Scholes model has been shown to contain many flaws, such as the assumptions of constant interest rate and constant volatility. In fact, existing alternative models were found to focus more on demonstrating pricing techniques than on empirical testing. Therefore, a mathematical model for pricing and analyzing equity warrants which comprises stochastic interest rates and stochastic volatility is essential to incorporate the dynamic relationships between the identified variables and to reflect the real market. Here, the aim is to develop dynamic pricing formulations for hybrid equity warrants by incorporating stochastic interest rates from the Cox-Ingersoll-Ross (CIR) model, along with stochastic volatility from the Heston model. The development of the model involves the derivation of the stochastic differential equations that govern the model dynamics. The resulting equations, which involve a Cauchy problem and heat equations, are then solved using partial differential equation approaches. The analytical pricing formulas obtained in this study comply with the form of the analytical expressions embedded in the Black-Scholes model and other existing pricing models for equity warrants. This facilitates the practicality of the proposed formula for comparison purposes and further empirical study.
Keywords: Cox-Ingersoll-Ross model, equity warrants, Heston model, hybrid models, stochastic.
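A minimal Monte Carlo sketch of the hybrid dynamics referred to above (Heston stochastic volatility plus a CIR short rate) is given below. It uses a full-truncation Euler scheme, hypothetical parameter values and zero correlation between the Brownian drivers, and prices a plain European call as a stand-in payoff; it illustrates the model structure only, not the paper's closed-form pricing formula.

```python
import numpy as np

def simulate_heston_cir(S0=100.0, v0=0.04, r0=0.03, T=1.0, n_steps=252, n_paths=20000,
                        kappa=2.0, theta=0.04, sigma_v=0.3,   # Heston variance parameters (assumed)
                        a=1.5, b=0.03, sigma_r=0.1, seed=0):  # CIR short-rate parameters (assumed)
    rng = np.random.default_rng(seed)
    dt = T / n_steps
    S = np.full(n_paths, S0)
    v = np.full(n_paths, v0)
    r = np.full(n_paths, r0)
    discount = np.zeros(n_paths)
    for _ in range(n_steps):
        z1, z2, z3 = rng.standard_normal((3, n_paths))
        vp = np.maximum(v, 0.0)                 # full truncation keeps variance non-negative
        rp = np.maximum(r, 0.0)
        discount += rp * dt
        S *= np.exp((rp - 0.5 * vp) * dt + np.sqrt(vp * dt) * z1)
        v += kappa * (theta - vp) * dt + sigma_v * np.sqrt(vp * dt) * z2
        r += a * (b - rp) * dt + sigma_r * np.sqrt(rp * dt) * z3
    return S, np.exp(-discount)

# Price a European call (K = 100) as a simplified equity-warrant-like payoff
S_T, D = simulate_heston_cir()
print("MC price:", np.mean(D * np.maximum(S_T - 100.0, 0.0)))
```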
1868 Computer Countenanced Diagnosis of Skin Nodule Detection and Histogram Augmentation: Extracting System for Skin Cancer
Authors: S. Zith Dey Babu, S. Kour, S. Verma, C. Verma, V. Pathania, A. Agrawal, V. Chaudhary, A. Manoj Puthur, R. Goyal, A. Pal, T. Danti Dey, A. Kumar, K. Wadhwa, O. Ved
Abstract:
Background: Skin cancer has become a pressing issue in medical science, and its rising incidence is severely affecting health and well-being worldwide. Methods: The extracted image of a skin tumor cannot be used directly for diagnosis, because the stored image contains disturbances around the region of interest. The proposed approach first locates the lesion region in the extracted skin image, and image partitioning (segmentation) models are then applied to remove the disturbances in the picture. Results: After partitioning, feature extraction is performed using a genetic algorithm (GA) and, finally, classification is carried out between the training and test data so that large sets of images can be evaluated, helping doctors reach the right prediction. To improve on the existing system, objectives were set and analyzed; the efficiency of the selection process and of histogram enrichment is essential in this respect, and the GA is applied to reduce the false-positive rate. Conclusions: The objective of this work is to improve effectiveness, and the GA performs its task well in bringing down the false-positive rate. The concluding part of the paper brings together deep learning and medical image processing, which provides superior accuracy, and the comparable processing steps allow reuse without errors.
Keywords: Computer-aided system, detection, image segmentation, morphology.
1867 Feasibility Study for a Castor oil Extraction Plant in South Africa
Authors: Mohamed Belaid, Edison Muzenda, Getrude Mitilene, Mansoor Mollagee
Abstract:
A feasibility study for the design and construction of a pilot plant for the extraction of castor oil in South Africa was conducted. The study emphasized the four critical aspects of project feasibility analysis, namely the technical, financial, market and managerial aspects. The technical aspect involved research on existing oil extraction technologies, namely mechanical pressing and solvent extraction, as well as assessment of the proposed production site for both the short- and long-term viability of the project. The site is on the outskirts of Nkomazi village in the Mpumalanga province, where connections for water and electricity are currently underway; the potential raw material supply appears reliable, since the province is known for its commercial farming. The managerial aspect was evaluated based on the fact that the current producer of castor oil will be fully involved in the project while receiving training and technical assistance from Sasol Technology, the TSC and SEDA. The market and financial aspects were evaluated, and the project was considered financially viable with a Net Present Value (NPV) of R2 731 687 and an Internal Rate of Return (IRR) of 18% at an annual interest rate of 10.5%. The payback time is 6 years for analysis over the first 10 years, with a net income of R1 971 000 in the first year. The project was thus found to be feasible with a high chance of success while contributing to socio-economic development. It was recommended that laboratory tests be conducted to establish the process kinetics to be used in the initial design of the plant.
Keywords: Mechanical pressing, Net Present Value, Oil extraction, Project feasibility, Solvent extraction
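As a rough illustration of the financial-aspect calculations, the sketch below computes NPV, IRR and simple payback for a hypothetical 10-year cash-flow series at the stated 10.5% discount rate. The cash flows are invented for illustration and are not the study's actual project figures.

```python
# Illustrative project appraisal: NPV, IRR and payback for assumed cash flows.
def npv(rate, cash_flows):
    """cash_flows[0] is the initial (negative) investment at year 0."""
    return sum(cf / (1.0 + rate) ** t for t, cf in enumerate(cash_flows))

def irr(cash_flows, lo=-0.99, hi=1.0, tol=1e-7):
    """Bisection on the NPV sign change (assumes a conventional cash-flow profile)."""
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if npv(mid, cash_flows) > 0:
            lo = mid
        else:
            hi = mid
        if hi - lo < tol:
            break
    return 0.5 * (lo + hi)

def payback_years(cash_flows):
    cumulative = 0.0
    for t, cf in enumerate(cash_flows):
        cumulative += cf
        if cumulative >= 0:
            return t
    return None

# Hypothetical figures (Rand): initial outlay followed by 10 years of net income.
flows = [-9_000_000] + [1_971_000] * 10
print(f"NPV @10.5%: R{npv(0.105, flows):,.0f}")
print(f"IRR: {irr(flows):.1%}")
print(f"Payback: {payback_years(flows)} years")
```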
1866 Comparison of Different Gas Turbine Inlet Air Cooling Methods
Authors: Ana Paula P. dos Santos, Claudia R. Andrade, Edson L. Zaparoli
Abstract:
Gas turbine inlet air cooling is a useful method for increasing output in regions where significant power demand and the highest electricity prices occur during the warm months. Inlet air cooling increases the power output by taking advantage of the gas turbine's higher mass flow rate when the compressor inlet temperature decreases. Different methods are available for reducing the gas turbine inlet temperature. There are two basic systems currently available for inlet cooling. The first and most cost-effective system is evaporative cooling. Evaporative coolers make use of the evaporation of water to reduce the gas turbine's inlet air temperature. The second system employs various ways to chill the inlet air. In this method, the cooling medium flows through a heat exchanger located in the inlet duct to remove heat from the inlet air. However, evaporative cooling is limited by the wet-bulb temperature, while chilling can cool the inlet air to temperatures lower than the wet-bulb temperature. In the present work, a thermodynamic model of a gas turbine is built to calculate heat rate, power output and thermal efficiency at different inlet air temperature conditions. Computational results are compared with ISO conditions, herein called the "base case". The two cooling methods are then implemented and solved for different inlet conditions (inlet temperature and relative humidity). The evaporative cooler and absorption chiller results show that when the ambient temperature is extremely high with low relative humidity (requiring a large temperature reduction), the chiller is the more suitable cooling solution. The net increment in the power output as a function of the temperature decrease for each cooling method is also obtained.
Keywords: Absorption chiller, evaporative cooling, gas turbine, turbine inlet cooling.
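A back-of-the-envelope sketch of why inlet cooling raises output: for a roughly fixed volumetric intake, the air mass flow scales with inlet air density (ideal-gas law), so cooling the inlet air increases mass flow and, approximately, power. The proportionality assumption and the temperatures below are illustrative only, not the paper's thermodynamic model.

```python
# Approximate relative power gain from gas-turbine inlet air cooling.
# Assumes power is roughly proportional to inlet air density (fixed volumetric flow).
P_ATM = 101.325e3   # Pa
R_AIR = 287.05      # J/(kg K), dry air

def air_density(temp_c, pressure=P_ATM):
    return pressure / (R_AIR * (temp_c + 273.15))

def relative_power_gain(t_ambient_c, t_cooled_c):
    return air_density(t_cooled_c) / air_density(t_ambient_c) - 1.0

# Example: cooling 38 degC ambient air down to ISO 15 degC inlet conditions
gain = relative_power_gain(38.0, 15.0)
print(f"Approximate power gain: {gain:.1%}")   # ~8% from the density ratio alone
```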
1865 Qanat (Subterranean Canal) Role in Traditional Cities and Settlements Formation of Hot-Arid Regions of Iran
Authors: Karim Shiraazi, Mahyar Asheghi Milani, Alireza Sadeghi, Eram Azami, Ahadollah Azami
Abstract:
A Qanat is a passive water-supply system consisting of a series of underground wells. A mother well is dug far from the city, where the water table can be reached perhaps 100 meters underground, and further wells are then dug to direct the water toward the city with the minimum possible gradient. Using the slope of the ground, water can thus be brought close to the surface within the city. The source of the water (the outlet of the Qanat), the land slope and the ownership lines are important and effective factors in the formation of routes and the division of land into segments, to the extent that the use of Qanats as a technique for extracting groundwater creates a network of routes with an organic order and hierarchy that follows the slope of the land and guides the Qanat waters through the traditional fabric of the salt desert and its border provinces. Qanats are excavated at specified distances from each other. The quantity of water provided by a Qanat depends on the type of land, the distance from the mountains, the geographical situation and the rate of recharge from the underlying aquifer. The amount of groundwater, the possibility of Qanat excavation, the number of Qanats and the rate of their water supply on the one hand, and the quantity of cultivable fertile land on the other, are the important natural factors determining the size of cities. In the same manner, cities with several Qanats have multi-centred textures. The location of cities is directly related to land quality, soil fertility and the possibility of using groundwater by excavating Qanats. Observing the allowable distance for Qanat watering is a determining factor for the distance between villages and cities. Topography, land slope, soil quality, the watering system, ownership, the kind of cultivation, etc. are the effective factors in directing Qanat excavation and guiding water toward the cultivable lands, and they also cause the formation of different textures in the land division of farming provinces. Several division patterns, such as orderly and wide, disorderly, thin and long, comb-like, etc., form the basis of this organic order, and at the same time they are in complete accordance with environmental conditions, in a typical example of ecological architecture and planning in the order of traditional cities and settlements.
Keywords: Qanat, Settlement Formation, Hot-Arid Region, Sustainable Development
1864 Agreement between Basal Metabolic Rate Measured by Bioelectrical Impedance Analysis and Estimated by Prediction Equations in Obese Groups
Authors: Orkide Donma, Mustafa M. Donma
Abstract:
Basal metabolic rate (BMR) is a widely used and accepted measure of energy expenditure. Its principal determinant is body mass. However, this parameter is also correlated with a variety of other factors. The objective of this study is to measure BMR and compare it with the values obtained from predictive equations in adults classified according to their body mass index (BMI) values. 276 adults were included in the scope of this study. Their age, height and weight values were recorded. Five groups were designed based on their BMI values. The first group (n = 85) was composed of individuals with BMI values varying between 18.5 and 24.9 kg/m2. Those with BMI values varying from 25.0 to 29.9 kg/m2 constituted Group 2 (n = 90). Individuals with 30.0-34.9 kg/m2, 35.0-39.9 kg/m2 and > 40.0 kg/m2 were included in Groups 3 (n = 53), 4 (n = 28) and 5 (n = 20), respectively. The most commonly used equations were selected for comparison with the measured BMR values. For this purpose, the values were calculated by the use of four equations to predict BMR, namely those introduced by the Food and Agriculture Organization (FAO)/World Health Organization (WHO)/United Nations University (UNU), Harris and Benedict, Owen, and Mifflin. Descriptive statistics, ANOVA, post-hoc Tukey and Pearson's correlation tests were performed with a statistical program designed for Windows (SPSS, version 16.0). p values smaller than 0.05 were accepted as statistically significant. The mean ± SD values of measured BMR (in kcal) for groups 1, 2, 3, 4 and 5 were 1440.3 ± 210.0, 1618.8 ± 268.6, 1741.1 ± 345.2, 1853.1 ± 351.2 and 2028.0 ± 412.1, respectively. Upon evaluation of the comparison of means among groups, differences were highly significant between Group 1 and each of the remaining four groups. The values increased from Group 2 to Group 5. However, differences between Group 2 and Group 3, Group 3 and Group 4, and Group 4 and Group 5 were not statistically significant. These insignificances were lost in the predictive equations proposed by Harris and Benedict, FAO/WHO/UNU and Owen. For Mifflin, the insignificance was limited only to Group 4 and Group 5. Upon evaluation of the correlations between measured BMR and the estimated values computed from the prediction equations, the lowest correlations were observed among the individuals within the normal BMI range. The highest correlations were detected in individuals with BMI values varying between 30.0 and 34.9 kg/m2. Correlations between measured BMR values and BMR values calculated by FAO/WHO/UNU as well as Owen were the same and the highest. In all groups, the highest correlations were observed between BMR values calculated from the Mifflin and Harris and Benedict equations, which use age as an additional parameter. In conclusion, the close resemblance between the FAO/WHO/UNU and Owen equations was pointed out. However, the mean values obtained from FAO/WHO/UNU were much closer to the measured BMR values. Besides, the highest correlations were found between BMR calculated from FAO/WHO/UNU and measured BMR. These findings suggest that FAO/WHO/UNU is the most reliable equation, which may be used in conditions where measured BMR values are not available.
Keywords: Adult, basal metabolic rate, FAO/WHO/UNU, obesity, prediction equations.
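For reference, a sketch of two of the prediction equations compared in the study (Harris–Benedict and Mifflin–St Jeor) is given below, using their commonly published coefficients. The FAO/WHO/UNU and Owen equations are omitted here because their age-band coefficients vary between sources and the exact versions used by the authors are not reproduced in the abstract; the example subject is hypothetical.

```python
def bmr_harris_benedict(weight_kg, height_cm, age_yr, sex):
    """Original Harris-Benedict equations (kcal/day), commonly cited coefficients."""
    if sex == "male":
        return 66.47 + 13.75 * weight_kg + 5.003 * height_cm - 6.755 * age_yr
    return 655.1 + 9.563 * weight_kg + 1.850 * height_cm - 4.676 * age_yr

def bmr_mifflin(weight_kg, height_cm, age_yr, sex):
    """Mifflin-St Jeor equations (kcal/day)."""
    base = 10.0 * weight_kg + 6.25 * height_cm - 5.0 * age_yr
    return base + 5.0 if sex == "male" else base - 161.0

# Example subject (hypothetical): 95 kg, 165 cm, 45-year-old woman (obese BMI range)
print(bmr_harris_benedict(95, 165, 45, "female"))   # ~1658 kcal/day
print(bmr_mifflin(95, 165, 45, "female"))           # ~1595 kcal/day
```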
1863 Effect of Reynolds Number and Concentration of Biopolymer (Gum Arabic) on Drag Reduction of Turbulent Flow in Circular Pipe
Authors: Kamaljit Singh Sokhal, Gangacharyulu Dasoraju, Vijaya Kumar Bulasara
Abstract:
Biopolymers are popular in many areas, such as petrochemicals, the food industry and agriculture, owing to favorable properties such as environmental friendliness, availability and cost. In this study, the biopolymer gum Arabic was used to determine its effect on the pressure drop at various concentrations (100 ppm – 300 ppm) and various Reynolds numbers (10000 – 45000). A rheological study was also carried out at the same concentrations to find the effect of the shear rate on the shear viscosity. Experiments were performed to examine the effect of injecting gum Arabic directly near the boundary layer and to investigate its effect on the maximum possible drag reduction. Experiments were performed on a test section having an inner diameter of 19.50 mm and a length of 3045 mm. The polymer solution was injected from the top of the test section using a peristaltic pump. The concentration of the polymer solution and the Reynolds number were used as parameters to obtain the maximum possible drag reduction. Water was circulated by a centrifugal pump with a maximum speed of 3000 rpm, and the flow rate was measured using a rotameter. Results were validated against Virk's maximum drag reduction asymptote. A maximum drag reduction of 62.15% was observed with the highest concentration of gum Arabic, 300 ppm. The solution was circulated in the closed loop to find the effect of polymer degradation over a number of cycles on the drag reduction percentage. It was observed that injection of the polymer solution into the boundary layer gave better results than premixed solutions.
Keywords: Drag reduction, shear viscosity, gum Arabic, injection point.
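The drag reduction percentage quoted above is conventionally computed from the pressure drops measured with and without polymer at the same flow rate, and the result is checked against Virk's maximum drag reduction asymptote. The sketch below shows that calculation; the Virk correlation is written in its commonly cited Prandtl–von Kármán form, and the sample pressure-drop readings are invented.

```python
import numpy as np
from scipy.optimize import brentq

def drag_reduction_percent(dp_solvent, dp_polymer):
    """%DR from pressure drops at the same flow rate."""
    return 100.0 * (dp_solvent - dp_polymer) / dp_solvent

def virk_fanning_friction(Re):
    """Virk's maximum drag reduction asymptote (Fanning friction factor), commonly
    cited as 1/sqrt(f) = 19.0*log10(Re*sqrt(f)) - 32.4; solved here numerically."""
    g = lambda f: 1.0 / np.sqrt(f) - (19.0 * np.log10(Re * np.sqrt(f)) - 32.4)
    return brentq(g, 1e-5, 0.05)

# Hypothetical pressure-drop readings (Pa) at Re = 45000
print("DR%:", drag_reduction_percent(dp_solvent=12.4e3, dp_polymer=4.7e3))
print("Virk Fanning f at Re=45000:", virk_fanning_friction(45000))
```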
1862 Generative Adversarial Network Based Fingerprint Anti-Spoofing Limitations
Authors: Yehjune Heo
Abstract:
Fingerprint anti-spoofing approaches have been actively developed and applied in real-world applications. One of the main problems of fingerprint anti-spoofing is that it is not robust to unseen samples, especially in real-world scenarios. A possible solution is to generate artificial but realistic fingerprint samples and use them for training in order to achieve good generalization. This paper contains experimental and comparative results with currently popular GAN-based methods and uses realistic synthesis of fingerprints in training in order to increase the performance. Among the various GAN models, the most popular, StyleGAN, is used for the experiments. The CNN models were first trained with a dataset that did not contain generated fake images, and the accuracy along with the mean average error rate were recorded. Then, the generated fake images (fake images of live fingerprints and fake images of spoof fingerprints) were each combined with the original images (real images of live fingerprints and real images of spoof fingerprints), and various CNN models were trained. The best performance of each CNN model trained with the dataset containing generated fake images was recorded, along with the accuracy and the mean average error rate. We observe that current GAN-based approaches need significant improvements in anti-spoofing performance, although the overall quality of the synthesized fingerprints seems to be reasonable. We include an analysis of this performance degradation, especially with a small number of samples. In addition, we suggest several approaches towards improved generalization with a small number of samples, by focusing on what GAN-based approaches should and should not learn.
Keywords: Anti-spoofing, CNN, fingerprint recognition, GAN.
1861 Catalytic Decomposition of Potassium Monopersulfate. The Kinetics
Authors: Olga Gimeno, Javier Rivas, Maria Carbajo, Teresa Borralho
Abstract:
Potassium monopersulfate has been decomposed in aqueous solution in the presence of Co(II). The process has been simulated by means of a mechanism based on elementary reactions. Rate constants have been taken from literature reports or, alternatively, assimilated to analogous reactions occurring in Fenton's chemistry. Several operating conditions have been successfully applied.
Keywords: Monopersulfate, Oxone®, Sulfate radicals, Water treatment
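A generic sketch of the simulation approach, integrating a set of elementary reactions as coupled ODEs, is shown below for a single illustrative initiation step (Co(II) + HSO5- -> Co(III) + SO4 radical + OH-) with an invented rate constant. The authors' full mechanism and fitted constants are not reproduced here.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Illustrative initiation step only: Co(II) + HSO5- -> Co(III) + SO4.- + OH-
k1 = 1.0e2   # L mol^-1 s^-1, hypothetical value for illustration

def rhs(t, y):
    pms, co2, so4 = y                 # [HSO5-], [Co(II)], [SO4.-]
    r1 = k1 * pms * co2
    return [-r1, -r1, r1]

# Initial concentrations (mol/L), also hypothetical
sol = solve_ivp(rhs, (0.0, 600.0), [1e-3, 1e-5, 0.0], max_step=1.0)
print("Remaining monopersulfate fraction:", sol.y[0, -1] / 1e-3)
```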
1860 Estimation of Bio-Kinetic Coefficients for Treatment of Brewery Wastewater
Authors: Abimbola M. Enitan, Josiah Adeyemo
Abstract:
Anaerobic modeling is a useful tool to describe and simulate the condition and behaviour of anaerobic treatment units for better effluent quality and biogas generation. The present investigation deals with the anaerobic treatment of brewery wastewater with varying organic loads. The chemical oxygen demand (COD) and total suspended solids (TSS) of the influent and effluent of the bioreactor were determined at various retention times to generate data for the kinetic coefficients. The bio-kinetic coefficients in the modified Stover–Kincannon kinetic model and the methane generation model were determined to study the performance of the anaerobic digestion process. At steady state, the kinetic coefficient (K), the endogenous decay coefficient (Kd), the maximum growth rate of microorganisms (μmax), the growth yield coefficient (Y), the ultimate methane yield (Bo), the maximum utilization rate constant (Umax) and the saturation constant (KB) in the model were calculated to be 0.046 g/g COD, 0.083 d⁻¹, 0.117 d⁻¹, 0.357 g/g, 0.516 L CH4/g CODadded, 18.51 g/L/day and 13.64 g/L/day, respectively. The outcome of this study will help in the simulation of an anaerobic model to predict usable methane and good effluent quality during the treatment of industrial wastewater. This will protect the environment, conserve natural resources, and save time and cost incurred by industries for the discharge of untreated or partially treated wastewater. It will also contribute to a sustainable long-term clean development mechanism for the optimization of the methane produced from anaerobic degradation of waste in a closed system.
Keywords: Brewery wastewater, methane generation model, environment, anaerobic modeling.
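The modified Stover–Kincannon model mentioned above is usually fitted in its linearized form, V/(Q(S0−S)) = (KB/Umax)·V/(Q·S0) + 1/Umax, so that Umax and KB follow from a straight-line regression. A sketch of that fit is given below with invented steady-state data; the coefficients reported in the abstract come from the authors' own measurements.

```python
import numpy as np

# Hypothetical steady-state data: flow Q (L/d), reactor volume V (L),
# influent and effluent COD S0, S (g/L).
Q  = np.array([5.0, 8.0, 12.0, 16.0, 20.0])
V  = 10.0
S0 = np.array([2.0, 2.5, 3.0, 3.5, 4.0])
S  = np.array([0.18, 0.28, 0.42, 0.60, 0.85])

x = V / (Q * S0)           # inverse organic loading rate (d L / g)
y = V / (Q * (S0 - S))     # inverse substrate removal rate (d L / g)

slope, intercept = np.polyfit(x, y, 1)
Umax = 1.0 / intercept                 # g/L/d
KB = slope * Umax                      # g/L/d
print(f"Umax = {Umax:.2f} g/L/d, KB = {KB:.2f} g/L/d")
```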
1859 Performance of BLDC Motor under Kalman Filter Sensorless Drive
Authors: Yuri Boiko, Ci Lin, Iluju Kiringa, Tet Yeap
Abstract:
The performance of a permanent magnet brushless direct current (BLDC) motor controlled by a Kalman filter based position-sensorless drive is studied in terms of its dependence on variations of the system's parameters. The effects of changes in the system's parameters on the dynamic behavior of the state variables are verified. The closed-loop control scheme with the Kalman filter in the feedback line is simulated. Two separate data sampling modes are distinguished in analyzing the feedback output from the BLDC motor: (1) equal angular separation and (2) equal time intervals. In case (1), the data are collected at equal intervals of the rotor's angular position θi, i.e. keeping Δθ = const. In case (2), the data collection time points ti are separated by equal sampling time intervals, Δt = const. The effects of parameter changes on the sensorless control flow are demonstrated, in particular the reduction of instability torque ripples, switching spikes, and torque load balancing. It is specifically shown that an efficient suppression of commutation-induced instability torque ripples is achievable by selecting the sampling rate in the Kalman filter settings above a certain critical value. The computational cost of such suppression is shown to be higher for motors with lower inductance values of the windings.
Keywords: BLDC motor, Kalman filter, sensorless drive, state variables, instability torque ripples reduction, sampling rate.
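For readers unfamiliar with the estimator in the feedback line, a generic linear Kalman filter predict/update cycle is sketched below with a toy two-state model. The actual BLDC observer uses the motor's electrical and mechanical state equations and, in the paper, an extended formulation, so this is a structural illustration only.

```python
import numpy as np

def kalman_step(x, P, z, A, H, Q, R):
    """One predict/update cycle of a linear Kalman filter."""
    # Predict
    x_pred = A @ x
    P_pred = A @ P @ A.T + Q
    # Update
    S = H @ P_pred @ H.T + R
    K = P_pred @ H.T @ np.linalg.inv(S)          # Kalman gain
    x_new = x_pred + K @ (z - H @ x_pred)
    P_new = (np.eye(len(x)) - K @ H) @ P_pred
    return x_new, P_new

# Toy model: state = [angle, speed], measurement = noisy angle, sampling time dt
dt = 1e-3
A = np.array([[1.0, dt], [0.0, 1.0]])
H = np.array([[1.0, 0.0]])
Q = 1e-6 * np.eye(2)
R = np.array([[1e-3]])
x, P = np.zeros(2), np.eye(2)
for z in [0.01, 0.02, 0.035, 0.05]:              # fake angle measurements
    x, P = kalman_step(x, P, np.array([z]), A, H, Q, R)
print("Estimated [angle, speed]:", x)
```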
1858 Analysis of Climatic Strategies in Designing the Residential Buildings in Cold Dry Climate of Tabriz Metropolis to Reduce Air Pollution in Urban Environment
Authors: Shahryar Shaghaghi G., Paria Violette Shakiba, Gholamreza Irani
Abstract:
Nowadays, the Earth faces the serious problem of air pollution. The problem began with the industrial revolution and has accelerated in recent years, leading the planet toward ecological and environmental disaster. One of its results is global warming and the related increase in global temperature. The most important contributors to air pollution, especially in urban environments, are automobiles and residential buildings, which are the biggest consumers of fossil energy; if residential buildings, as a large share of the consumers of such energy, reduce their consumption rate, air pollution will decrease. Since metropolises are the main centers of air pollution in the world, assessment and analysis of efficient strategies for decreasing air pollution in such cities can lead to desirable and suitable results and can mitigate the problem at least at a critical level. Tabriz is one of the most important metropolises in northwest Iran, where about two million people live. Owing to its situation in a cold dry climate, it has a high rate of fossil energy consumption that causes air pollution in its urban environment. These two factors, being both a metropolis and in a cold dry climate, lead this article to analyze the strategies of climatic design in the old districts of the city and to apply them in the new districts of the future. These strategies can be used in this city and other similar cities and pave the way to reduced energy consumption and related air pollution, to the benefit of the whole world.
Keywords: Air pollution, Urban Environment, Metropolis, Residential building, Fossil energies.
1857 The DAQ Debugger for iFDAQ of the COMPASS Experiment
Authors: Y. Bai, M. Bodlak, V. Frolov, S. Huber, V. Jary, I. Konorov, D. Levit, J. Novy, D. Steffen, O. Subrt, M. Virius
Abstract:
In general, state-of-the-art Data Acquisition Systems (DAQ) in high energy physics experiments must satisfy high requirements in terms of reliability, efficiency and data rate capability. This paper presents the development and deployment of a debugging tool named DAQ Debugger for the intelligent, FPGA-based Data Acquisition System (iFDAQ) of the COMPASS experiment at CERN. Utilizing a hardware event builder, the iFDAQ is designed to be able to read out data at the experiment's average maximum rate of 1.5 GB/s. In complex software such as the iFDAQ, with thousands of lines of code, the debugging process is absolutely essential to reveal all software issues. Unfortunately, conventional debugging of the iFDAQ is not possible during real data taking. The DAQ Debugger is a tool for identifying a problem, isolating the source of the problem, and then either correcting the problem or determining a way to work around it. It provides a layer for easy integration into any process and has no impact on process performance. Based on the handling of system signals, the DAQ Debugger represents an alternative to the conventional debuggers provided by most integrated development environments. Whenever a problem occurs, it generates reports containing all the information needed for a deeper investigation and analysis. The DAQ Debugger was fully incorporated into all processes in the iFDAQ during the 2016 run. It helped to reveal remaining software issues and significantly improved the stability of the system in comparison with the previous run. In this paper, we present the DAQ Debugger from several perspectives and discuss it in detail.
Keywords: DAQ debugger, data acquisition system, FPGA, system signals, Qt framework.
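The tool itself is a C++/Qt component of the iFDAQ, but the underlying idea of registering handlers for system signals and dumping a report instead of dying silently can be sketched in a few lines. The Python version below (Unix signals, hypothetical report file name) illustrates the concept only, not the DAQ Debugger's implementation.

```python
import datetime
import faulthandler
import os
import signal
import sys
import traceback

REPORT_PATH = "crash_report.txt"
report_file = open(REPORT_PATH, "a")   # kept open so faulthandler can write to it

def write_report(signum, frame):
    """Dump a timestamped stack report when a signal arrives, then keep running."""
    report_file.write(f"--- signal {signum} at {datetime.datetime.now().isoformat()} ---\n")
    traceback.print_stack(frame, file=report_file)
    report_file.flush()

# Fatal signals (SIGSEGV, SIGFPE, SIGABRT, SIGBUS, SIGILL): faulthandler dumps the
# tracebacks of all threads to the report file before the process dies.
faulthandler.enable(report_file)

# Non-fatal signals become on-demand reports without stopping the process.
signal.signal(signal.SIGUSR1, write_report)
signal.signal(signal.SIGTERM, write_report)

print(f"PID {os.getpid()}: send SIGUSR1 to generate a report", file=sys.stderr)
signal.pause()   # wait for signals (Unix only)
```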
1856 Response Surface Methodology Approach to Defining Ultrafiltration of Steepwater from Corn Starch Industry
Authors: Zita I. Šereš, Ljubica P. Dokić, Dragana M. Šoronja Simović, Cecilia Hodur, Zsuzsanna Laszlo, Ivana Nikolić, Nikola Maravić
Abstract:
In this work, the concentration of steepwater from the corn starch industry is monitored using an ultrafiltration membrane. The aim was to examine the conditions of ultrafiltration of steepwater using a 2.5 nm membrane. The parameters varied during ultrafiltration were the transmembrane pressure and the flow rate, while the permeate flux and the dry matter content of the permeate and retentate were the dependent parameters constantly monitored during the process. The ultrafiltration experiments were conducted on samples of steepwater obtained from the starch wet milling plant "Jabuka" Pancevo. Ultrafiltration was carried out on a single-channel membrane of 250 mm length, with an inner diameter of 6.8 mm and an outer diameter of 10 mm. The membrane is made of α-Al2O3 with a TiO2 layer and was obtained from GEA (Germany). The experiments were carried out at flow rates ranging from 100 to 200 l h⁻¹ and transmembrane pressures of 1-3 bar. During the steepwater ultrafiltration experiments, the change in permeate flux, the dry matter content of the permeate and retentate, as well as the absorbance changes of the permeate and retentate were monitored. The experimental results showed that the flux reaches a maximum of about 40 l m⁻² h⁻¹. For the responses obtained after the experiments, a second-degree polynomial model was established to evaluate and quantify the influence of the variables. The quadratic equation fits the experimental values, with a coefficient of determination for the flux of 0.96. The dry matter content of the retentate increased by about 6%, while the dry matter content of the permeate was reduced by about 35-40%. During steepwater ultrafiltration, about 40% less dry matter remains in the permeate compared with the feed.
Keywords: Ultrafiltration, steepwater, starch industry, ceramic membrane.
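The "polynomial model of the second degree" mentioned above is a standard response surface fit of flux against transmembrane pressure and flow rate. A minimal ordinary-least-squares sketch of such a fit is given below with invented design points; the R² of 0.96 quoted in the abstract comes from the authors' measurements.

```python
import numpy as np

# Hypothetical design points: transmembrane pressure p (bar), flow rate q (L/h), flux J (L m^-2 h^-1)
p = np.array([1, 1, 2, 2, 3, 3, 1, 3, 2], dtype=float)
q = np.array([100, 200, 100, 200, 100, 200, 150, 150, 150], dtype=float)
J = np.array([18, 24, 27, 34, 31, 40, 21, 36, 30], dtype=float)

# Full second-degree model: J = b0 + b1 p + b2 q + b3 p^2 + b4 q^2 + b5 p q
X = np.column_stack([np.ones_like(p), p, q, p**2, q**2, p * q])
beta, *_ = np.linalg.lstsq(X, J, rcond=None)

J_hat = X @ beta
r2 = 1 - np.sum((J - J_hat) ** 2) / np.sum((J - J.mean()) ** 2)
print("coefficients:", np.round(beta, 4))
print("R^2 =", round(r2, 3))
```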
1855 Development of Affordable and Reliable Diagnostic Tools to Record Vital Parameters for Improving Health Care in Low Resources Settings
Authors: Mannan Mridha, Usama Gazay, Kosovare V. Aslani, Hugo Linder, Alice Ravizza, Carmelo de Maria
Abstract:
In most developing countries, although the vast majority of people live in rural areas, qualified medical doctors are not available there. Health care workers and paramedics, called village doctors or informal healthcare providers, are largely responsible for rural medical care. Mishaps due to wrong diagnosis and inappropriate medication have been causing serious suffering that is preventable. While innovators have created many devices, the vast majority of these technologies do not find applications addressing the needs and conditions of low-resource settings. The primary motive is to address the acute lack of affordable medical technologies for the poor in low-resource settings. A low-cost smart medical device that is portable, battery operated and can be used at any point of care has been developed to detect breathing rate, electrocardiogram (ECG) and arterial pulse rate, to improve diagnosis and monitoring of patients and thus improve care and safety. This simple and easy-to-use smart medical device can be used, managed and maintained effectively and safely by any health worker with some training. In order to empower the health workers and village doctors, our device is being further developed to integrate with ICT tools such as smartphones and to connect to medical experts wherever available, to manage serious health problems.
Keywords: Healthcare for low resources settings, health awareness education, improve patient care and safety, smart and affordable medical device.
1854 Integrating Geographic Information into Diabetes Disease Management
Authors: Tsu-Yun Chiu, Tsung-Hsueh Lu, Tain-Junn Cheng
Abstract:
Background: Traditional chronic disease management did not pay attention to the effects of geographic factors on compliance with the treatment regime, which resulted in geographic inequality in the outcomes of chronic disease management. This study aims to examine the geographic distribution and clustering of quality indicators of diabetes care. Method: We first extracted the address, demographic information and quality of care indicators (number of visits, complications, prescription and laboratory records) of patients with diabetes for 2014 from the medical information system of a medical center in Tainan City, Taiwan, and the patients' addresses were transformed into district- and village-level data. We then compared the differences in geographic distribution and clustering of the quality of care indicators between districts and villages. In addition to the descriptive results, rate ratios and 95% confidence intervals (CI) were estimated for the indices of care in order to compare the quality of diabetes care among different areas. Results: A total of 23,588 patients with diabetes were extracted from the hospital data system, of whom 12,716 patients' information and medical records were included in the following analysis. More than half of the subjects in this study were male and between 60-79 years old. Furthermore, the quality of diabetes care did indeed vary by geographic level; at the smaller level, clustered areas could be pointed out more specifically. Fuguo Village (of Yongkang District) and Zhiyi Village (of Sinhua District) were found to be "hotspots" for nephropathy and cerebrovascular disease, while Wangliau Village and Erwang Village (of Yongkang District) were "coldspots" with the lowest proportion of ≥80% compliance with blood lipid examination. On the other hand, Yuping Village (in Anping District) was the area with the lowest proportion of ≥80% compliance with all laboratory examinations. Conclusion: Beyond examining the geographic distribution, calculating rate ratios and their 95% CI is also a useful and consistent method to test the association. This information is useful for health planners, diabetes case managers and other affiliated practitioners to direct care resources to the areas most in need.
Keywords: Geocoding, chronic disease management, quality of diabetes care, rate ratio.
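The rate ratios and 95% confidence intervals mentioned in the Method section are typically computed on the log scale, RR = (a/PT1)/(b/PT2) with SE(ln RR) ≈ sqrt(1/a + 1/b). A small sketch with made-up counts follows; it illustrates the calculation only, not the study's data.

```python
import math

def rate_ratio_ci(events_a, persontime_a, events_b, persontime_b, z=1.96):
    """Rate ratio of area A vs area B with a log-scale 95% confidence interval."""
    rr = (events_a / persontime_a) / (events_b / persontime_b)
    se_log = math.sqrt(1.0 / events_a + 1.0 / events_b)
    lo = rr * math.exp(-z * se_log)
    hi = rr * math.exp(z * se_log)
    return rr, lo, hi

# Hypothetical example: 24 nephropathy cases per 310 patient-years in one village
# versus 11 cases per 295 patient-years in another.
rr, lo, hi = rate_ratio_ci(24, 310, 11, 295)
print(f"RR = {rr:.2f} (95% CI {lo:.2f}-{hi:.2f})")
```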
1853 Deployment of Beyond 4G Wireless Communication Networks with Carrier Aggregation
Authors: Bahram Khan, Anderson Rocha Ramos, Rui R. Paulo, Fernando J. Velez
Abstract:
With the growing demand for a new blend of applications, users' dependency on the internet is increasing day by day. Mobile internet users are paying more attention to their own experience, especially in terms of communication reliability, high data rates and service stability on the move. This increase in demand is causing saturation of the existing radio frequency bands. To address these challenges, researchers are investigating the best approaches; Carrier Aggregation (CA) is one of the newest innovations, which seems to fulfill the demands of the future spectrum, and CA is also one of the most important features of Long Term Evolution - Advanced (LTE-Advanced). To meet the upcoming International Mobile Telecommunication Advanced (IMT-Advanced) requirements (1 Gb/s peak data rate), the CA scheme introduced by 3GPP can sustain a high data rate using widespread frequency bandwidth of up to 100 MHz. Technical issues such as the aggregation structure, its implementations, deployment scenarios, control signaling techniques, and challenges for the CA technique in LTE-Advanced, with consideration of backward compatibility, are highlighted in this paper. A performance evaluation in macro-cellular scenarios through a simulation approach is also presented, which shows the benefits of applying CA and low-complexity multi-band schedulers in terms of service quality and system capacity enhancement, and concludes that the enhanced multi-band scheduler is less complex than the general multi-band scheduler and performs better for cell radii longer than 1800 m (and a PLR threshold of 2%).
Keywords: Component carrier, carrier aggregation, LTE-Advanced, scheduling, spectrum management.
1852 Optimal Image Representation for Linear Canonical Transform Multiplexing
Authors: Navdeep Goel, Salvador Gabarda
Abstract:
Digital images are widely used in computer applications. Storing or transmitting uncompressed images requires considerable storage capacity and transmission bandwidth. Image compression is a means to perform transmission or storage of visual data in the most economical way. This paper explains how images can be encoded to be transmitted over a multiplexing time-frequency domain channel. Multiplexing involves packing together signals whose representations are compact in the working domain. In order to optimize transmission resources, each 4 × 4 pixel block of the image is transformed, by a suitable polynomial approximation, into a minimal number of coefficients. Using fewer than 4 × 4 coefficients per block spares a significant amount of transmitted information, but some information is lost. Different approximations for the image transformation have been evaluated: polynomial representation (Vandermonde matrix), least squares with gradient descent, 1-D Chebyshev polynomials, 2-D Chebyshev polynomials and singular value decomposition (SVD). Results have been compared in terms of nominal compression rate (NCR), compression ratio (CR) and peak signal-to-noise ratio (PSNR), in order to minimize the error function defined as the difference between the original pixel gray levels and the approximated polynomial output. The polynomial coefficients have been subsequently encoded and handled for generating chirps at a target rate of about two chirps per 4 × 4 pixel block, and then submitted to a transmission multiplexing operation in the time-frequency domain.
Keywords: Chirp signals, Image multiplexing, Image transformation, Linear canonical transform, Polynomial approximation.
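One of the block approximations evaluated above, SVD truncation of each 4 × 4 block, is easy to sketch: keeping only the leading singular component means a 16-pixel block is carried by 9 numbers (or fewer, depending on how the factors are encoded) instead of 16. The snippet below shows the truncation and the PSNR computation on a random test image; it is a simplified illustration, not the paper's full encoder or chirp multiplexer.

```python
import numpy as np

def approx_blocks_rank1(img):
    """Rank-1 SVD approximation of every 4x4 block (illustrative only)."""
    out = np.empty_like(img, dtype=float)
    h, w = img.shape
    for i in range(0, h, 4):
        for j in range(0, w, 4):
            block = img[i:i+4, j:j+4].astype(float)
            U, s, Vt = np.linalg.svd(block, full_matrices=False)
            out[i:i+4, j:j+4] = s[0] * np.outer(U[:, 0], Vt[0, :])
    return out

def psnr(original, approx, peak=255.0):
    mse = np.mean((original.astype(float) - approx) ** 2)
    return 10.0 * np.log10(peak ** 2 / mse)

rng = np.random.default_rng(1)
img = rng.integers(0, 256, size=(64, 64))      # stand-in for a real image
rec = approx_blocks_rank1(img)
print("PSNR (dB):", round(psnr(img, rec), 2))
```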
1851 Investigation of Improved Chaotic Signal Tracking by Echo State Neural Networks and Multilayer Perceptron via Training of Extended Kalman Filter Approach
Authors: Farhad Asadi, S. Hossein Sadati
Abstract:
This paper presents the prediction performance of feedforward Multilayer Perceptrons (MLP) and Echo State Networks (ESN) trained with an extended Kalman filter. Feedforward neural networks and ESNs are powerful neural networks which can track and predict nonlinear signals. However, their tracking performance depends on the specific signals or data sets, with a risk of instability accompanied by large errors. In this study we explore this process by applying different network sizes and leaking rates for the prediction of nonlinear or chaotic signals in MLP neural networks. Major problems of ESN training, such as the initialization of the network and the improvement of prediction performance, are tackled. The influence of the coefficient of the activation function in the hidden layer and other key parameters is investigated through simulation results. The extended Kalman filter is employed in order to improve the sequential and regulation learning rate of the feedforward neural networks. This training approach has vital features for training the network when signals have a chaotic or non-stationary sequential pattern. Minimization of the variance at each step of the computation, and hence smoothing of the tracking, was obtained by examining the results, indicating satisfactory tracking characteristics for certain conditions. In addition, the simulation results confirmed satisfactory performance of both neural networks with modified parameterization in tracking nonlinear signals.
Keywords: Feedforward neural networks, nonlinear signal prediction, echo state neural networks approach, leaking rates, capacity of neural networks.
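The leaking-rate parameter investigated above enters the standard echo state network state update, x(t+1) = (1−a)·x(t) + a·tanh(W_in u(t+1) + W x(t)). A bare-bones sketch of that reservoir update with a ridge-regression readout is given below on a synthetic signal; the EKF-based training studied in the paper is not reproduced here, and all sizes and rates are assumed values.

```python
import numpy as np

rng = np.random.default_rng(0)
n_res, leak, rho = 200, 0.3, 0.9          # reservoir size, leaking rate, spectral radius

W_in = rng.uniform(-0.5, 0.5, (n_res, 1))
W = rng.uniform(-0.5, 0.5, (n_res, n_res))
W *= rho / max(abs(np.linalg.eigvals(W)))  # rescale to the target spectral radius

def run_reservoir(u):
    """Collect leaky-integrator reservoir states for a 1-D input sequence u."""
    x = np.zeros(n_res)
    states = []
    for u_t in u:
        x = (1 - leak) * x + leak * np.tanh(W_in[:, 0] * u_t + W @ x)
        states.append(x.copy())
    return np.array(states)

# Teacher task: one-step-ahead prediction of a noisy sine (stand-in for a chaotic series)
t = np.arange(2000)
u = np.sin(0.05 * t) + 0.01 * rng.standard_normal(t.size)
X = run_reservoir(u[:-1])
y = u[1:]
ridge = 1e-6
W_out = np.linalg.solve(X.T @ X + ridge * np.eye(n_res), X.T @ y)  # ridge readout
print("train MSE:", np.mean((X @ W_out - y) ** 2))
```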
1850 Biomethanation of Palm Oil Mill Effluent (POME) by Membrane Anaerobic System (MAS) using POME as a Substrate
Authors: N.H. Abdurahman, Y. M. Rosli, N. H. Azhari, S. F. Tam
Abstract:
The direct discharge of palm oil mill effluent (POME) wastewater causes serious environmental pollution due to its high chemical oxygen demand (COD) and biochemical oxygen demand (BOD). Traditional approaches to POME treatment have both economic and environmental disadvantages. In this study, a membrane anaerobic system (MAS) was used as an alternative, cost-effective method for treating POME. Six steady states were attained as part of a kinetic study that considered concentration ranges of 8,220 to 15,400 mg/l for mixed liquor suspended solids (MLSS) and 6,329 to 13,244 mg/l for mixed liquor volatile suspended solids (MLVSS). Kinetic equations from Monod, Contois and Chen & Hashimoto were employed to describe the kinetics of POME treatment at organic loading rates ranging from 2 to 13 kg COD/m3/d. Throughout the experiment, the COD removal efficiency was 94.8 to 96.5%, with hydraulic retention times (HRT) from 400.6 to 5.7 days. The growth yield coefficient, Y, was found to be 0.62 g VSS/g COD, the specific microorganism decay rate was 0.21 d⁻¹, and the methane gas yield production rate was between 0.25 l/g COD/d and 0.58 l/g COD/d. Steady-state influent COD concentrations increased from 18,302 mg/l in the first steady state to 43,500 mg/l in the sixth steady state. The minimum solids retention time obtained from the three kinetic models ranged from 5 to 12.3 days. The k values were in the range of 0.35 – 0.519 g COD/g VSS·d and the corresponding μmax values were between 0.26 and 0.379 d⁻¹. The solids retention time (SRT) decreased from 800 days to 11.6 days. The complete treatment reduced the COD content to 2279 mg/l, equivalent to a 94.8% reduction from the original.
Keywords: COD reduction, POME, kinetics, membrane, anaerobic, Monod, Contois equation.
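For orientation, the Monod relation used in this type of kinetic analysis links the specific growth rate to the residual substrate concentration, μ = μmax·S/(Ks + S), and in a steady-state chemostat-like balance the effluent substrate follows from the solids retention time via 1/SRT = μ − Kd. The snippet below evaluates these textbook relations with hypothetical parameter values; it is not a re-derivation of the authors' fitted coefficients.

```python
def monod_mu(S, mu_max, Ks):
    """Specific growth rate (d^-1) from residual substrate S (g COD/L)."""
    return mu_max * S / (Ks + S)

def steady_state_substrate(SRT, mu_max, Ks, Kd):
    """Effluent substrate from the steady-state balance 1/SRT = mu - Kd."""
    growth_needed = 1.0 / SRT + Kd
    return Ks * growth_needed / (mu_max - growth_needed)

# Hypothetical parameters for illustration (not the fitted values above)
mu_max, Ks, Kd = 0.35, 0.45, 0.08      # d^-1, g COD/L, d^-1
for srt in (10, 20, 50):               # days
    S = steady_state_substrate(srt, mu_max, Ks, Kd)
    print(f"SRT {srt:>3} d -> effluent COD ~ {S*1000:.0f} mg/L, mu = {monod_mu(S, mu_max, Ks):.3f} d^-1")
```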
1849 Perceptual Framework for a Modern Left-Turn Collision Warning System
Authors: E. Dabbour, S. M. Easa
Abstract:
Most of the collision warning systems currently available in the automotive market are mainly designed to warn against imminent rear-end and lane-changing collisions. No collision warning system is commercially available to warn against imminent turning collisions at intersections, especially left-turn collisions, when a driver attempts to make a left turn at either a signalized or non-signalized intersection, conflicting with the path of other approaching vehicles traveling in the opposite-direction traffic stream. One of the major factors that lead to left-turn collisions is the human error and misjudgment of the driver of the turning vehicle when perceiving the speed and acceleration of other vehicles traveling in the opposite-direction traffic stream; therefore, using a properly designed collision warning system will likely reduce, or even eliminate, this type of collision by reducing human error. This paper introduces a perceptual framework for a proposed collision warning system that can detect imminent left-turn collisions at intersections. The system utilizes a commercially available detection sensor (either a radar sensor or a laser detector) to detect approaching vehicles traveling in the opposite-direction traffic stream and calculate their speeds and acceleration rates to estimate the time-to-collision and compare that time to the time required for the turning vehicle to clear the intersection. When calculating the time required for the turning vehicle to clear the intersection, consideration is given to the perception-reaction time of the driver of the turning vehicle, which is the time required by the driver to perceive the message given by the warning system and react to it by engaging the throttle. A regression model was developed to estimate perception-reaction time based on the age and gender of the driver of the host vehicle. The desired acceleration rate selected by the driver of the turning vehicle when making the left-turn movement is another human factor considered by the system. Another regression model was developed to estimate the acceleration rate selected by the driver of the turning vehicle based on the driver's age and gender as well as on the location and speed of the nearest approaching vehicle, along with the maximum acceleration rate provided by the mechanical characteristics of the turning vehicle. By comparing the time-to-collision with the time required for the turning vehicle to clear the intersection, the system displays a message to the driver of the turning vehicle when departure is safe. An application example is provided to illustrate the logic algorithm of the proposed system.
Keywords: Collision warning systems, intelligent transportation systems, vehicle safety.
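The decision rule described above compares the time-to-collision of the nearest opposing vehicle with the time the turning vehicle needs to clear the intersection, including the driver's perception-reaction time. A simplified sketch of that comparison under constant-acceleration kinematics is shown below; the regression models for perception-reaction time and desired acceleration are the authors' and are replaced here by assumed constants.

```python
import math

def time_to_collision(gap_m, speed_mps, accel_mps2=0.0):
    """Time for the opposing vehicle to cover the gap (constant acceleration)."""
    if abs(accel_mps2) < 1e-9:
        return gap_m / speed_mps
    # solve 0.5*a*t^2 + v*t - gap = 0 for the positive root
    disc = speed_mps ** 2 + 2.0 * accel_mps2 * gap_m
    return (-speed_mps + math.sqrt(disc)) / accel_mps2

def clearance_time(path_m, accel_mps2, perception_reaction_s):
    """Time for the turning vehicle to clear the conflict zone from a stop."""
    return perception_reaction_s + math.sqrt(2.0 * path_m / accel_mps2)

# Assumed values for illustration only
ttc = time_to_collision(gap_m=80.0, speed_mps=16.7, accel_mps2=0.3)
t_clear = clearance_time(path_m=18.0, accel_mps2=1.8, perception_reaction_s=1.6)
print(f"TTC = {ttc:.1f} s, clearance = {t_clear:.1f} s ->",
      "SAFE to turn" if t_clear < ttc else "WAIT")
```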
1848 Cubic Splines and Fourier Series Approach to Study Temperature Variation in Dermal Layers of Elliptical Shaped Human Limbs
Authors: Mamta Agrawal, Neeru Adlakha, K.R. Pardasani
Abstract:
An attempt has been made to develop a semi-numerical model to study temperature variations in the dermal layers of human limbs. The model has been developed for the two-dimensional steady-state case. The human limb has been assumed to have an elliptical cross section. The dermal region has been divided into three natural layers, namely the epidermis, dermis and subdermal tissues. The model incorporates the effect of important physiological parameters such as blood mass flow rate, metabolic heat generation, and the thermal conductivity of the tissues. The outer surface of the limb is exposed to the environment, and it is assumed that heat loss takes place at the outer surface by conduction, convection, radiation and evaporation. The temperature of the inner core of the limb also varies at lower atmospheric temperatures. Appropriate boundary conditions have been framed based on the physical conditions of the problem. A cubic splines approach has been employed along the radial direction and a Fourier series along the angular direction to obtain the solution. The numerical results have been computed for different values of eccentricity, corresponding to the elliptical cross sections of human limbs. The numerical results have been used to obtain the temperature profile and to study the relationships among the various physiological parameters.
Keywords: Blood Mass Flow Rate, Metabolic Heat Generation, Fourier Series, Cubic splines and Thermal Conductivity.
1847 Analysis of Supply Side Factors Affecting Bank Financing of Non-Oil Exports in Nigeria
Authors: Sama’ila Idi Ningi, Abubakar Yusuf Dutse
Abstract:
The banking sector poses many problems in Nigeria in general and for the non-oil export sector in particular. The banks' lack of effectiveness in handling small, medium or long-term credit risk (lack of training of loan officers, lack of information on borrowers and absence of a reliable credit registry) results in non-oil exporters being burdened with high requirements, such as up to three years of financial statements, enough collateral to cover both the loan principal and interest (including a cash deposit that may be up to 30% of the loan's net present value), and the provision of every detail of the international trade transaction in question. These problems triggered this research. Consequently, information on bank financing of non-oil exports was collected from 100 respondents across the 20 Deposit Money Banks (DMBs) in Nigeria. The data were analysed using descriptive statistics, correlation and regression. It is found that Nigerian banks participate in the financing of non-oil exports. Despite their participation, the rate of interest for credit extended to non-oil exports is usually high, ranging between 15-20%. Small and medium-sized non-oil export businesses lack the credit history needed for banks to judge them as reputable. Banks also consider the non-oil export sector very risky for investment. The banks in fact grant less credit than the exporters require, so exporters are not properly funded by banks. Banks grant a very low volume of foreign currency loans and, in addition, the exchange rate at which the Naira is exchanged for the Dollar and other currencies in the country is unfavorable. This makes the importation of inputs costly and has negatively impacted non-oil export performance in Nigeria.
Keywords: Supply Side Factors, Bank Financing, Non-Oil Exports.
1846 Effect of Segregation on the Reaction Rate of Sewage Sludge Pyrolysis in a Bubbling Fluidized Bed
Authors: A. Soria-Verdugo, A. Morato-Godino, L. M. García-Gutiérrez, N. García-Hernando
Abstract:
The evolution of the pyrolysis of sewage sludge in a fixed and a fluidized bed was analyzed using a novel measuring technique. This original measuring technique consists of installing the whole reactor on a precision scale capable of measuring the mass of the complete reactor with enough precision to detect the mass released by the sewage sludge sample during its pyrolysis. The inert conditions required for the pyrolysis process were obtained by supplying the bed with a nitrogen flow, and the bed temperature was adjusted to either 500 ºC or 600 ºC using a group of three electric resistors. The sewage sludge sample was supplied through the top of the bed in a batch of 10 g. The measurement of the mass released by the sewage sludge sample was used to determine the evolution of the reaction rate during pyrolysis, the total amount of volatile matter released, and the pyrolysis time. The pyrolysis tests of sewage sludge in the fluidized bed were conducted using two bed materials of the same size but different densities: silica sand and sepiolite particles. The higher density of the silica sand particles induces flotsam behavior in the sewage sludge particles, which move close to the bed surface. In contrast, the lower density of sepiolite produces neutrally-buoyant behavior for the sewage sludge particles, which in this case circulate properly throughout the whole bed. The analysis of the evolution of the pyrolysis process in both fluidized beds shows that the pyrolysis is faster when buoyancy effects are negligible, i.e. in the bed composed of sepiolite particles. Moreover, sepiolite was found to show an absorbent capability for the volatile matter released during the pyrolysis of sewage sludge.
Keywords: Bubbling fluidized bed, pyrolysis time, segregation effects, sewage sludge.
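The measuring technique described above yields a mass-versus-time record for the whole reactor, from which conversion and an apparent reaction rate follow by differencing. The sketch below shows that post-processing on synthetic mass-loss data; the data, the smoothing choice and the 95%-conversion definition of pyrolysis time are illustrative, not the experimental record.

```python
import numpy as np

# Synthetic mass-loss record: 10 g sample releasing ~6.5 g of volatiles (first-order-like decay)
t = np.linspace(0.0, 300.0, 301)                    # s
m_released = 6.5 * (1.0 - np.exp(-t / 60.0))        # g of volatiles released (stand-in data)

m_total = m_released[-1]                            # total volatiles released
conversion = m_released / m_total                   # X(t)
rate = np.gradient(conversion, t)                   # dX/dt, apparent reaction rate (1/s)

# Pyrolysis time defined here (arbitrarily) as the time to reach 95% conversion
t95 = t[np.searchsorted(conversion, 0.95)]
print(f"peak rate = {rate.max():.4f} 1/s, time to 95% conversion = {t95:.0f} s")
```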
1845 A CFD Study of Turbulent Convective Heat Transfer Enhancement in Circular Pipeflow
Authors: Perumal Kumar, Rajamohan Ganesan
Abstract:
The addition of milli- or micro-sized particles to the heat transfer fluid is one of the many techniques employed for improving the heat transfer rate. Though this looks simple, the method has practical problems such as high pressure loss, clogging and erosion of the material of construction. These problems can be overcome by using nanofluids, which are dispersions of nanosized particles in a base fluid. Nanoparticles increase the thermal conductivity of the base fluid manifold, which in turn increases the heat transfer rate. Nanoparticles also increase the viscosity of the base fluid, resulting in a higher pressure drop for the nanofluid compared to the base fluid. So it is imperative that the Reynolds number (Re) and the volume fraction be optimal for better thermal-hydraulic effectiveness. In this work, the heat transfer enhancement using aluminium oxide nanofluid at low and high volume fractions in turbulent pipe flow with constant wall temperature has been studied by computational fluid dynamics modeling of the nanofluid flow, adopting the single-phase approach. Nanofluid, up to a volume fraction of 1%, is found to be an effective heat transfer enhancement technique. The Nusselt number (Nu) and friction factor predictions for the low volume fractions (i.e. 0.02%, 0.1% and 0.5%) agree very well with the experimental values of Sundar and Sharma (2010), while predictions for the high volume fraction nanofluids (i.e. 1%, 4% and 6%) show reasonable agreement with both experimental and numerical results available in the literature. So the computationally inexpensive single-phase approach can be used for heat transfer and pressure drop prediction of new nanofluids.
Keywords: Heat transfer intensification, nanofluid, CFD, friction factor
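For the single-phase approach above, predictions are often benchmarked against classical correlations for turbulent pipe flow, such as Dittus–Boelter for the Nusselt number and Blasius for the friction factor, with the nanofluid's effective properties inserted. The sketch below applies those textbook correlations with hypothetical property values; it is a sanity check, not the paper's CFD model.

```python
def nusselt_dittus_boelter(Re, Pr):
    """Dittus-Boelter correlation for heating in turbulent pipe flow."""
    return 0.023 * Re ** 0.8 * Pr ** 0.4

def friction_blasius(Re):
    """Blasius correlation for the Darcy friction factor (Re below ~1e5)."""
    return 0.316 * Re ** -0.25

# Hypothetical effective properties of a dilute Al2O3-water nanofluid at Re = 20000
Re, Pr = 20000.0, 6.5
print("Nu =", round(nusselt_dittus_boelter(Re, Pr), 1))
print("f  =", round(friction_blasius(Re), 4))
```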
1844 Computational Methods in Official Statistics with an Example on Calculating and Predicting Diabetes Mellitus [DM] Prevalence in Different Age Groups within Australia in Future Years, in Light of the Aging Population
Authors: D. Hilton
Abstract:
An analysis of the Australian Diabetes Screening Study estimated the prevalence of undiagnosed diabetes mellitus [DM] in a high-risk, general-practice-based cohort. DM prevalence varied from 9.4% to 18.1% depending upon the diagnostic criteria utilised, with age being a highly significant risk factor. Utilising the gold-standard oral glucose tolerance test, the prevalence of DM was 22-23% in those aged >= 70 years and <15% in those aged 40-59 years. Opportunistic screening in Australian general practice can potentially identify many persons with undiagnosed type 2 DM. An Australian Bureau of Statistics document published three years ago reported the highest rate of DM in men aged 65-74 years [19%], whereas the rate for women was highest in those over 75 years [13%]. Considering that the Australian Bureau of Statistics reported in 2007 that 13% of the population was over 65 years of age, and that this share is projected to increase to 23-25% by 2056 and further to 25-28% by 2101, this information obviously has to be factored into the equation when age-related diabetes prevalence predictions are calculated. This 10-15 percentage-point increase in the proportion of elderly persons within the population has dramatic implications for the estimated number of elderly persons with DM in these age groups. Computational methods reflecting the age-related demographic changes reported in these official statistical documents are applied to produce estimates for 2056 and 2101 for the different age groups. This has relevance for future diabetes prevalence rates and shows that, along with many countries worldwide, Australia is facing an increasing pandemic. In contrast, Japan is expected to see a decrease in the number of persons with diabetes over the next twenty years.
Keywords: Epidemiological methods, aging, prevalence.
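The core computation described above is a weighted sum: apply age-group-specific prevalence rates to the projected age structure of the population. The sketch below does this with illustrative rates and population shares (loosely inspired by the figures quoted, but not the ABS data themselves).

```python
# Project the crude diabetes prevalence under a changing age structure.
# All rates and population shares below are illustrative placeholders.
prevalence_by_age = {"<40": 0.02, "40-59": 0.07, "60-64": 0.13, "65-74": 0.19, "75+": 0.15}

age_structure = {
    2007: {"<40": 0.55, "40-59": 0.27, "60-64": 0.05, "65-74": 0.08, "75+": 0.05},
    2056: {"<40": 0.45, "40-59": 0.26, "60-64": 0.05, "65-74": 0.12, "75+": 0.12},
}

def crude_prevalence(year):
    shares = age_structure[year]
    return sum(shares[g] * prevalence_by_age[g] for g in shares)

for year in sorted(age_structure):
    print(year, f"projected crude prevalence ~ {crude_prevalence(year):.1%}")
```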
1843 Instability of Soliton Solutions to the Schamel-nonlinear Schrödinger Equation
Authors: Sarun Phibanchon, Michael A. Allen
Abstract:
A variational method is used to obtain the growth rate of a transverse long-wavelength perturbation applied to the soliton solution of a nonlinear Schrödinger equation with a three-half order potential. We demonstrate numerically that this unstable perturbed soliton will eventually transform into a cylindrical soliton.
Keywords: Soliton, instability, variational method, spectral method.
1842 Women's Employment Issues in Georgia and Solutions Based on European Experience
Authors: N. Damenia, E. Kharaishvili, N. Sagareishvili, M. Saghareishvili
Abstract:
Women's employment is one of the most important issues in the global economy. The article discusses this topic for Georgia, through its historical context, the Soviet experience, and modern perspectives. The paper discusses segmentation in terms of employment and the related problems. Based on statistical analysis, women's unemployment rate and its determining factors are analyzed. The level of employment of women in Transcaucasia (Georgia, Armenia, and Azerbaijan) is discussed and compared with the Baltic countries (Lithuania, Latvia, and Estonia). The study analyzes women's level of development according to the average age of marriage and the level of migration. The focus is on Georgia's Association Agreement with the EU in 2014, which includes economic, social, trade and political issues; one part of it is gender equality in the workplace. According to the research, the average monthly remuneration of women managers in the financial and insurance sector equaled 1044.6 Georgian Lari (GEL), while the average monthly remuneration in the overall business sector equaled 961.1 GEL. Average salaries are increasing; however, the employment rate remains problematic. For example, in 2017, 74.6% of men and 50.8% of women in the total workforce were employed. It is also notable that the proportion of women to men in managerial positions is 29% to 71%. Based on the results, the main recommendation for government and civil society is to treat women as a part of the country's economic development. In this respect, the experience of developed countries should be considered. It is important to create additional jobs in urban and rural areas and to help migrant women return and use their working resources properly.
Keywords: Employment of women, segregation in terms of employment, women's employment level in Transcaucasia, migration level.
1841 Performance Evaluation of Filtration System for Groundwater Recharging Well in the Presence of Medium Sand-Mixed Storm Water
Authors: Krishna Kumar Singh, Praveen Jain
Abstract:
Collecting storm water runoff and forcing it into the ground is needed to sustain the groundwater table. However, the runoff entraps various types of sediments and other floating objects whose removal is essential to avoid pollution of the groundwater and blocking of the pores of the aquifer; the recharge system therefore requires regular cleaning and maintenance due to the problem of clogging. To evaluate the performance of a filter system consisting of coarse sand (CS), gravel (G) and pebble (P) layers, a laboratory experiment was conducted in a rectangular column. The effects of variable thicknesses of the CS, G and P layers of the filtration unit of the recharge shaft on the recharge rate and the sediment concentration of the effluent water were evaluated. Medium sand (MS) of three particle sizes, viz. 0.150–0.300 mm (T1), 0.300–0.425 mm (T2) and 0.425–0.600 mm (T3), with thicknesses of 25 cm, 30 cm and 35 cm, respectively, in the top layer of the filter system, and seven influent sediment concentrations of 250–3,000 mg/l, were used for the experimental study. The performance was evaluated in terms of recharge rates and clogging time. The results indicated that 100% of the suspended solids were entrapped in the upper 10 cm layer of MS, and the recharge rates declined sharply for influent concentrations of more than 1,000 mg/l. Treatments with a greater thickness of MS media showed slightly higher recharge rates than the corresponding treatments with a lower thickness of MS media. The performance of storm water infiltration systems was highly dependent on the formation of a clogging layer at the filter. An empirical relationship has been derived between the recharge rate, the inflow sediment load, the MS particle size and the MS thickness using multiple linear regression (MLR).
Keywords: Groundwater, medium sand-mixed storm water filter, inflow sediment load.
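The empirical relationship mentioned at the end is a multiple linear regression of recharge rate on inflow sediment load, MS particle size and MS layer thickness. A minimal least-squares sketch with fabricated observations is shown below; the fitted coefficients are purely illustrative, not the study's result.

```python
import numpy as np

# Fabricated observations: sediment load (mg/L), MS size (mm), MS thickness (cm), recharge rate (L/h)
data = np.array([
    [250,  0.225, 25, 58], [500,  0.225, 30, 52], [1000, 0.225, 35, 44],
    [250,  0.363, 25, 66], [1000, 0.363, 30, 55], [2000, 0.363, 35, 41],
    [500,  0.513, 25, 70], [2000, 0.513, 30, 49], [3000, 0.513, 35, 38],
], dtype=float)

X = np.column_stack([np.ones(len(data)), data[:, :3]])   # intercept + 3 predictors
y = data[:, 3]
beta, *_ = np.linalg.lstsq(X, y, rcond=None)

y_hat = X @ beta
r2 = 1 - np.sum((y - y_hat) ** 2) / np.sum((y - y.mean()) ** 2)
print("intercept, b_sediment, b_size, b_thickness:", np.round(beta, 4))
print("R^2 =", round(r2, 3))
```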