Search results for: HEPA filters
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 335

275 An Algorithm for Removal of Noise from X-Ray Images

Authors: Sajidullah Khan, Najeeb Ullah, Wang Yin Chai, Chai Soo See

Abstract:

In this paper, we propose an approach to remove impulse and Poisson noise from X-ray images. Many filters have been used to remove impulse noise from color and grayscale images, each with its own strengths and weaknesses, but X-ray images also contain Poisson noise, and no existing filter detects both impulse and Poisson noise in X-ray images. Our proposed filter uses an upgraded layer-discrimination approach to detect the pixels corrupted by impulse or Poisson noise in X-ray images and then restores only those detected pixels with a simple, efficient, and reliable one-line equation. The proposed algorithm is effective and considerably more efficient than existing filters designed only for impulse noise removal. It relies on a new, powerful, and efficient noise detection method to determine whether the pixel under observation is corrupted or noise-free. Results from computer simulations demonstrate the good performance of the proposed method.

Keywords: X-ray image de-noising, impulse noise, Poisson noise, PRWF

Procedia PDF Downloads 383
274 Enhancing Water Purification with Angiosperm Xylem Filters

Authors: Yinan Zhou

Abstract:

One in four people in the world still lacks access to clean drinking water, and water-scarce regions currently lack cost-effective ways to obtain it. This study investigates solutions for water filtration in rural China and tests the feasibility of using angiosperm xylem as a filter material. Four angiosperms found in China and across Asia were subjected to three tests of their filtration capacity: ink filtration, creek water filtration, and microparticle filtration. Analysis of the experiments showed that Celtis sinensis produced one of the clearest filtrates, removed large debris and bacteria, and rejected microparticles almost completely. Celtis sinensis demonstrates that angiosperm xylem filters are competent filter candidates and, owing to their availability in China, can serve as a local source of water filtration. Further research should address scaling up production and the filtration of viruses.

Keywords: xylem filter, water quality, China, angiosperms, bacteria

Procedia PDF Downloads 11
273 Is the Okun's Law Valid in Tunisia?

Authors: El Andari Chifaa, Bouaziz Rached

Abstract:

The central focus of this paper is to check whether Okun's law is valid in Tunisia. For this purpose, we used quarterly time-series data over the period 1990Q1-2014Q1. Firstly, we applied an error correction model instead of the difference version of Okun's law: the Engle-Granger and Johansen tests are employed to find the long-run association between unemployment and production, and an error correction mechanism (ECM) is used for the short-run dynamics. Secondly, we used the gap version of Okun's law, where the estimation is carried out with three band-pass filters, mathematical tools used in macroeconomics and especially in business cycle theory. The findings indicate that the inverse relationship between unemployment and output holds in both the short and the long run, and that Okun's law is valid for the Tunisian economy, albeit with an Okun coefficient lower than expected. Our empirical results therefore have important implications for structural and cyclical policymakers in Tunisia seeking to promote economic growth while containing unemployment.
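For readers unfamiliar with the gap version of Okun's law, the relation being estimated takes the standard form below; the notation is generic (trend series obtained here from the band-pass filters), not the authors' own equation.

```latex
u_t - u_t^{*} \;=\; \beta\,(y_t - y_t^{*}) + \varepsilon_t , \qquad \beta < 0
```

Here u_t is the unemployment rate, y_t is (log) real output, starred variables denote the filtered trend components, and β is the Okun coefficient whose estimated magnitude the abstract reports as lower than expected.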

Keywords: Okun’s law, validity, unit root, cointegration, error correction model, bandpass filters

Procedia PDF Downloads 317
272 Standardized Testing of Filter Systems regarding Their Separation Efficiency in Terms of Allergenic Particles and Airborne Germs

Authors: Johannes Mertl

Abstract:

Our surrounding air contains various particles. Besides typical representatives of inorganic dust, such as soot and ash, particles originating from animals, microorganisms, or plants also float through the air; these are the so-called bioaerosols. The group of bioaerosols covers a broad spectrum of particle sizes and includes fungi, bacteria, viruses, spores, and tree, flower, and grass pollen of high relevance for allergy sufferers. Depending on the environmental climate and the actual season, these allergenic particles can be found in enormous numbers in the air and are inhaled by humans via the respiratory tract, with the potential to trigger inflammatory diseases of the airways such as asthma or allergic rhinitis. As a consequence, the air filter systems of ventilation and air-conditioning devices must meet very high standards to prevent, or at least lower, the number of allergens and airborne germs entering the indoor air. Still, filter systems are classified for their separation rates only with well-defined mineral test dusts, while no appropriately standardized test methods for bioaerosols exist. However, separation rates determined for mineral test particles of a certain size cannot simply be transferred to bioaerosols, as the separation efficiency of particularly fine and respirable particles (< 10 microns) depends not only on their shape and particle diameter but also on their density and physicochemical properties. For this reason, the OFI developed a test method that directly enables testing of filters and filter media for their separation rates on bioaerosols, as well as a classification of filters. Besides allergens from intact or fractured tree or grass pollen, allergenic proteins bound to particulates, allergenic fungal spores (e.g., Cladosporium cladosporioides), or bacteria can be used to classify filters by their separation rates. Allergens passing through the filter can then be detected by highly sensitive immunological assays (ELISA) or, in the case of fungal spores, by microbiological methods, which allow the detection of even a single spore passing the filter. The test procedure, which is carried out at laboratory scale, was furthermore validated for its ability to cover real-life situations by upscaling to air-conditioning devices, showing good conformity in terms of separation rates. Additionally, a clinical study with allergy sufferers was performed to verify the analytical results. Several air-conditioning filters from the car industry have been tested, showing significant differences in their separation rates.

Keywords: airborne germs, allergens, classification of filters, fine dust

Procedia PDF Downloads 256
271 Frequency Transformation with Pascal Matrix Equations

Authors: Phuoc Si Nguyen

Abstract:

Frequency transformation with Pascal matrix equations is a method for transforming an electronic filter (analogue or digital) into another filter. The technique is based on frequency transformation in the s-domain, the bilinear z-transform with pre-warping frequency, the inverse bilinear transformation, and a very useful application of Pascal's triangle that simplifies computing and enables calculation by hand when transforming from one filter to another. This paper introduces two methods to transform a filter into a digital filter: frequency transformation from the s-domain into the z-domain, and frequency transformation in the z-domain. Further, two Pascal matrix equations are derived: an analogue-to-digital filter Pascal matrix equation and a digital-to-digital filter Pascal matrix equation. These are used to design a desired digital filter from a given filter.
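As background for the transformations used here, the bilinear z-transform and its pre-warping relation take the standard form below (generic textbook notation, not the paper's Pascal-matrix formulation itself):

```latex
s \;=\; \frac{2}{T}\,\frac{1 - z^{-1}}{1 + z^{-1}}, \qquad
\omega_a \;=\; \frac{2}{T}\,\tan\!\left(\frac{\omega_d T}{2}\right)
```

where T is the sampling period and ω_a is the analogue frequency mapped to the digital frequency ω_d. Expanding the powers of (1 - z⁻¹) and (1 + z⁻¹) that arise when this substitution is applied to a rational transfer function produces binomial coefficients, which is what allows the coefficient mapping to be organized with Pascal's triangle.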

Keywords: frequency transformation, bilinear z-transformation, pre-warping frequency, digital filters, analog filters, Pascal's triangle

Procedia PDF Downloads 549
270 Astronomical Object Classification

Authors: Alina Muradyan, Lina Babayan, Arsen Nanyan, Gohar Galstyan, Vigen Khachatryan

Abstract:

We present a photometric method for identifying stars, galaxies, and quasars in multi-color surveys, which uses a library of more than 65,000 color templates for comparison with observed objects. The method aims at extracting the information content of object colors in a statistically correct way, and performs a classification as well as a redshift estimation for galaxies and quasars in a unified approach based on the same probability density functions. For the redshift estimation, we employ an advanced version of the Minimum Error Variance estimator, which determines the redshift error from the redshift-dependent probability density function itself. The method was originally developed for the Calar Alto Deep Imaging Survey (CADIS), but is now used in a wide variety of survey projects. We checked its performance by spectroscopy of CADIS objects, where the method provides high reliability (6 errors among 151 objects with R < 24), especially for the quasar selection, and redshifts accurate within σz ≈ 0.03 for galaxies and σz ≈ 0.1 for quasars. For an optimization of future survey efforts, a few model surveys are compared, which are designed to use the same total amount of telescope time but different sets of broad-band and medium-band filters. Their performance is investigated by Monte-Carlo simulations as well as by analytic evaluation in terms of classification and redshift estimation. If photon noise were the only error source, broad-band surveys and medium-band surveys should perform equally well, as long as they provide the same spectral coverage. In practice, medium-band surveys show superior performance due to their higher tolerance for calibration errors and cosmic variance. Finally, we discuss the relevance of color calibration and derive important conclusions for the issues of library design and choice of filters. The calibration accuracy poses strong constraints on an accurate classification, which are most critical for surveys with few, broad, and deeply exposed filters, but less severe for surveys with many, narrow, and less deep filters.

Keywords: VO, ArVO, DFBS, FITS, image processing, data analysis

Procedia PDF Downloads 80
269 Harmonic Assessment and Mitigation in Medical Diagnosis Equipment

Authors: S. S. Adamu, H. S. Muhammad, D. S. Shuaibu

Abstract:

Poor power quality in electrical power systems can cause medical equipment at healthcare centres to malfunction and produce wrong medical diagnoses. Equipment such as X-ray and computerized axial tomography machines can pollute the system due to their high level of harmonic production, which may cause a number of undesirable effects such as heating, equipment damage, and electromagnetic interference. The conventional mitigation approach uses passive inductor/capacitor (LC) filters, which have drawbacks such as large size, resonance problems, and fixed compensation behaviour. Current solutions generally employ active power filters with suitable control algorithms. This work focuses on assessing the level of Total Harmonic Distortion (THD) in medical facilities and on various ways of mitigating it, using the radiology unit of an existing hospital as a case study. The harmonics are measured with a power quality analyzer at the point of common coupling (PCC). The measured THD levels are found to be higher than the IEEE 519-1992 standard limits. The system is then modelled as a harmonic current source in MATLAB/SIMULINK. To mitigate the unwanted harmonic currents, a shunt active filter is developed using the synchronous detection algorithm to extract the fundamental component of the source currents. A fuzzy logic controller is then developed to control the filter. The THD without the active power filter is validated against the measured values. The THD with the developed filter shows that the harmonics are now within the recommended limits.
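The synchronous detection step mentioned above can be illustrated with a minimal single-phase sketch: the in-phase fundamental source current is reconstructed from the average real power, and everything else becomes the compensation reference for the shunt active filter. The signals and values below are hypothetical, not the radiology-unit measurements, and the fuzzy logic control loop is omitted.

```python
import numpy as np

fs, f0 = 10_000, 50                          # sampling rate [Hz], grid frequency [Hz]
t = np.arange(0, 0.2, 1 / fs)                # 10 full cycles
v = 230 * np.sqrt(2) * np.sin(2 * np.pi * f0 * t)        # PCC voltage
i_load = (10 * np.sin(2 * np.pi * f0 * t)                # fundamental load current
          + 3 * np.sin(2 * np.pi * 3 * f0 * t)           # 3rd harmonic
          + 2 * np.sin(2 * np.pi * 5 * f0 * t))          # 5th harmonic

p_avg = np.mean(v * i_load)                  # average real power over the window
i_src_ref = (p_avg / np.mean(v ** 2)) * v    # fundamental, in-phase source current reference
i_comp_ref = i_load - i_src_ref              # harmonic current the shunt filter must inject

print(f"fundamental amplitude recovered: {i_src_ref.max():.2f} A")   # about 10 A
print(f"peak compensation current: {i_comp_ref.max():.2f} A")
```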

Keywords: power quality, total harmonic distortion, shunt active filters, fuzzy logic

Procedia PDF Downloads 479
268 Operation Parameters of Vacuum Cleaned Filters

Authors: Wilhelm Hoeflinger, Thomas Laminger, Johannes Wolfslehner

Abstract:

For vacuum-cleaned dust filters, used, e.g., in the textile industry, no calculation methods exist for determining design parameters (e.g., traverse speed of the nozzle, filter area). In this work, a method to calculate the optimum traverse speed of the nozzle of an industrial-size flat dust filter at a given mean pressure drop and filter face velocity was elaborated. Well-known equations for the design of a cleanable multi-chamber bag-house filter were modified in order to take into account the continuous regeneration of a dust filter by a nozzle. This requires the specific filter medium resistance and the specific cake resistance, which can be derived from filter tests under constant operating conditions. A lab-scale filter test rig was used to derive the specific filter media resistance value and the specific cake resistance value for vacuum-cleaned filter operation. Three different filter media were tested and the determined parameters were compared to each other.
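The two resistance parameters referred to enter the usual cake-filtration pressure-drop relation, which in generic textbook notation (not necessarily the exact modified form used in the paper) reads:

```latex
\Delta p \;=\; K_1\, v \;+\; K_2\, W\, v
```

where v is the filter face velocity, W the areal dust load remaining on the medium, K₁ the filter medium resistance, and K₂ the specific cake resistance; continuous nozzle cleaning enters through the dust load W(t) left between nozzle passes, which is what links the traverse speed to the mean pressure drop.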

Keywords: design of dust filter, dust removing, filter regeneration, operation parameters

Procedia PDF Downloads 388
267 Human Lens Metabolome: A Combined LC-MS and NMR Study

Authors: Vadim V. Yanshole, Lyudmila V. Yanshole, Alexey S. Kiryutin, Timofey D. Verkhovod, Yuri P. Tsentalovich

Abstract:

Cataract, or clouding of the eye lens, is the leading cause of vision impairment in the world. The lens tissue has a very specific structure: it has no vascular system, and the lens proteins (crystallins) do not turn over throughout the lifespan. The protection of lens proteins is provided by metabolites which diffuse into the lens from the aqueous humor or are synthesized in the lens epithelial layer. Therefore, the study of changes in the metabolite composition of a cataractous lens as compared to a normal lens may elucidate possible mechanisms of cataract formation. Quantitative metabolomic profiles of normal and cataractous human lenses were obtained with the combined use of high-frequency nuclear magnetic resonance (NMR) and ion-pairing high-performance liquid chromatography with high-resolution mass-spectrometric detection (LC-MS). The quantitative content of more than fifty metabolites has been determined in this work for normal aged and cataractous human lenses. The most abundant metabolites in the normal lens are myo-inositol, lactate, creatine, glutathione, glutamate, and glucose. For the majority of metabolites, the levels in the lens cortex and nucleus are similar, with a few exceptions including antioxidants and UV filters: the concentrations of glutathione, ascorbate, and NAD in the lens nucleus decrease as compared to the cortex, while the levels of the secondary UV filters formed from primary UV filters in redox processes increase. This confirms that the lens core is metabolically inert, and that metabolic activity in the lens nucleus is mostly restricted to protection from the oxidative stress caused by UV irradiation, spontaneous UV filter decomposition, or other factors. It was found that the metabolomic compositions of normal and age-matched cataractous human lenses differ significantly. The content of the most important metabolites (antioxidants, UV filters, and osmolytes) in the cataractous nucleus is at least tenfold lower than in the normal nucleus. One may suppose that the majority of these metabolites are synthesized in the lens epithelial layer, and that age-related cataractogenesis might originate from the dysfunction of the lens epithelial cells. Comprehensive quantitative metabolic profiles of the human eye lens have been acquired for the first time. The obtained data can be used for the analysis of changes in the lens chemical composition occurring with age and with cataract development.

Keywords: cataract, lens, NMR, LC-MS, metabolome

Procedia PDF Downloads 324
266 Automatic Target Recognition in SAR Images Based on Sparse Representation Technique

Authors: Ahmet Karagoz, Irfan Karagoz

Abstract:

Synthetic Aperture Radar (SAR) is a radar mechanism that can be integrated into manned and unmanned aerial vehicles to create high-resolution images in all weather conditions, regardless of day and night. In this study, SAR images of military vehicles with different azimuth and depression angles are pre-processed in the first stage. The main purpose here is to reduce the high speckle noise found in SAR images. For this, the adaptive Wiener filter, the mean filter, and the median filter are used to reduce the amount of speckle noise in the images without causing loss of data. During the image segmentation phase, pixel values are ordered so that the target vehicle region is separated from other regions containing unnecessary information. The target image is thresholded by setting the brightest 20% of pixel values to 255 and the other pixel values to 0. In addition, a segmentation comparison is performed using appropriate parameters of the statistical region merging algorithm. In the feature extraction step, the feature vectors belonging to the vehicles are obtained by using Gabor filters with different orientation, frequency, and angle values. A number of Gabor filters are created by changing the orientation, frequency, and angle parameters to extract the important, distinctive features of the images. Finally, the images are classified by the sparse representation method, using l₁-norm analysis. A joint dictionary of the feature vectors generated from the target images of the military vehicle types is assembled column by column and transformed into matrix form. To classify the vehicles, the test image of each vehicle is converted to vector form, and l₁-norm analysis of the sparse representation method is applied against the existing dictionary matrix. As a result, correct recognition has been performed by matching the target images of military vehicles with the test images by means of the sparse representation method. A classification success rate of 97% is obtained for SAR images of different military vehicle types.
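A minimal sketch of sparse representation classification (SRC) as described, using an l₁ solver from scikit-learn, is given below. The dictionary, feature dimensions, and class labels are random placeholders; in the paper the dictionary columns would be Gabor-filter feature vectors of the SAR target chips.

```python
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)
n_feat, n_per_class, n_classes = 64, 20, 3
A = rng.standard_normal((n_feat, n_per_class * n_classes))   # training dictionary (columns = samples)
A /= np.linalg.norm(A, axis=0)                                # unit-norm atoms
labels = np.repeat(np.arange(n_classes), n_per_class)         # class of each dictionary column

y = A[:, 5] + 0.05 * rng.standard_normal(n_feat)              # noisy test vector (true class 0)

x = Lasso(alpha=0.01, fit_intercept=False, max_iter=10_000).fit(A, y).coef_   # l1-penalised code

residuals = [np.linalg.norm(y - A @ np.where(labels == c, x, 0.0))            # class-wise residual
             for c in range(n_classes)]
print("predicted class:", int(np.argmin(residuals)))          # smallest residual wins
```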

Keywords: automatic target recognition, sparse representation, image classification, SAR images

Procedia PDF Downloads 367
265 Foggy Image Restoration Using Neural Network

Authors: Khader S. Al-Aidmat, Venus W. Samawi

Abstract:

Blurred vision in a misty atmosphere is an essential problem that needs to be resolved. To solve this problem, we developed a technique to restore the original scene from its foggy, degraded version using a back-propagation neural network (BP-NN). The suggested technique is based on a mapping between the foggy scene and its corresponding original scene. Seven different approaches are suggested, based on the type of features used in image restoration. Features are extracted from the spatial and spatial-frequency domains (using the DCT). Each of these approaches comes with its own BP-NN architecture, depending on the type and number of features used. The weight matrix resulting from training each BP-NN represents a fog filter. The performance of these filters is evaluated empirically (using PSNR) and perceptually. By comparing the performance of these filters, the effective features that suit the BP-NN technique for restoring foggy images are identified. This system proved its effectiveness and success in restoring moderately foggy images.

Keywords: artificial neural network, discrete cosine transform, feed forward neural network, foggy image restoration

Procedia PDF Downloads 384
264 Computer-Aided Detection of Liver and Spleen from CT Scans using Watershed Algorithm

Authors: Belgherbi Aicha, Bessaid Abdelhafid

Abstract:

In recent years, a great deal of research work has been devoted to the development of semi-automatic and automatic techniques for the analysis of abdominal CT images. The first and fundamental step in all these studies is semi-automatic liver and spleen segmentation, which is still an open problem. In this paper, a semi-automatic liver and spleen segmentation method based on mathematical morphology and the watershed algorithm is proposed. Our algorithm proceeds in two parts. In the first, we determine the region of interest by applying morphological operations to extract the liver and spleen. The second step improves the quality of the image gradient: we propose a method to reduce the over-segmentation problem by applying spatial filters followed by morphological filters, and thereafter we proceed to the segmentation of the liver and spleen. The aim of this work is to develop a semi-automatic segmentation method for the liver and spleen based on the watershed algorithm, to improve the accuracy and robustness of the segmentation, and to evaluate the new semi-automatic approach against manual liver segmentation. To validate the proposed segmentation technique, we have tested it on several images. Our segmentation approach is evaluated by comparing our results with manual segmentation performed by an expert. The experimental results are described in the last part of this work. The system has been evaluated by computing the sensitivity and specificity between the semi-automatically segmented contours (liver and spleen) and the contours manually traced by radiological experts. Liver segmentation achieved a sensitivity of 96% and a specificity of 99%; spleen segmentation achieved similar, promising results, with a sensitivity of 95% and a specificity of 99%.
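The reported figures follow the usual voxel-wise definitions, stated here for completeness with TP, TN, FP, and FN denoting true/false positive/negative voxel counts relative to the expert contour:

```latex
\mathrm{Sensitivity} = \frac{TP}{TP + FN}, \qquad
\mathrm{Specificity} = \frac{TN}{TN + FP}
```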

Keywords: CT images, liver and spleen segmentation, anisotropic diffusion filter, morphological filters, watershed algorithm

Procedia PDF Downloads 325
263 Software Verification of Systematic Resampling for Optimization of Particle Filters

Authors: Osiris Terry, Kenneth Hopkinson, Laura Humphrey

Abstract:

Systematic resampling is the most popular resampling method in particle filters. This paper seeks to further the understanding of systematic resampling by defining a formula in terms of the variables of the sampling equation and the particle weights. The formula is then verified via SPARK, a software verification language. The verified systematic resampling formula states that the minimum/maximum number of samples that can be taken of a particle is equal to the floor/ceiling value of the particle weight divided by the sampling interval, respectively. This allows for the creation of a randomness spectrum within which each resampling method falls. Methods on the lower end, e.g., systematic resampling, have less randomness and are thus quicker to reach an estimate. Lower randomness, however, introduces a bias towards particles with large weights, and this bias creates vulnerabilities to noise in the environment, e.g., jamming. In conclusion, this is a first step towards characterizing each resampling method, which will allow target-tracking engineers to pick the best resampling method for their environment instead of choosing the most popular one.
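A minimal, illustrative sketch of systematic resampling (not the authors' SPARK-verified code) is shown below; with N particles and normalized weights w, the number of copies drawn of particle i is floor(N·w[i]) or ceil(N·w[i]), which is the bound discussed above.

```python
import numpy as np

def systematic_resample(weights, rng=np.random.default_rng()):
    n = len(weights)
    positions = (rng.random() + np.arange(n)) / n   # one random offset, then an evenly spaced comb
    cumulative = np.cumsum(weights)
    cumulative[-1] = 1.0                            # guard against floating-point round-off
    return np.searchsorted(cumulative, positions)   # indices of the selected particles

w = np.array([0.05, 0.35, 0.10, 0.50])
counts = np.bincount(systematic_resample(w), minlength=len(w))
print(counts)                                       # each count is floor or ceil of len(w) * w
assert np.all(np.floor(w * len(w)) <= counts) and np.all(counts <= np.ceil(w * len(w)))
```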

Keywords: SPARK, software verification, resampling, systematic resampling, particle filter, tracking

Procedia PDF Downloads 84
262 A Fast Convergence Subband BSS Structure

Authors: Salah Al-Din I. Badran, Samad Ahmadi, Ismail Shahin

Abstract:

A blind source separation method is proposed; in this method, we use a non-uniform filter bank and a novel normalisation. This method provides reduced computational complexity and increased convergence speed compared to the full-band algorithm. Recently, adaptive sub-band schemes have been recommended to solve two problems: reducing the computational complexity and increasing the convergence speed of adaptive algorithms for correlated input signals. In this work, the reduction in computational complexity is achieved by using adaptive filters of orders lower than that of the full-band adaptive filter, operating at a sampling rate lower than the sampling rate of the input signal. The signals decomposed by the analysis filter bank are less correlated in each sub-band than the input signal at full bandwidth, which promotes better rates of convergence.

Keywords: blind source separation, computational complexity, subband, convergence speed, mixture

Procedia PDF Downloads 555
261 The Direct Deconvolutional Model in the Large-Eddy Simulation of Turbulence

Authors: Ning Chang, Zelong Yuan, Yunpeng Wang, Jianchun Wang

Abstract:

The utilization of Large Eddy Simulation (LES) has been extensive in turbulence research. LES concentrates on resolving the significant grid-scale motions while representing smaller scales through subfilter-scale (SFS) models. The deconvolution model, among the available SFS models, has proven successful in LES of engineering and geophysical flows. Nevertheless, the thorough investigation of how sub-filter scale dynamics and filter anisotropy affect SFS modeling accuracy remains lacking. The outcomes of LES are significantly influenced by filter selection and grid anisotropy, factors that have not been adequately addressed in earlier studies. This study examines two crucial aspects of LES: Firstly, the accuracy of direct deconvolution models (DDM) is evaluated concerning sub-filter scale (SFS) dynamics across varying filter-to-grid ratios (FGR) in isotropic turbulence. Various invertible filters are employed, including Gaussian, Helmholtz I and II, Butterworth, Chebyshev I and II, Cauchy, Pao, and rapidly decaying filters. The importance of FGR becomes evident as it plays a critical role in controlling errors for precise SFS stress prediction. When FGR is set to 1, the DDM models struggle to faithfully reconstruct SFS stress due to inadequate resolution of SFS dynamics. Notably, prediction accuracy improves when FGR is set to 2, leading to accurate reconstruction of SFS stress, except for cases involving Helmholtz I and II filters. Remarkably high precision, nearly 100%, is achieved at an FGR of 4 for all DDM models. Furthermore, the study extends to filter anisotropy and its impact on SFS dynamics and LES accuracy. By utilizing the dynamic Smagorinsky model (DSM), dynamic mixed model (DMM), and direct deconvolution model (DDM) with anisotropic filters, aspect ratios (AR) ranging from 1 to 16 are examined in LES filters. The results emphasize the DDM’s proficiency in accurately predicting SFS stresses under highly anisotropic filtering conditions. Notably high correlation coefficients exceeding 90% are observed in the a priori study for the DDM’s reconstructed SFS stresses, surpassing those of the DSM and DMM models. However, these correlations tend to decrease as filter anisotropy increases. In the a posteriori analysis, the DDM model consistently outperforms the DSM and DMM models across various turbulence statistics, including velocity spectra, probability density functions related to vorticity, SFS energy flux, velocity increments, strainrate tensors, and SFS stress. It is evident that as filter anisotropy intensifies, the results of DSM and DMM deteriorate, while the DDM consistently delivers satisfactory outcomes across all filter-anisotropy scenarios. These findings underscore the potential of the DDM framework as a valuable tool for advancing the development of sophisticated SFS models for LES in turbulence research.
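For readers outside the LES community, the quantities being modeled are the subfilter-scale (SFS) stresses that arise when a filter G is applied to the velocity field; in generic notation (independent of the paper's particular discretization):

```latex
\bar{u}_i = G * u_i, \qquad
\tau_{ij} = \overline{u_i u_j} - \bar{u}_i\,\bar{u}_j, \qquad
u_i^{*} \approx \sum_{k=0}^{N} (I - G)^{k} * \bar{u}_i
```

The truncated series on the right is the van Cittert expansion used in approximate deconvolution; a direct deconvolution model instead inverts the filter kernel itself, which is why the invertible filters listed above (Gaussian, Helmholtz, Butterworth, Chebyshev, Cauchy, Pao, etc.) are required, and why the filter-to-grid ratio controls how much of the SFS dynamics is recoverable.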

Keywords: deconvolution model, large eddy simulation, subfilter scale modeling, turbulence

Procedia PDF Downloads 76
260 A Subband BSS Structure with Reduced Complexity and Fast Convergence

Authors: Salah Al-Din I. Badran, Samad Ahmadi, Ismail Shahin

Abstract:

A blind source separation method is proposed; in this method, we use a non-uniform filter bank and a novel normalisation. This method provides reduced computational complexity and increased convergence speed compared to the full-band algorithm. Recently, adaptive sub-band schemes have been recommended to solve two problems: reducing the computational complexity and increasing the convergence speed of adaptive algorithms for correlated input signals. In this work, the reduction in computational complexity is achieved by using adaptive filters of orders lower than that of the full-band adaptive filter, operating at a sampling rate lower than the sampling rate of the input signal. The signals decomposed by the analysis filter bank are less correlated in each subband than the input signal at full bandwidth, which promotes better rates of convergence.

Keywords: blind source separation, computational complexity, subband, convergence speed, mixture

Procedia PDF Downloads 580
259 A Compact Via-less Ultra-Wideband Microstrip Filter by Utilizing Open-Circuit Quarter Wavelength Stubs

Authors: Muhammad Yasir Wadood, Fatemeh Babaeian

Abstract:

With the development of ultra-wideband (UWB) systems, there is high demand for UWB filters with low insertion loss, wide bandwidth, and a planar structure that is compatible with the other components of the UWB system. A microstrip interdigital filter is a great option for designing UWB filters. However, the presence of via holes in this structure creates difficulties in the fabrication of the filter. Especially in the higher frequency band, any misalignment of the drilled via hole with the microstrip stubs causes large errors in the measurement results compared to the desired results. Moreover, in high-frequency designs the line widths of the stubs are very narrow, so highly precise small via holes are required, which increases the cost of fabrication significantly and carries a risk of fabrication errors. To combat this issue, this paper proposes a via-less UWB microstrip filter designed as a modification of a conventional interdigital bandpass filter. The novel approaches in this filter design are 1) replacement of each via hole with a quarter-wavelength open-circuit stub to avoid manufacturing complexity, 2) use of a bend structure to reduce unwanted coupling effects, and 3) minimisation of the size. Using the proposed structure, a UWB filter operating in the frequency band of 3.9-6.6 GHz (1-dB bandwidth) was designed and fabricated. The promising simulation and measurement results are presented in this paper. The selected substrate for these designs was Rogers RO4003 with a thickness of 20 mils, a common substrate in industrial projects. The compact size of the proposed filter is highly beneficial for applications that require very miniature hardware.
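The reason an open-circuited quarter-wavelength stub can stand in for a grounding via follows from the standard transmission-line input impedance (a generic result, not specific to this layout):

```latex
Z_{\mathrm{in}} = -\,j\,Z_0 \cot(\beta \ell) \;\longrightarrow\; 0 \quad \text{when } \ell = \lambda/4
```

so at the design frequency the open stub presents a short circuit at its connection point, emulating the via to ground without any drilling; away from that frequency the equivalence degrades, which is part of what the bend structure and dimensional optimisation have to manage.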

Keywords: band-pass filters, inter-digital filter, microstrip, via-less

Procedia PDF Downloads 157
258 Impact of Soot on NH3-SCR, NH3 Oxidation and NH3 TPD over Cu/SSZ-13 Zeolite

Authors: Lidija Trandafilovic, Kirsten Leistner, Marie Stenfeldt, Louise Olsson

Abstract:

Ammonia Selective Catalytic Reduction (NH3-SCR) is one of the most efficient post-combustion abatement technologies for removing NOx from diesel engines. In order to remove soot, diesel particulate filters (DPF) are used. Recently, SCR-coated filters have been introduced, which capture soot and are simultaneously active for ammonia SCR. SCR-coated filters offer large advantages, such as decreased volume and better light-off characteristics, since both the SCR function and the filter function are close to the engine. The objective of this work was to examine the effect of soot, produced using an engine bench, on Cu/SSZ-13 catalysts. The impact of soot on Cu/SSZ-13 in standard SCR, NH3 oxidation, NH3 temperature-programmed desorption (TPD), and soot oxidation (with and without water) was examined using flow reactor measurements. In all experiments, prior to the soot loading, the fresh activity of Cu/SSZ-13 was recorded by stepwise increasing the temperature from 100°C to 600°C. Thereafter, the sample was loaded with soot and the experiment was repeated in the temperature range from 100°C to 700°C. The amount of CO and CO2 produced in each experiment is used to calculate the soot oxidized at each steady-state temperature. The soot oxidized during the heating to the next temperature step is included, e.g., the CO+CO2 produced while increasing the temperature to 600°C is added to the 600°C step. Two factors appear to be of the greatest importance for soot oxidation: ammonia and water. Water shifts the maximum of CO2 and CO production towards lower temperatures; thus, water increases the soot oxidation. Moreover, when adding ammonia to the system, it is clear that soot oxidation is lowered in the presence of ammonia, resulting in a larger integrated COx at 500°C for the O2+H2O case, while the opposite result was obtained at 600°C, where more soot was oxidised for the O2+H2O+NH3 case. To conclude, the presence of ammonia reduces soot oxidation, which is in line with the ammonia TPD results, where we found ammonia storage on the soot. Interestingly, under ammonia SCR conditions the activity for soot oxidation is regained at 500°C. At this high temperature the SCR zone is very short; thus the majority of the catalyst is not exposed to ammonia, and therefore the inhibition effect of ammonia is not observed.

Keywords: NH3-SCR, Cu/SSZ-13, soot, zeolite

Procedia PDF Downloads 236
257 Numerical Investigation into Capture Efficiency of Fibrous Filters

Authors: Jayotpaul Chaudhuri, Lutz Goedeke, Torsten Hallenga, Peter Ehrhard

Abstract:

Purification of gases from aerosols or airborne particles via filters is widely applied in industry and in our daily lives. This separation, especially in the micron and submicron size range, is a necessary step to protect the environment and human health. Fibrous filters are often employed due to their low cost and high efficiency. For designing any filter, the two most important performance parameters are capture efficiency and pressure drop. Since the capture efficiency is directly proportional to the pressure drop, which leads to higher operating costs, a detailed investigation of the separation mechanism is required to optimize the filter design, i.e., to achieve a high capture efficiency with a lower pressure drop. Therefore, a two-dimensional flow simulation around a single fiber, using Ansys CFX and Matlab, is used to gain insight into the separation process. Instead of simulating a solid fiber, the present Ansys CFX model uses a fictitious-domain approach for the fiber by implementing a momentum loss model. This approach has been chosen to avoid creating a new mesh for different fiber sizes, thereby saving time and effort in re-meshing. In a first step, only the flow of the continuous fluid around the fiber is simulated in Ansys CFX; the flow field data are then extracted and imported into Matlab, where the particle trajectory is calculated in a Matlab routine. This calculation is a Lagrangian, one-way coupled approach with all relevant forces acting on the particles. The key parameters for the simulation in both Ansys CFX and Matlab are the porosity ε, the diameter ratio of particle and fiber D, the fluid Reynolds number Re, the particle Reynolds number Rep, the Stokes number St, the Froude number Fr, and the density ratio of fluid and particle ρf/ρp. The simulation results were then compared to the single-fiber theory from the literature.
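Of the dimensionless groups listed, the Stokes number is the one that most directly governs inertial capture on a single fiber; in the usual single-fiber convention (stated here generically, the paper's exact definitions may differ in the choice of characteristic length):

```latex
St = \frac{\rho_p\, d_p^{2}\, U}{18\,\mu\, d_f}
```

where ρ_p and d_p are the particle density and diameter, U the face velocity, μ the fluid dynamic viscosity, and d_f the fiber diameter. Particles with St ≫ 1 deviate from the curving streamlines and impact the fiber, while particles with St ≪ 1 follow the flow and are captured mainly by interception or diffusion.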

Keywords: BBO-equation, capture efficiency, CFX, Matlab, fibrous filter, particle trajectory

Procedia PDF Downloads 208
256 The Excess Loop Delay Calibration in Bandpass Continuous-Time Delta Sigma Modulators Based on a Q-Enhanced LC Filter

Authors: Sorore Benabid

Abstract:

Q-enhanced LC filters are the most used architecture in bandpass (BP) continuous-time (CT) delta-sigma (ΣΔ) modulators due to their operation at high frequencies, their higher linearity compared to active filters, and the high quality factor obtained by the Q-enhancement technique. This technique consists of using a negative resistance that compensates for the ohmic losses in the on-chip inductor. However, it introduces a zero in the filter transfer function, which affects the modulator performance in terms of dynamic range (DR), stability, and in-band noise (signal-to-noise ratio, SNR). In this paper, we study the effect of this zero and demonstrate that a calibration of the excess loop delay (ELD) is required to ensure the best performance of the modulator. System-level simulations are carried out for a 2nd-order BP CT ΣΔ modulator at a center frequency of 300 MHz. Simulation results indicate that the optimal ELD should be reduced by 13% to achieve the maximum SNR and DR compared to the ideal LC-based ΣΔ modulator.

Keywords: continuous-time bandpass delta-sigma modulators, excess loop delay, on-chip inductor, Q-enhanced LC filter

Procedia PDF Downloads 329
255 A Novel Dual Band-Pass Filter Based on the Coupling of a Composite Right/Left-Handed CPW and CSRRs Using Ferrite Components

Authors: Mohammed Berka, Khaled Merit

Abstract:

Recent work on microwave filters shows that the constituent materials of such filters are very important in their design and realization. Several solutions have been proposed to improve filtering quality. In this paper, we propose a new dual band-pass filter based on the coupling of a composite right/left-handed (CRLH) coplanar waveguide with complementary split-ring resonators (CSRRs). The CRLH CPW is composed of two resonators, each of which has an interdigital capacitor and two short-circuited stubs parallel to the top ground plane. On the lower ground plane, we use defected ground structure (DGS) technology to engrave two CSRRs with different shapes and dimensions. Between the top ground plane and the substrate, we place a ferrite layer to control the electromagnetic coupling between the CRLH CPW and the CSRRs. The overall filter, which has coplanar access, exhibits dual band-pass behavior around the magnetic resonances of the CSRRs. Since there are no scientific or experimental results in the literature for this kind of complicated structure, it was necessary to perform simulations using Ansoft HFSS Designer.

Keywords: complementary split ring resonators, coplanar waveguide, ferrite, filter, stub

Procedia PDF Downloads 403
254 Improving the Frequency Response of a Circular Dual-Mode Resonator with a Reconfigurable Bandwidth

Authors: Muhammad Haitham Albahnassi, Adnan Malki, Shokri Almekdad

Abstract:

In this paper, a method for reconfiguring bandwidth in a circular dual-mode resonator is presented. The method concerns the optimized geometry of a structure that may be used to host the tuning elements, which are typically RF (radio frequency) switches. The tuning elements themselves, and their performance during tuning, are not the focus of this paper. The designed resonator is able to reconfigure its fractional bandwidth by adjusting the inter-coupling level between the degenerate modes, while at the same time improving its response by adjusting the external-coupling level and keeping the center frequency fixed. The inter-coupling level has been adjusted by changing the dimensions of the perturbation element, while the external-coupling level has been adjusted by changing one of the feeder dimensions. The design was arrived at via optimization. Simulation and measurement results of the designed and implemented filters agree and show good improvements in return-loss values and in the stability of the center frequency.

Keywords: dual-mode resonators, perturbation theory, reconfigurable filters, software defined radio, cognitive radio

Procedia PDF Downloads 168
253 Modelling of Portable CO2 (Carbon Dioxide) and CO (Carbon Monoxide) Filters with Chitosan Absorbent for Motor Vehicle Exhaust

Authors: Yuandanis Wahyu Salam, Irfi Panrepi, Nuraeni

Abstract:

The increase of greenhouse gases, in particular CO2 (carbon dioxide), in the atmosphere induces a rise in the Earth's average surface temperature. One of the largest contributors to greenhouse gases is motor vehicles. Smoke emitted from a vehicle's exhaust contains gases such as CO2 (carbon dioxide) and CO (carbon monoxide). Chemically, chitosan is a cellulose-like plant fiber that has the ability to bind like an absorbent foam. Chitosan is a natural absorber of toxins: when spread over the surface of water, it is able to absorb fats, oils, heavy metals, and other toxic substances. Judging from this ability of chitosan to absorb various toxic substances, it is expected that chitosan can also filter gas emissions from motor vehicles. This study designs a carbon dioxide filter for motor vehicle exhausts that uses chitosan as its absorbent. The filter aims to treat the exhaust gases so that CO2 and CO are reduced before being emitted. The research takes the form of a literature study combined with experimental work on manufacturing the device. Data were collected through documentary study of books, magazines, and theses, internet searches, and other relevant references. The study produces a portable filter whose main function is to remove CO2 and CO emissions generated by a vehicle's exhaust.

Keywords: filter, carbon, carbondioxide, exhaust, chitosan

Procedia PDF Downloads 352
252 Convolutional Neural Network Based on Random Kernels for Analyzing Visual Imagery

Authors: Ja-Keoung Koo, Kensuke Nakamura, Hyohun Kim, Dongwha Shin, Yeonseok Kim, Ji-Su Ahn, Byung-Woo Hong

Abstract:

Machine learning techniques based on convolutional neural networks (CNNs) have been actively developed and successfully applied to a variety of image analysis tasks, including reconstruction, noise reduction, resolution enhancement, segmentation, motion estimation, and object recognition. Classical visual information processing, ranging from low-level to high-level tasks, has been widely developed within the deep learning framework. Deriving visual interpretation from high-dimensional imagery data is generally considered a challenging problem. A CNN is a class of feed-forward artificial neural network that usually consists of deep layers whose connections are established by a series of non-linear operations. The CNN architecture is known to be shift-invariant due to its shared weights and translation-invariance characteristics. However, it is often computationally intractable to optimize the network, in particular with a large number of convolution layers, due to the large number of unknowns to be optimized with respect to a training set that generally has to be large enough to effectively generalize the model under consideration. It is also necessary to limit the size of the convolution kernels because of the computational expense, despite the recent development of effective parallel processing machinery, which leads to the use of uniformly small convolution kernels throughout the deep CNN architecture. However, it is often desirable to consider different scales when analyzing visual features at different layers in the network. Thus, we propose a CNN model in which convolution kernels of different sizes are applied at each layer based on random projection. We apply random filters with varying sizes and associate the filter responses with scalar weights that correspond to the standard deviation of the random filters. This allows us to use a large number of random filters at the cost of one scalar unknown per filter. The computational cost of the back-propagation procedure does not increase with larger filters, even though additional computational cost is required for the convolutions in the feed-forward procedure. The use of random kernels with varying sizes allows image features to be analyzed effectively at multiple scales, leading to better generalization. The robustness and effectiveness of the proposed CNN based on random kernels are demonstrated by numerical experiments in which well-known CNN architectures are quantitatively compared with our models, which simply replace the convolution kernels with random filters. The experimental results indicate that our model achieves better performance with fewer unknown weights. The proposed algorithm has high potential for application to a variety of visual tasks based on the CNN framework. Acknowledgement: This work was supported by the MISP (Ministry of Science and ICT), Korea, under the National Program for Excellence in SW (20170001000011001) supervised by IITP, and by NRF-2014R1A2A1A11051941 and NRF2017R1A2B4006023.
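An illustrative PyTorch sketch of the core idea, fixed random convolution kernels at several sizes with only one learnable scalar per filter, is given below. The layer sizes, kernel sizes, and the initialization of the scalar weights are assumptions for illustration and are not taken from the paper.

```python
import torch
import torch.nn as nn

class RandomKernelConv(nn.Module):
    """Multi-scale convolution with fixed random kernels and learnable per-filter scalars."""
    def __init__(self, in_ch, filters_per_size=8, kernel_sizes=(3, 5, 7)):
        super().__init__()
        self.convs = nn.ModuleList()
        for k in kernel_sizes:
            conv = nn.Conv2d(in_ch, filters_per_size, k, padding=k // 2, bias=False)
            conv.weight.requires_grad_(False)              # random kernels stay fixed
            self.convs.append(conv)
        n_filters = filters_per_size * len(kernel_sizes)
        self.scales = nn.Parameter(torch.ones(n_filters))  # one learnable scalar per filter

    def forward(self, x):
        out = torch.cat([c(x) for c in self.convs], dim=1)  # responses at multiple scales
        return out * self.scales.view(1, -1, 1, 1)          # weight each response map

x = torch.randn(2, 3, 32, 32)
print(RandomKernelConv(in_ch=3)(x).shape)                   # torch.Size([2, 24, 32, 32])
```

Only the scalars (and whatever layers follow) receive gradients during training, which is what keeps the number of unknowns small regardless of the kernel sizes used.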

Keywords: deep learning, convolutional neural network, random kernel, random projection, dimensionality reduction, object recognition

Procedia PDF Downloads 291
251 Strategy of Inventory Analysis with Economic Order Quantity and Quick Response: Case on Filter Inventory for Heavy Equipment in Indonesia

Authors: Lim Sanny, Felix Christian

Abstract:

The use of heavy equipment in Indonesia is continuously increasing, and cost reduction in the procurement of spare parts is an aim of the company. The spare parts considered in this research are filters. As a first step, the priority filters are selected using ABC analysis. Future demand for each filter is forecast using the QM for Windows software. The best inventory control method for each kind of filter is then determined by comparing the total cost of the Economic Order Quantity (EOQ) and Quick Response inventory methods. For the three kinds of filters studied, namely Cartridge, Engine Oil (p/n 600-211-123); Element, Transmission (p/n 424-16-11140); and Element, Hydraulic (p/n 07063-01054), the best forecasting method is linear regression. The best inventory control method for Cartridge, Engine Oil (p/n 600-211-123) and Element, Transmission (p/n 424-16-11140) is Quick Response inventory, while the best method for Element, Hydraulic (p/n 07063-01054) is Economic Order Quantity.
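The EOQ figure being compared is the classical Wilson lot-size formula (generic notation; the demand and cost inputs in the paper come from its own forecasts and company data):

```latex
Q^{*} = \sqrt{\frac{2\,D\,S}{H}}
```

where D is the annual demand, S the ordering cost per order, and H the holding cost per unit per year; the total annual cost compared against the quick-response policy is then D S / Q^{*} + H Q^{*}/2 plus the purchase cost.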

Keywords: strategy, inventory, ABC analysis, forecasting, economic order quantity, quick response inventory

Procedia PDF Downloads 365
250 Active Filtration of Phosphorus in Ca-Rich Hydrated Oil Shale Ash Filters: The Effect of Organic Loading and Form of Precipitated Phosphatic Material

Authors: Päärn Paiste, Margit Kõiv, Riho Mõtlep, Kalle Kirsimäe

Abstract:

For small-scale wastewater management, treatment wetlands (TWs) can be used as a low-cost alternative to conventional treatment facilities. However, the P removal capacity of TW systems is usually problematic. P removal in TWs depends mainly on the physico-chemical and hydrological properties of the filter material. The highest P removal efficiency has been shown through Ca-phosphate precipitation (i.e., active filtration) in Ca-rich alkaline filter materials, e.g., industrial by-products like hydrated oil shale ash (HOSA) and metallurgical slags. In this contribution we report preliminary results from a full-scale TW system using HOSA material for P removal from municipal wastewater at the Nõo site, Estonia. The main goals of this ongoing project are to evaluate: a) the long-term P removal efficiency of HOSA with real wastewater; b) the effect of a high organic loading rate; c) the effects of variable P loading on the P removal mechanism (adsorption/direct precipitation); and d) the form and composition of the phosphate precipitates. An onsite full-scale experiment with two concurrent filter systems for the treatment of municipal wastewater was established in September 2013. The pretreatment steps of each system include a septic tank (2 m2) and vertical down-flow LECA filters (3 m2 each), followed by horizontal subsurface HOSA filters (effective volume 8 m3 each). The overall organic and hydraulic loading rates of both systems are the same; however, the first system is operated under a stable hydraulic loading regime and the second under a variable loading regime that imitates the wastewater production of an average household. Piezometers for water and perforated sample containers for filter material sampling were incorporated inside the filter beds to allow continuous in-situ monitoring. During the 18 months of operation, the median removal efficiency (inflow to outflow) of both systems was over 99% for TP, 93% for COD, and 57% for TN. However, we observed significant differences between samples collected at different points inside the filter systems. In both systems, we observed the development of preferred flow paths and zones with high and low loadings. The filters show the formation and gradual advance of a "dead" zone along the flow path (a zone of saturated filter material characterized by ineffective removal rates), which develops more rapidly in the system operated under the variable loading regime. The formation of the "dead" zone is accompanied by the growth of organic substances on the filter material particles, which evidently inhibit P removal. Phase analysis of the used filter materials by X-ray diffraction reveals the formation of minor amounts of amorphous Ca-phosphate precipitates. This finding is supported by ATR-FTIR and SEM-EDS measurements, which also reveal Ca-phosphate and authigenic carbonate precipitation. Our first experimental results demonstrate that organic pollution and the loading regime significantly affect the performance of hydrated ash filters. The material analyses also show that P is incorporated into a carbonate-substituted hydroxyapatite phase.

Keywords: active filtration, apatite, hydrated oil shale ash, organic pollution, phosphorus

Procedia PDF Downloads 275
249 Acoustic Echo Cancellation Using Different Adaptive Algorithms

Authors: Hamid Sharif, Nazish Saleem Abbas, Muhammad Haris Jamil

Abstract:

An adaptive filter is a filter that self-adjusts its transfer function according to an optimization algorithm driven by an error signal. Because of the complexity of the optimization algorithms, most adaptive filters are digital filters. Adaptive filtering constitutes one of the core technologies in digital signal processing and finds numerous application areas in science as well as in industry, including adaptive noise cancellation and echo cancellation. Acoustic echo is a common occurrence in today's telecommunication systems; the signal interference it causes is distracting to both users and reduces the quality of the communication. In this paper, we review different adaptive filtering techniques for reducing this unwanted echo and examine the behavior of algorithms such as Least Mean Square (LMS), Normalized Least Mean Square (NLMS), Variable Step-Size Least Mean Square (VSLMS), Variable Step-Size Normalized Least Mean Square (VSNLMS), the New Varying Step Size LMS algorithm (NVSSLMS), and Recursive Least Square (RLS), with the aim of increasing communication quality.
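A minimal NLMS echo-canceller sketch is shown below for orientation; the filter length, step size, and the synthetic echo path are assumptions for illustration, not the paper's experimental setup.

```python
import numpy as np

def nlms(x, d, taps=64, mu=0.5, eps=1e-6):
    """Normalized LMS: adapt w so that w*x tracks the echo contained in d."""
    w = np.zeros(taps)                           # adaptive estimate of the echo path
    e = np.zeros(len(x))                         # error = echo-cancelled output
    for n in range(taps - 1, len(x)):
        u = x[n - taps + 1:n + 1][::-1]          # most recent far-end samples, newest first
        e[n] = d[n] - w @ u                      # microphone signal minus estimated echo
        w += (mu / (eps + u @ u)) * e[n] * u     # normalized step update
    return e, w

rng = np.random.default_rng(1)
x = rng.standard_normal(5000)                                # far-end (loudspeaker) signal
h = rng.standard_normal(64) * np.exp(-0.1 * np.arange(64))   # synthetic echo path
d = np.convolve(x, h)[:len(x)]                               # microphone picks up echo only
e, w = nlms(x, d)
print(f"residual echo power: {np.mean(e[1000:] ** 2):.2e}")  # far below np.mean(d ** 2)
```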

Keywords: adaptive acoustic, echo cancellation, LMS algorithm, adaptive filter, normalized least mean square (NLMS), variable step-size least mean square (VSLMS)

Procedia PDF Downloads 80
248 Blood Volume Pulse Extraction for Non-Contact Photoplethysmography Measurement from Facial Images

Authors: Ki Moo Lim, Iman R. Tayibnapis

Abstract:

According to WHO estimates, 38 of the 56 million global deaths (68%) in 2012 were due to noncommunicable diseases (NCDs). One of the solutions for averting NCDs is early detection of disease. To this end, we developed the 'U-Healthcare Mirror', which is able to measure vital signs such as heart rate (HR) and respiration rate without any physical contact or conscious effort by the user. To measure HR in the mirror, we utilized a digital camera that records the red, green, and blue (RGB) discoloration of the user's facial image sequences. We extracted the blood volume pulse (BVP) from the RGB discoloration, because the discoloration of the facial skin follows the BVP. We used blind source separation (BSS) to extract the BVP from the RGB discoloration, and adaptive filters to remove noise; both the BSS and the adaptive filters were implemented with the singular value decomposition (SVD) method. We carried out HR measurement experiments using our method and a previous method based on independent component analysis (ICA), and compared both with HR measurements from a commercial oximeter. The experiments were conducted at distances between 30 and 110 cm and light intensities between 5 and 2000 lux, with 7 measurements for each condition. The estimated HR showed a mean error of 2.25 bpm and a Pearson correlation coefficient of 0.73, an improvement in accuracy over the previous work. The optimal distance between the mirror and the user for HR measurement was 50 cm with medium light intensity, around 550 lux.
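An illustrative sketch of the signal-processing chain described, SVD-based source separation of the mean RGB traces followed by FFT peak picking in the heart-rate band, is given below. The synthetic traces stand in for real camera data; they are not the authors' recordings.

```python
import numpy as np

fs = 30.0                                     # camera frame rate [Hz]
t = np.arange(0, 30, 1 / fs)                  # 30 s of frames
pulse = 0.02 * np.sin(2 * np.pi * 1.2 * t)    # 1.2 Hz pulse, i.e. 72 bpm ground truth
rng = np.random.default_rng(0)
rgb = np.vstack([1.0 * pulse, 2.0 * pulse, 0.5 * pulse])   # per-frame mean R, G, B of the face ROI
rgb += 0.01 * rng.standard_normal(rgb.shape)               # sensor noise

x = (rgb - rgb.mean(axis=1, keepdims=True)) / rgb.std(axis=1, keepdims=True)
U, s, Vt = np.linalg.svd(x, full_matrices=False)
bvp = Vt[0]                                   # dominant temporal component as the BVP estimate

spectrum = np.abs(np.fft.rfft(bvp * np.hanning(len(bvp))))
freqs = np.fft.rfftfreq(len(bvp), 1 / fs)
band = (freqs > 0.7) & (freqs < 4.0)          # plausible heart-rate band: 42-240 bpm
print(f"estimated heart rate: {60 * freqs[band][np.argmax(spectrum[band])]:.1f} bpm")
```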

Keywords: blood volume pulse, heart rate, photoplethysmography, independent component analysis

Procedia PDF Downloads 329
247 Harmonic Mitigation and Total Harmonic Distortion Reduction in Grid-Connected PV Systems: A Case Study Using Real-Time Data and Filtering Techniques

Authors: Atena Tazikeh Lemeski, Ismail Ozdamar

Abstract:

This study presents a detailed analysis of harmonic distortion in a grid-connected photovoltaic (PV) system using real-time data captured from a solar power plant. Harmonics introduced by inverters in PV systems can degrade power quality and lead to increased Total Harmonic Distortion (THD), which poses challenges such as transformer overheating, increased power losses, and potential grid instability. This research addresses these issues by applying Fast Fourier Transform (FFT) to identify significant harmonic components and employing notch filters to target specific frequencies, particularly the 3rd harmonic (150 Hz), which was identified as the largest contributor to THD. Initial analysis of the unfiltered voltage signal revealed a THD of 21.15%, with prominent harmonic peaks at 150 Hz, 250 Hz and 350 Hz, corresponding to the 3rd, 5th, and 7th harmonics, respectively. After implementing the notch filters, the THD was reduced to 5.72%, demonstrating the effectiveness of this approach in mitigating harmonic distortion without affecting the fundamental frequency. This paper provides practical insights into the application of real-time filtering techniques in PV systems and their role in improving overall grid stability and power quality. The results indicate that targeted harmonic mitigation is crucial for the sustainable integration of renewable energy sources into modern electrical grids.
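A minimal numpy/scipy sketch of the two steps described, an FFT-based THD estimate and a notch at the 3rd harmonic, is given below; the 50 Hz waveform and harmonic amplitudes are synthetic placeholders rather than the plant measurements.

```python
import numpy as np
from scipy.signal import iirnotch, filtfilt

fs, f0 = 5000, 50
t = np.arange(0, 1, 1 / fs)
v = (np.sin(2 * np.pi * f0 * t)
     + 0.20 * np.sin(2 * np.pi * 3 * f0 * t)      # 3rd harmonic, 150 Hz
     + 0.06 * np.sin(2 * np.pi * 5 * f0 * t)      # 5th harmonic, 250 Hz
     + 0.03 * np.sin(2 * np.pi * 7 * f0 * t))     # 7th harmonic, 350 Hz

def thd(signal, fs, f0, n_harm=20):
    spec = np.abs(np.fft.rfft(signal)) / len(signal)
    freqs = np.fft.rfftfreq(len(signal), 1 / fs)
    amp = lambda f: spec[np.argmin(np.abs(freqs - f))]      # amplitude at the nearest bin
    harmonics = np.array([amp(k * f0) for k in range(2, n_harm + 1)])
    return np.sqrt(np.sum(harmonics ** 2)) / amp(f0)

b, a = iirnotch(w0=3 * f0, Q=30, fs=fs)            # notch targeting the dominant 3rd harmonic
v_filtered = filtfilt(b, a, v)
print(f"THD before: {100 * thd(v, fs, f0):.2f} %")
print(f"THD after 150 Hz notch: {100 * thd(v_filtered, fs, f0):.2f} %")
```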

Keywords: grid-connected photovoltaic systems, fast Fourier transform, harmonic filtering, inverter-induced harmonics

Procedia PDF Downloads 41
246 Na Doped ZnO UV Filters with Reduced Photocatalytic Activity for Sunscreen Application

Authors: Rafid Mueen, Konstantin Konstantinov, Micheal Lerch, Zhenxiang Cheng

Abstract:

In the past two decades, concern for skin protection from ultraviolet (UV) radiation has attracted considerable attention due to the increased intensity of UV rays that can reach the Earth's surface as a result of the breakdown of the ozone layer. Recently, UVA has also attracted attention since, in comparison to UVB, it can penetrate deeply into the skin, which can result in significant health concerns. Sunscreen agents are one of the significant tools for protecting the skin from UV irradiation, and they are either organic or inorganic. Developing inorganic UV blockers is essential, since they provide efficient UV protection over a wider spectrum than organic filters. Furthermore, inorganic UV blockers offer good comfort and high safety when applied to human skin. Inorganic materials can absorb, reflect, or scatter ultraviolet radiation, depending on their particle size, unlike organic blockers, which absorb the UV irradiation. Nowadays, most inorganic UV-blocking filters are based on TiO2 and ZnO. ZnO can provide protection in the UVA range. Indeed, ZnO is attractive for sunscreen formulation owing to many advantages, such as its modest refractive index (2.0), its absorption of a small fraction of solar radiation in the UV range at wavelengths of 385 nm and below, the highly probable recombination of photogenerated carriers (electrons and holes), a large direct band gap, a high exciton binding energy, its non-hazardous nature, and its high chemical and physical stability, which together make it transparent in the visible region while providing UV-protective activity. A significant issue for the use of ZnO in sunscreens is that it can generate reactive oxygen species (ROS) in the presence of UV light because of its photocatalytic activity. It is therefore essential to obtain a non-photocatalytic material through modification with other metals. Several efforts have been made to deactivate the photocatalytic activity of ZnO using inorganic surface modifiers, and doping ZnO with different metals is another way to modify its photocatalytic activity. Recently, successful doping of ZnO with metals such as Ce, La, Co, Mn, Al, Li, Na, K, and Cr by various procedures, such as a simple and facile one-pot water bath, co-precipitation, hydrothermal, solvothermal, combustion, and sol-gel methods, has been reported. These doped materials outperform undoped ZnO in terms of photocatalytic activity under visible light, showing that metal doping is an effective technique for modifying the photocatalytic activity of ZnO. In the current work, we successfully reduce the photocatalytic activity of ZnO through Na doping, with the Na-doped ZnO fabricated via sol-gel and hydrothermal methods.

Keywords: photocatalytic, ROS, UVA, ZnO

Procedia PDF Downloads 144