Search results for: Barycentric Lagrange interpolation
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 181

151 A Novel Deinterlacing Algorithm Based on Adaptive Polynomial Interpolation

Authors: Seung-Won Jung, Hye-Soo Kim, Le Thanh Ha, Seung-Jin Baek, Sung-Jea Ko

Abstract:

In this paper, a novel deinterlacing algorithm is proposed. The proposed algorithm approximates the luminance distribution with a polynomial function. Instead of using one polynomial function for all pixels, different polynomial functions are used for the uniform, texture, and directional edge regions. The function coefficients for each region are computed by matrix multiplications. Experimental results demonstrate that the proposed method performs better than conventional algorithms.
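
As a generic illustration of obtaining polynomial coefficients by a matrix multiplication and evaluating the missing line (the line offsets, luminance values and cubic order below are hypothetical, not the authors' region-adaptive design):

```python
# A minimal sketch: fit a cubic through four known field lines via a precomputed
# inverse Vandermonde matrix, then evaluate at the missing line. Illustrative only.
import numpy as np

rows = np.array([-3.0, -1.0, 1.0, 3.0])            # vertical offsets of known lines
V = np.vander(rows, 4, increasing=True)            # Vandermonde matrix for a cubic
P = np.linalg.inv(V)                               # precomputed once per region type

column = np.array([120.0, 128.0, 135.0, 139.0])    # luminance samples from the known lines
coeffs = P @ column                                # coefficients by matrix multiplication
missing = np.polyval(coeffs[::-1], 0.0)            # evaluate the cubic at the missing line
print(missing)
```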

Keywords: Deinterlacing, polynomial interpolation.

150 Visualization of Sediment Thickness Variation for Sea Bed Logging using Spline Interpolation

Authors: Hanita Daud, Noorhana Yahya, Vijanth Sagayan, Muizuddin Talib

Abstract:

This paper discusses the use of spline interpolation and the mean square error (MSE) as tools for processing data acquired from a simulator developed to replicate the sea bed logging environment. Sea bed logging (SBL) is a new technique that uses the marine controlled source electromagnetic (CSEM) sounding method and has proven very successful in detecting and characterizing hydrocarbon reservoirs in deep-water areas by using resistivity contrasts. It uses very low frequencies of 0.1 Hz to 10 Hz to obtain greater wavelengths. In this work, the in-house-built simulator was provided with predefined parameters, and the transmitted frequency was varied for sediment thicknesses of 1000 m to 4000 m in environments with and without hydrocarbon. From a series of simulations, synthetic data were generated. These data were interpolated using spline interpolation of degree three, and the MSE between the original and interpolated data was calculated. Comparisons were made by studying the trends and the relationship between frequency and sediment thickness based on the calculated MSE. It was found that the MSE showed an increasing trend in the setup with hydrocarbon present compared with the one without. The MSE also showed a decreasing trend as the sediment thickness and the transmitted frequency increased.
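
A minimal sketch of the interpolation-plus-MSE step described above, using hypothetical stand-in data rather than the simulator output:

```python
# Degree-three spline interpolation of a coarse subset, with MSE against the reference.
import numpy as np
from scipy.interpolate import CubicSpline

# Hypothetical "reference" curve standing in for simulator output (illustrative only).
thickness = np.linspace(1000.0, 4000.0, 61)            # sediment thickness [m]
reference = np.exp(-thickness / 2500.0)                # normalised field strength

coarse = thickness[::10]                               # coarse samples used for interpolation
spline = CubicSpline(coarse, reference[::10])          # spline of degree three
reconstructed = spline(thickness)

mse = np.mean((reconstructed - reference) ** 2)        # MSE between original and interpolated data
print(f"MSE: {mse:.3e}")
```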

Keywords: Spline Interpolation, Mean Square Error, Sea Bed Logging, Controlled Source Electromagnetic

149 Threshold Based Region Incrementing Secret Sharing Scheme for Color Images

Authors: P. Mohamed Fathimal, P. Arockia Jansi Rani

Abstract:

In this era of online communication, which transacts data in 0s and 1s, confidentiality is a prized commodity. Ensuring the safe transmission of encrypted data and their uncorrupted recovery is a matter of prime concern. Among the several techniques for the secure sharing of images, this paper proposes a k-out-of-n region incrementing image sharing scheme for color images. The highlight of this scheme is the use of simple Boolean and arithmetic operations for generating shares and of the Lagrange interpolation polynomial for authenticating shares. Additionally, this scheme addresses problems faced by existing algorithms such as color reversal and pixel expansion. The proposed scheme regenerates the original secret image, whereas existing systems regenerate only the half-toned secret image.
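
As a generic illustration of the Lagrange-interpolation building block used for share recovery (a plain (k, n) threshold reconstruction over a prime field, not the authors' full region-incrementing scheme):

```python
# Lagrange-interpolation-based secret reconstruction over a prime field (generic sketch).
P = 257  # small prime field, enough for one 8-bit pixel value (illustrative)

def reconstruct_secret(shares, p=P):
    """Recover f(0) from k shares (x_i, y_i) of a degree k-1 polynomial over GF(p)."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = (num * (-xj)) % p          # product of (0 - x_j)
                den = (den * (xi - xj)) % p      # product of (x_i - x_j)
        secret = (secret + yi * num * pow(den, -1, p)) % p
    return secret

# Shares of the secret 123 under f(x) = 123 + 45x + 67x^2 (mod 257), threshold k = 3.
shares = [(1, (123 + 45 * 1 + 67 * 1) % P),
          (2, (123 + 45 * 2 + 67 * 4) % P),
          (3, (123 + 45 * 3 + 67 * 9) % P)]
print(reconstruct_secret(shares))  # -> 123
```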

Keywords: Threshold Secret Sharing Scheme, Access Control, Steganography, Authentication, Secret Image Sharing, XOR, Pixel Expansion.

148 Image Magnification Using Adaptive Interpolation by Pixel Level Data-Dependent Geometrical Shapes

Authors: Muhammad Sajjad, Naveed Khattak, Noman Jafri

Abstract:

The world has entered the 21st century. Computer graphics technology and digital cameras are prevalent, and high-resolution displays and printers are available; therefore, high-resolution images are needed in order to produce high-quality display images and high-quality prints. However, since high-resolution images are not usually provided, there is a need to magnify the original images. One common difficulty of previous magnification techniques is preserving details, i.e. edges, while at the same time smoothing the data so as not to introduce spurious artefacts; a definitive solution to this is still an open issue. In this paper, an image magnification method using adaptive interpolation by pixel-level data-dependent geometrical shapes is proposed that takes into account information about edges (sharp luminance variations) and the smoothness of the image. It calculates a threshold, classifies the interpolation region in the form of geometrical shapes, and then assigns suitable values to the undefined pixels inside each interpolation region while preserving the sharp luminance variations and smoothness at the same time. The results of the proposed technique have been compared qualitatively and quantitatively with five other techniques. The qualitative results show that the proposed method clearly outperforms nearest neighbour (NN), bilinear (BL) and bicubic (BC) interpolation, while the quantitative results are competitive and consistent with NN, BL, BC and the others.
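
For reference, the conventional baselines named above can be reproduced with standard interpolation orders; the sketch below only magnifies a hypothetical image with nearest-neighbour, bilinear and bicubic interpolation and does not implement the proposed shape-adaptive method:

```python
# Baseline magnification only (NN, bilinear, bicubic) on a random stand-in image.
import numpy as np
from scipy import ndimage

low_res = np.random.default_rng(0).random((64, 64))   # stand-in for a low-resolution image

nearest  = ndimage.zoom(low_res, 2, order=0)   # nearest neighbour
bilinear = ndimage.zoom(low_res, 2, order=1)   # bilinear
bicubic  = ndimage.zoom(low_res, 2, order=3)   # bicubic

print(nearest.shape, bilinear.shape, bicubic.shape)   # (128, 128) each
```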

Keywords: Adaptive, digital image processing, image magnification, interpolation, geometrical shapes, qualitative & quantitative analysis.

147 Single Frame Supercompression of Still Images, Video, High Definition TV and Digital Cinema

Authors: Mario Mastriani

Abstract:

Super-resolution is nowadays used to produce a high-resolution image from several low-resolution noisy frames. In this work, we consider the problem of high-quality interpolation of a single noise-free image. Such images may come from different sources, i.e., they may be frames of videos, individual pictures, etc. In the encoder we apply downsampling via bidimensional interpolation of each frame, and in the decoder we apply upsampling, by which we restore the original size of the image. If the compression ratio is very high, we use a convolutive mask that restores the edges, eliminating the blur. Finally, both the encoder and the complete decoder are implemented on General-Purpose computation on Graphics Processing Units (GPGPU) cards. In fact, the mentioned mask is coded inside the texture memory of a GPGPU.
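
A minimal sketch of the downsample/upsample-and-sharpen idea on one frame, with an illustrative 3 x 3 sharpening mask; the paper's GPGPU implementation and exact restoring mask are not reproduced:

```python
import numpy as np
from scipy import ndimage

frame = np.random.default_rng(1).random((256, 256))   # stand-in for one noise-free frame

encoded = ndimage.zoom(frame, 0.25, order=1)          # encoder: bidimensional downsampling
decoded = ndimage.zoom(encoded, 4.0, order=1)         # decoder: upsampling to original size

# Simple edge-restoring (sharpening) convolution mask applied after upsampling.
mask = np.array([[0, -1, 0],
                 [-1, 5, -1],
                 [0, -1, 0]], dtype=float)
restored = ndimage.convolve(decoded, mask, mode="reflect")
print(frame.shape, encoded.shape, restored.shape)
```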

Keywords: General-Purpose computation on Graphics Processing Units, Image Compression, Interpolation, Super-resolution.

146 A Survey on Lossless Compression of Bayer Color Filter Array Images

Authors: Alina Trifan, António J. R. Neves

Abstract:

Although most digital cameras acquire images in a raw format, based on a Color Filter Array that arranges RGB color filters on a square grid of photosensors, most image compression techniques do not use the raw data; instead, they use the RGB result of an interpolation algorithm applied to the raw data. This approach is inefficient: by performing a lossless compression of the raw data, followed by pixel interpolation, digital cameras could be more power efficient and provide images with increased resolution, given that the interpolation step could be shifted to an external processing unit. In this paper, we conduct a survey on the use of lossless compression algorithms with raw Bayer images. Moreover, in order to reduce the effect of the transitions between colors that increase the entropy of the raw Bayer image, we split the image into three new images corresponding to each channel (red, green and blue) and study the same compression algorithms applied to each one individually. This simple pre-processing stage allows an improvement of more than 15% in predictive-based methods.
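
As an illustration of the channel-splitting pre-processing described above (an RGGB Bayer layout and random raw data are assumed):

```python
# Split a raw Bayer mosaic into per-channel images before lossless compression.
import numpy as np

raw = np.random.default_rng(2).integers(0, 1024, size=(480, 640), dtype=np.uint16)  # fake raw frame

red   = raw[0::2, 0::2]                                      # R sites
green = np.concatenate((raw[0::2, 1::2], raw[1::2, 0::2]))   # both G sites, stacked vertically
blue  = raw[1::2, 1::2]                                      # B sites

# Each plane avoids colour transitions and can be fed to a lossless coder individually.
print(red.shape, green.shape, blue.shape)
```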

Keywords: Bayer images, CFA, lossless compression, image coding standards.

145 Hydrodynamic Simulation of Co-Current and Counter Current of Column Distillation Using Euler Lagrange Approach

Authors: H. Troudi, M. Ghiss, Z. Tourki, M. Ellejmi

Abstract:

Packed columns for liquefied petroleum gas (LPG) separate a liquid mixture of propane and butane into pure gas components by distillation. The flow of gas and liquid inside the columns is operated in two ways: co-current and counter-current operation. Heat, mass and species transfer between the phases are the most important factors that influence the choice between these two operations. In this paper, both processes are discussed using CFD simulation with the ANSYS Fluent software. Only a 3D half-section of the packed column with one packed bed was considered. The packed bed was characterized in our case as a porous medium. The simulations were carried out under transient conditions. A multi-component gas and liquid mixture was used in the two processes. We utilized the Euler-Lagrange approach, in which the gas was treated as a continuum phase and the liquid as a group of dispersed particles. The heat and mass transfer process was modeled using a multi-component droplet evaporation approach. The results show that the counter-current process performs better than the co-current one, although the limitations of our approach are noted. This comparison gives accurate results for computation times higher than 2 s, at different gas velocities and at a packed bed porosity of 0.9.

Keywords: Co-current, counter current, Euler Lagrange model, heat transfer, mass transfer.

144 Generating Arabic Fonts Using Rational Cubic Ball Functions

Authors: Fakharuddin Ibrahim, Jamaludin Md. Ali, Ahmad Ramli

Abstract:

In this paper, we discuss data interpolation using the rational cubic Ball curve. To generate a curve with better, satisfactory smoothness, the curve segments must be connected with a certain amount of continuity. The continuity we consider is G1 continuity, and the conditions considered are known as the G1 Hermite conditions. A simple application of the proposed method is to generate an Arabic font satisfying the required continuity.
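
A minimal sketch of evaluating one rational cubic Ball curve segment from four control points and weights (a generic construction with illustrative values; the G1 font-fitting procedure itself is not shown):

```python
import numpy as np

def ball_basis(t):
    """Cubic Ball basis functions at parameter t in [0, 1]."""
    return np.array([(1 - t) ** 2,
                     2 * t * (1 - t) ** 2,
                     2 * t ** 2 * (1 - t),
                     t ** 2])

def rational_ball_point(control, weights, t):
    b = ball_basis(t) * weights
    return (b @ control) / b.sum()

control = np.array([[0.0, 0.0], [1.0, 2.0], [3.0, 2.0], [4.0, 0.0]])  # illustrative points
weights = np.array([1.0, 1.5, 1.5, 1.0])                              # shape parameters

curve = np.array([rational_ball_point(control, weights, t) for t in np.linspace(0, 1, 50)])
print(curve[0], curve[-1])   # endpoints coincide with the first and last control points
```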

Keywords: Continuity, data interpolation, Hermite condition, rational Ball curve.

143 Numerical Solution of Hammerstein Integral Equations by Using Quasi-Interpolation

Authors: M. Zarebnia, S. Khani

Abstract:

In this paper, a numerical method based on quasi-interpolation for solving nonlinear Fredholm integral equations of the Hammerstein type is first presented. Then, we approximate the solution of Hammerstein integral equations by Nystrom's method. Finally, we compare the two methods on some numerical examples.
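
A minimal Nystrom-type sketch for a Hammerstein equation u(x) = f(x) + ∫₀¹ K(x,t) g(t, u(t)) dt with illustrative problem data, solved by fixed-point iteration on Gauss-Legendre nodes:

```python
import numpy as np

n = 20
t, w = np.polynomial.legendre.leggauss(n)          # Gauss-Legendre nodes/weights on [-1, 1]
t = 0.5 * (t + 1.0); w = 0.5 * w                   # map to [0, 1]

K = lambda x, s: 0.1 * x * s                       # kernel (illustrative, contractive)
g = lambda s, u: u ** 2                            # Hammerstein nonlinearity (illustrative)
f = lambda x: np.ones_like(x)                      # forcing term (illustrative)

u = f(t)
for _ in range(100):                               # Picard/fixed-point iteration
    u_new = f(t) + (K(t[:, None], t[None, :]) * g(t, u)) @ w
    if np.max(np.abs(u_new - u)) < 1e-12:
        break
    u = u_new
print(u[:3])                                       # approximate solution at the first nodes
```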

Keywords: Hammerstein integral equations, quasi-interpolation, Nystrom’s method.

142 Interpolation Issue in PVNPG-14M Application for Technical Control of Artillery Fire

Authors: Martin Blaha, Ladislav Potužák, Daniel Holesz

Abstract:

This paper focuses on the PVNPG-14M application supporting the technical control of artillery units, especially on its interpolation issue. Artillery units of the Army of the Czech Republic, reflecting the current global security environment, can be used outside the Czech Republic. The paper presents the principles, evolution and calculations involved in the process of complete preparation of fire. It presents expert experience with the current artillery communication and information system and suggests a prospective future system. The paper also presents problems in the process of complete preparation of fire, especially problems with the permanent information (firing tables) and the calculated values, as well as problems of the current artillery communication and information system, and suggests requirements for the future system.
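
As a simple illustration of table interpolation of the kind at issue (the firing-table values below are hypothetical):

```python
# Look up elevation for an arbitrary range by linear interpolation between tabulated rows.
import numpy as np

table_range_m   = np.array([4000, 5000, 6000, 7000, 8000])       # tabulated ranges (illustrative)
table_elev_mils = np.array([312.0, 401.0, 498.0, 606.0, 729.0])  # tabulated elevations (illustrative)

def elevation_for_range(r):
    return np.interp(r, table_range_m, table_elev_mils)

print(elevation_for_range(6450.0))   # interpolated between the 6000 m and 7000 m rows
```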

Keywords: Fire for effect, application, fire control, interpolation method, software development.

141 Implementation of Meshless FEM for Engineering Applications

Authors: A. Seidl, Th. Schmidt

Abstract:

Meshless finite element methods, namely the element-free Galerkin and the point-interpolation method, were implemented and tested concerning their applicability to typical engineering problems such as electrical fields and structural mechanics. A class structure was developed which allows a consistent implementation of these methods together with classical FEM in a common framework. Strengths and weaknesses of the methods under investigation are discussed. As a result of this work, joint usage of meshless methods together with classical finite elements is recommended.

Keywords: Finite Elements, meshless, element-free Galerkin, point-interpolation.

140 Using Lagrange Equations to Study the Relative Motion of a Mechanism

Authors: R. A. Petre, S. E. Nichifor, A. Craifaleanu, I. Stroe

Abstract:

The relative motion of a robotic arm formed by homogeneous bars of different lengths and masses, hinged to each other, is investigated. The first bar of the mechanism is articulated on a platform, considered initially fixed on the surface of the Earth, while in the second case the platform is considered to be in rotation with respect to the Earth. For both analyzed cases the motion equations are determined using the Lagrangian formalism, applied in its traditional form, valid with respect to an inertial reference system, conventionally considered as fixed. However, in the second case, a generalized form of the formalism, valid with respect to a non-inertial reference frame, is also applied. The numerical calculations were performed using a MATLAB program.
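
A minimal sketch of the classical Lagrangian formalism applied to a single hinged homogeneous bar (a physical pendulum) with SymPy; the two-bar and rotating-platform cases of the paper are not reproduced:

```python
import sympy as sp

t = sp.symbols('t')
m, l, g = sp.symbols('m l g', positive=True)     # mass, length, gravity
theta = sp.Function('theta')(t)                  # generalized coordinate

# Kinetic and potential energy of a uniform bar rotating about one end.
T = sp.Rational(1, 6) * m * l**2 * sp.diff(theta, t)**2    # (1/2) I w^2 with I = m l^2 / 3
V = -m * g * (l / 2) * sp.cos(theta)                       # centre of mass at l/2
Lag = T - V

# Lagrange's equation: d/dt(dLag/d(theta_dot)) - dLag/d(theta) = 0
eq = sp.diff(sp.diff(Lag, sp.diff(theta, t)), t) - sp.diff(Lag, theta)
print(sp.Eq(sp.simplify(eq), 0))   # (m*l**2/3)*theta'' + (m*g*l/2)*sin(theta) = 0
```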

Keywords: Lagrange equations, relative motion, inertial or non-inertial reference frame.

139 Low Resolution Face Recognition Using Mixture of Experts

Authors: Fatemeh Behjati Ardakani, Fatemeh Khademian, Abbas Nowzari Dalini, Reza Ebrahimpour

Abstract:

Human activity is a major concern in a wide variety of applications, such as video surveillance, human-computer interfaces and face image database management. Detecting and recognizing faces is a crucial step in these applications. Furthermore, major advancements and initiatives in security applications in the past years have propelled face recognition technology into the spotlight. The performance of existing face recognition systems declines significantly if the resolution of the face image falls below a certain level. This is especially critical in surveillance imagery where, for many reasons, often only low-resolution video of faces is available. If these low-resolution images are passed to a face recognition system, the performance is usually unacceptable. Hence, resolution plays a key role in face recognition systems. In this paper we introduce a new low-resolution face recognition system based on mixture-of-experts neural networks. To produce the low-resolution input images, we down-sampled the 48 × 48 ORL images to 12 × 12 using the nearest neighbor interpolation method; applying the bicubic interpolation method afterwards yields enhanced images, which are given to the Principal Component Analysis feature extractor. Comparison with some of the most closely related methods indicates that the proposed model yields an excellent recognition rate in low-resolution face recognition: 100% for the training set and 96.5% for the test set.
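
A minimal sketch of the pre-processing chain described above (nearest-neighbour down-sampling, cubic up-sampling, PCA features) on random stand-in data; the ORL images and the mixture-of-experts classifier are not reproduced:

```python
import numpy as np
from scipy import ndimage
from sklearn.decomposition import PCA

faces = np.random.default_rng(0).random((40, 48, 48))   # stand-in for 48x48 face images

low_res  = np.stack([ndimage.zoom(f, 12 / 48, order=0) for f in faces])    # 12x12, nearest neighbour
enhanced = np.stack([ndimage.zoom(f, 48 / 12, order=3) for f in low_res])  # cubic enhancement

features = PCA(n_components=20).fit_transform(enhanced.reshape(len(faces), -1))
print(features.shape)    # (40, 20) feature vectors for the downstream classifier
```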

Keywords: Low resolution face recognition, Multilayered neural network, Mixture of experts neural network, Principal component analysis, Bicubic interpolation, Nearest neighbor interpolation.

138 Using Divergent Nozzle with Aerodynamic Lens to Focus Nanoparticles

Authors: Hasan Jumaah Mrayeh, Fue-Sang Lien

Abstract:

ANSYS Fluent is used to run the computational fluid dynamics (CFD) simulations for the efficient lens and nozzle design explained in this paper. We have designed and characterized an aerodynamic lens and a divergent nozzle for a focusing flow that transmits sub-25 nm particles through the aerodynamic lens. The design of the lens and nozzle has been improved using CFD computations of the particle trajectories. We set up a case for calculating nanoparticles (25 nm) flowing through the aerodynamic lens and divergent nozzle. The nanoparticles are transported by air, which is pumped into the aerodynamic lens through the nozzle at 1 atmosphere of pressure. We have also developed a computational methodology that can determine the exact focusing characteristics of aerodynamic lens systems. Particle trajectories were traced using the Lagrangian approach. The simulation shows the ability of the aerodynamic lens to focus 25 nm particles after using a divergent nozzle.
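
A minimal sketch of Lagrangian particle tracking under Stokes drag in a prescribed gas velocity field (all parameters and the flow field are illustrative, not the ANSYS Fluent model):

```python
import numpy as np

tau = 1e-6                                    # particle relaxation time [s] (illustrative)
def gas_velocity(x):
    # converging flow toward the axis, a stand-in for the lens flow field
    return np.array([10.0, -5000.0 * x[1]])

x = np.array([0.0, 1e-3])                     # initial position [m]
v = np.array([10.0, 0.0])                     # initial velocity [m/s]
dt = 1e-7
for _ in range(2000):                         # explicit Euler integration of dv/dt = (u - v)/tau
    v = v + dt * (gas_velocity(x) - v) / tau
    x = x + dt * v
print(x, v)                                   # radial coordinate has decayed toward the axis
```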

Keywords: Aerodynamic lens (AL), divergent nozzle (DN), ANSYS Fluent, Lagrange approach.

137 Enhancement of Stereo Video Pairs Using SDNs To Aid In 3D Reconstruction

Authors: Lewis E. Hibell, Honghai Liu, David J. Brown

Abstract:

This paper presents the results of enhancing images from a left and right stereo pair in order to increase the resolution of a 3D representation of a scene generated from that same pair. A new neural network structure known as a Self Delaying Dynamic Network (SDN) has been used to perform the enhancement. The advantage of SDNs over existing techniques such as bicubic interpolation is their ability to cope with motion and noise effects. SDNs are used to generate two high-resolution images, one based on frames taken from the left view of the subject and one based on the frames from the right. This new high-resolution stereo pair is then processed by a disparity map generator. The disparity map generated is compared to two other disparity maps generated from the same scene. The first is a map generated from an original high-resolution stereo pair and the second is a map generated using a stereo pair which has been enhanced using bicubic interpolation. The maps generated using the SDN-enhanced pairs match the target maps more closely. The addition of extra noise to the input images is less problematic for the SDN system, which is still able to outperform bicubic interpolation.

Keywords: Genetic Evolution, Image Enhancement, Neural Networks, Stereo Vision

136 A Meshfree Solution of Two-Dimensional Potential Flow Problems

Authors: I. V. Singh, A. Singh

Abstract:

In this paper, the mesh-free element-free Galerkin (EFG) method is extended to solve two-dimensional potential flow problems. Two ideal fluid flow problems (i.e. flow over a rigid cylinder and flow over a sphere) have been formulated using a variational approach. Penalty and Lagrange multiplier techniques have been utilized for the enforcement of the essential boundary conditions. Four-point Gauss quadrature has been used for the integration over the two-dimensional domain (Ω), and a nodal integration scheme has been used to enforce the essential boundary conditions on the edges (Γ). The results obtained by the EFG method are compared with those obtained by the finite element method. The effects of the scaling and penalty parameters on the EFG results are also discussed in detail.
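
As an illustration of the 2 x 2 (four-point) Gauss quadrature used for the domain integrals, applied to a hypothetical integrand on the reference square:

```python
import numpy as np

pts, wts = np.polynomial.legendre.leggauss(2)        # 1D two-point rule: +/- 1/sqrt(3), weights 1
f = lambda x, y: x**2 * y**2 + 1.0                   # illustrative integrand on [-1,1] x [-1,1]

integral = sum(wi * wj * f(xi, yj)
               for xi, wi in zip(pts, wts)
               for yj, wj in zip(pts, wts))
print(integral)     # exact value is 4/9 + 4 = 4.444..., reproduced by the 2x2 rule
```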

Keywords: Meshless, EFG method, potential flow, Lagrange multiplier method, penalty method, penalty parameter and scaling parameter

135 Evaluating Sinusoidal Functions by a Low Complexity Cubic Spline Interpolator with Error Optimization

Authors: Abhijit Mitra, Harpreet Singh Dhillon

Abstract:

We present a novel scheme to evaluate sinusoidal functions with low complexity and high precision using cubic spline interpolation. To this end, two different approaches are proposed to find the interpolating polynomial of sin(x) within the range [-π, π]. The first deals with only a single data point, while the other uses two, to keep the realization cost as low as possible. An approximation error optimization technique for cubic spline interpolation is introduced next and is shown to increase the interpolator accuracy without increasing the complexity of the associated hardware. The architectures for the proposed approaches are also developed, and they exhibit flexibility of implementation with low power requirements.
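
A minimal sketch of evaluating sin(x) on [-π, π] with a cubic spline and measuring the approximation error (an illustrative node count, not the paper's optimized hardware scheme):

```python
import numpy as np
from scipy.interpolate import CubicSpline

nodes = np.linspace(-np.pi, np.pi, 9)          # a few interpolation nodes
spline = CubicSpline(nodes, np.sin(nodes))

x = np.linspace(-np.pi, np.pi, 10001)
max_err = np.max(np.abs(spline(x) - np.sin(x)))
print(f"maximum absolute error with 9 nodes: {max_err:.2e}")
```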

Keywords: Arithmetic, spline interpolator, hardware design, error analysis, optimization methods.

134 Numerical Grid Generation of Oceanic Model for the Andaman Sea

Authors: Nitima Aschariyaphotha, Pratan Sakkaplangkul, Anirut Luadsong

Abstract:

The Andaman Sea can be studied using an oceanic model; therefore, a grid covering the study area should be generated. This research aims to generate a grid covering the Andaman Sea, situated between longitudes 90◦E to 101◦E and latitudes 1◦N to 18◦N. The horizontal grid is an orthogonal curvilinear grid with 87 × 217 grid points. The methods used in this study are cubic spline and bilinear interpolation. The boundary grid points are generated by spline interpolation, while the interior grid points are specified by the bilinear interpolation method. The vertical grid is a sigma coordinate with 15 layers in the water column.
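
One common way to fill interior grid points from boundary points by bilinear blending is transfinite interpolation; the sketch below is a generic version with illustrative boundary values (only the 87 × 217 size is taken from the abstract), not necessarily the exact scheme of the paper:

```python
import numpy as np

ni, nj = 87, 217
xi  = np.linspace(0.0, 1.0, ni)[:, None]        # column parameter
eta = np.linspace(0.0, 1.0, nj)[None, :]        # row parameter

def tfi(bottom, top, left, right):
    """Bilinearly blended interpolation from four boundary curves sharing corner values."""
    return ((1 - eta) * bottom[:, None] + eta * top[:, None]
            + (1 - xi) * left[None, :] + xi * right[None, :]
            - (1 - xi) * (1 - eta) * bottom[0] - xi * (1 - eta) * bottom[-1]
            - (1 - xi) * eta * top[0] - xi * eta * top[-1])

# Illustrative boundary longitudes (degrees E) along the four grid edges.
lon_bottom = np.linspace(90.0, 101.0, ni)
lon_top    = np.linspace(90.0, 101.0, ni)
lon_left   = np.full(nj, 90.0)
lon_right  = np.full(nj, 101.0)
lon = tfi(lon_bottom, lon_top, lon_left, lon_right)
print(lon.shape)     # (87, 217) grid of longitudes
```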

Keywords: Sigma Coordinate, Curvilinear Coordinate, Andaman Sea.

133 The Approximate Solution of Linear Fuzzy Fredholm Integral Equations of the Second Kind by Using Iterative Interpolation

Authors: N. Parandin, M. A. Fariborzi Araghi

Abstract:

In this paper, we propose a numerical method for the approximate solution of fuzzy Fredholm functional integral equations of the second kind using an iterative interpolation. For this purpose, we convert the linear fuzzy Fredholm integral equations to a crisp linear system of integral equations. The proposed method is illustrated by some fuzzy integral equations in numerical examples.

Keywords: Fuzzy function integral equations, Iterative method, Linear systems, Parametric form of fuzzy number.

132 Efficient High Fidelity Signal Reconstruction Based on Level Crossing Sampling

Authors: Negar Riazifar, Nigel G. Stocks

Abstract:

This paper proposes strategies in level crossing (LC) sampling and reconstruction that provide high-fidelity signal reconstruction for speech signals; these strategies circumvent the problem of an exponentially increasing number of samples as the bit depth is increased and hence are highly efficient. Specifically, the results indicate that the distribution of the intervals between samples is one of the key factors in the quality of signal reconstruction; including samples with short intervals does not improve the accuracy of the signal reconstruction, whilst samples with large intervals lead to numerical instability. The proposed sampling method, termed reduced conventional level crossing (RCLC) sampling, exploits redundancy between samples to improve the efficiency of the sampling without compromising performance. A reconstruction technique is also proposed that enhances the numerical stability through linear interpolation of samples separated by large intervals. Interpolation is demonstrated to improve the accuracy of the signal reconstruction in addition to the numerical stability. We further demonstrate that the RCLC and interpolation methods can give useful levels of signal recovery even if the average sampling rate is less than the Nyquist rate.
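
A minimal sketch of level-crossing sampling followed by linear-interpolation reconstruction on a toy signal (a generic illustration, not the authors' RCLC scheme):

```python
import numpy as np

fs = 8000.0
t = np.arange(0, 0.05, 1 / fs)
x = 0.6 * np.sin(2 * np.pi * 200 * t) + 0.3 * np.sin(2 * np.pi * 350 * t)   # toy "speech" signal

levels = np.linspace(-1, 1, 17)                    # uniform quantisation levels (4-bit grid)
# Record a sample whenever the signal crosses any level between consecutive time steps.
crossed = np.array([np.any((x[i] - levels) * (x[i + 1] - levels) < 0) for i in range(len(x) - 1)])
idx = np.flatnonzero(crossed) + 1
samples_t, samples_x = t[idx], x[idx]

# Reconstruction by linear interpolation between the irregularly spaced samples.
x_rec = np.interp(t, samples_t, samples_x)
print(f"{len(idx)} LC samples, RMS error {np.sqrt(np.mean((x_rec - x) ** 2)):.4f}")
```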

Keywords: Level crossing sampling, numerical stability, speech processing, trigonometric polynomial.

131 Piecewise Interpolation Filter for Effective Processing of Large Signal Sets

Authors: Anatoli Torokhti, Stanley Miklavcic

Abstract:

Suppose KY and KX are large sets of observed and reference signals, respectively, each containing N signals. Is it possible to construct a filter F : KY → KX that requires a priori information on only a few signals, p ≪ N, from KX but performs better than the known filters based on a priori information on every reference signal from KX? It is shown that the positive answer is achievable under quite unrestrictive assumptions. The device behind the proposed method is based on a special extension of the piecewise linear interpolation technique to the case of random signal sets. The proposed technique provides a single filter to process any signal from the arbitrarily large signal set. The filter is determined in terms of pseudo-inverse matrices so that it always exists.

Keywords: Wiener filter, filtering of stochastic signals.

130 Simulation of Organic Matter Variability on a Sugarbeet Field Using the Computer Based Geostatistical Methods

Authors: M. Rüstü Karaman, Tekin Susam, Fatih Er, Servet Yaprak, Osman Karkacıer

Abstract:

Computer-based geostatistical methods can offer effective data analysis possibilities for agricultural areas by using vectorial data and their objective information. These methods help to detect spatial changes at different locations of large agricultural lands, which leads to effective fertilization for optimal yield with reduced environmental pollution. In this study, topsoil (0-20 cm) and subsoil (20-40 cm) samples were taken from a sugar beet field on a 20 x 20 m grid. Plant samples were also collected from the same plots. Some physical and chemical analyses of these samples were made by routine methods. According to the derived coefficients of variation, the topsoil organic matter (OM) distribution was more variable than the subsoil OM distribution; the highest C.V. value of 17.79% was found for topsoil OM. The data were analyzed comparatively with kriging methods, which are widely used in geostatistics. Several interpolation methods (ordinary, simple and universal kriging) and semivariogram models (spherical, exponential and Gaussian) were tested in order to choose the suitable ones. The average standard deviations of the values estimated by the simple kriging interpolation method were less than the average standard deviations of the measured values (topsoil OM ± 0.48, N ± 0.37, subsoil OM ± 0.18). The most suitable combination was simple kriging with an exponential semivariogram model for topsoil, whereas for subsoil it was simple kriging with a spherical semivariogram model. The results also showed that these computer-based geostatistical methods should be tested and calibrated for different experimental conditions and semivariogram models.
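
A minimal sketch of simple kriging with an exponential semivariogram/covariance model at one prediction point (sample locations and parameters are illustrative, not the study's fitted model):

```python
import numpy as np

sill, range_par, nugget = 0.30, 60.0, 0.02               # variogram parameters (illustrative)
cov = lambda h: (sill - nugget) * np.exp(-h / range_par)  # covariance implied by the model

# Known OM values at sampled grid locations (x, y in metres), plus the assumed mean.
pts = np.array([[0, 0], [20, 0], [0, 20], [20, 20], [40, 20]], dtype=float)
om  = np.array([2.1, 2.4, 1.9, 2.6, 2.8])
mean_om = om.mean()

x0 = np.array([10.0, 10.0])                               # prediction location
d  = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=-1)
C  = cov(d) + nugget * np.eye(len(pts))                   # nugget added on the diagonal
c0 = cov(np.linalg.norm(pts - x0, axis=1))

weights = np.linalg.solve(C, c0)                          # simple kriging weights
estimate = mean_om + weights @ (om - mean_om)
print(f"kriged OM at {x0}: {estimate:.2f}")
```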

Keywords: Geostatistic, kriging, organic matter, sugarbeet.

129 Mathematical Programming on Multivariate Calibration Estimation in Stratified Sampling

Authors: Dinesh Rao, M.G.M. Khan, Sabiha Khan

Abstract:

Calibration estimation is a method of adjusting the original design weights to improve the survey estimates by using auxiliary information such as the known population total (or mean) of the auxiliary variables. A calibration estimator uses calibrated weights that are determined to minimize a given distance measure to the original design weights while satisfying a set of constraints related to the auxiliary information. In this paper, we propose a new multivariate calibration estimator for the population mean in the stratified sampling design, which incorporates information available for more than one auxiliary variable. The problem of determining the optimum calibrated weights is formulated as a Mathematical Programming Problem (MPP) that is solved using the Lagrange multiplier technique.
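
For the single-stratum, chi-square-distance case, the Lagrange multiplier technique gives the familiar closed-form calibrated weights; the sketch below illustrates that step with hypothetical data and is not the paper's full stratified mathematical-programming formulation:

```python
import numpy as np

rng = np.random.default_rng(3)
n = 8
d = np.full(n, 12.5)                       # original design weights (N = 100, n = 8; illustrative)
X = rng.uniform(1.0, 5.0, size=(n, 2))     # two auxiliary variables observed in the sample
t_x = np.array([310.0, 295.0])             # known population totals of the auxiliaries (illustrative)

# Minimise sum (w_i - d_i)^2 / d_i  subject to  X' w = t_x  (Lagrange multipliers).
lam = np.linalg.solve(X.T @ (d[:, None] * X), t_x - X.T @ d)
w = d * (1.0 + X @ lam)

print(np.allclose(X.T @ w, t_x))           # calibration constraints are met
```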

Keywords: Calibration estimation, Stratified sampling, Multivariate auxiliary information, Mathematical programming problem, Lagrange multiplier technique.

128 A Computationally Efficient Design for Prototype Filters of an M-Channel Cosine Modulated Filter Bank

Authors: Neela. R. Rayavarapu, Neelam Rup Prakash

Abstract:

The paper discusses a computationally efficient method for the design of the prototype filters required for the implementation of an M-band cosine modulated filter bank. The prototype filter is formulated as an optimum interpolated FIR filter. The optimum interpolation factor requiring the minimum number of multipliers is used. The model filter as well as the image suppressor will be designed using the Kaiser window. The method seeks to optimize a single parameter, namely the cutoff frequency, to minimize the distortion in the overlapping passband.
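
A minimal sketch of an interpolated FIR prototype: a Kaiser-window model filter designed at stretched specifications, expanded by zero insertion, and cascaded with a Kaiser-window image suppressor (illustrative specifications only, not the paper's optimized design):

```python
import numpy as np
from scipy import signal

L = 4                 # interpolation factor (assumed)
fc = 1.0 / 32.0       # desired prototype cutoff (normalised so that Nyquist = 1)

# Model filter designed at L*fc, then expanded by inserting L-1 zeros between taps.
model = signal.firwin(33, L * fc, window=("kaiser", 8.0))
expanded = np.zeros(L * (len(model) - 1) + 1)
expanded[::L] = model

# Image suppressor: cutoff placed between the prototype passband and the first image near 2/L.
suppressor = signal.firwin(25, 1.0 / L, window=("kaiser", 8.0))
prototype = np.convolve(expanded, suppressor)
print(len(prototype), "taps in the cascaded prototype")
```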

Keywords: Cosine modulated filter bank, interpolated FIR filter, optimum interpolation factor, prototype filter.

127 Symbolic Analysis of Large Circuits Using Discrete Wavelet Transform

Authors: Ali Al-Ataby, Fawzi Al-Naima

Abstract:

Symbolic Circuit Analysis (SCA) is a technique used to generate the symbolic expression of a network. It has become a well-established technique in circuit analysis and design. The symbolic expression of a network offers an excellent way to perform frequency response analysis, sensitivity computation, stability measurement, performance optimization, and fault diagnosis. Many approaches have been proposed in the area of SCA, offering different features and capabilities. Numerical interpolation methods are very common in this context, especially those using the Fast Fourier Transform (FFT). The aim of this paper is to present a method for SCA that uses the Wavelet Transform (WT) as a mathematical tool to generate the symbolic expression for large circuits while minimizing the analysis time by reducing the number of computations.

Keywords: Numerical Interpolation, Sparse Matrices, Symbolic Analysis, Wavelet Transform.

126 Implicit Two Step Continuous Hybrid Block Methods with Four Off-Steps Points for Solving Stiff Ordinary Differential Equation

Authors: O. A. Akinfenwa, N.M. Yao, S. N. Jator

Abstract:

In this paper, a self-starting two-step continuous block hybrid formula (CBHF) with four off-step points is developed using collocation and interpolation procedures. The CBHF is then used to produce multiple numerical integrators which are of uniform order and are assembled into a single block matrix equation. These equations are applied simultaneously to provide the approximate solution of stiff ordinary differential equations. The order of accuracy and stability of the block method is discussed, and its accuracy is established numerically.

Keywords: Collocation and Interpolation, Continuous Hybrid Block Formulae, Off-Step Points, Stability, Stiff ODEs.

125 Identifying Blind Spots in a Stereo View for Early Decisions in SI for Fusion based DMVC

Authors: H. Ali, K. Hameed, N. Khan

Abstract:

In DMVC, more than one source is available for the construction of side information. Newer techniques make use of both sources simultaneously by constructing a bitmask that determines the source of every block or pixel of the side information. A lot of computation is done to determine each bit in the bitmask. In this paper, we have tried to define areas that can only be well predicted by temporal interpolation and not by multiview interpolation or synthesis. We predict that areas not covered by both cameras cannot be appropriately predicted by multiview synthesis, and if we can identify such areas in the first place, we do not need to go through the script of computations for all the pixels that lie in those areas. Moreover, this paper also defines a technique based on the KLT to mark the above-mentioned areas before any other processing is done on the side view.

Keywords: Side Information, Distributed Multiview Video Coding, Fusion, Early Decision.

124 Improved Power Spectrum Estimation for RR-Interval Time Series

Authors: B. S. Saini, Dilbag Singh, Moin Uddin, Vinod Kumar

Abstract:

The RR interval series is non-stationary and unevenly spaced in time. Estimating its power spectral density (PSD) using traditional techniques like the FFT requires resampling at uniform intervals, and researchers have used different interpolation techniques as resampling methods. All these resampling methods introduce a low-pass filtering effect into the power spectrum. The Lomb transform is a means of obtaining PSD estimates directly from an irregularly sampled RR interval series, thus avoiding resampling. In this work, the superiority of the Lomb transform method over the FFT-based approach, applied after linear and cubic-spline interpolation as resampling methods, is established in terms of the reproduction of exact frequency locations as well as the relative magnitudes of each spectral component.
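
A minimal sketch contrasting the two PSD routes on a synthetic unevenly sampled RR-like series: cubic-spline resampling followed by an FFT-based periodogram versus a direct Lomb-Scargle estimate:

```python
import numpy as np
from scipy.interpolate import CubicSpline
from scipy.signal import lombscargle, periodogram

rng = np.random.default_rng(4)
t = np.cumsum(0.8 + 0.05 * rng.standard_normal(300))        # uneven "beat" times [s]
rr = 0.8 + 0.03 * np.sin(2 * np.pi * 0.1 * t)               # RR series with a 0.1 Hz component

# Route 1: resample at 4 Hz with a cubic spline, then FFT-based periodogram.
tu = np.arange(t[0], t[-1], 0.25)
f_fft, p_fft = periodogram(CubicSpline(t, rr)(tu), fs=4.0, detrend="linear")

# Route 2: Lomb-Scargle periodogram directly on the uneven samples.
f = np.linspace(0.01, 0.5, 500)                              # Hz
p_lomb = lombscargle(t, rr - rr.mean(), 2 * np.pi * f)

print(f_fft[np.argmax(p_fft)], f[np.argmax(p_lomb)])         # both should peak near 0.1 Hz
```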

Keywords: HRV, Lomb Transform, Resampling, RR-intervals.

123 Method of Finding Aerodynamic Characteristic Equations of Missile for Trajectory Simulation

Authors: Attapon Charoenpon, Ekkarach Pankeaw

Abstract:

This paper presents a new way to find the aerodynamic characteristic equation of a missile so that numerical trajectory predictions are more accurate. The goal is to obtain a polynomial equation based on two missile characteristic parameters, the angle of attack (α) and the flight speed (v). First, the missile under study is modeled and used in a computational flow model to compute the aerodynamic force and moment. The performance range of the missile under study is assumed to be -10 < α < 10 and 0 < v < 200. After the results of all cases are obtained, the data are fitted by polynomial interpolation to create an equation for each case; all equations are then combined to form the aerodynamic characteristic equation, which is used for trajectory simulation.
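
A minimal sketch of fitting a polynomial aerodynamic characteristic C(α, v) to tabulated results by least squares, using synthetic stand-in coefficients rather than real CFD data:

```python
import numpy as np

alpha = np.linspace(-10.0, 10.0, 11)                  # angle of attack samples
v = np.linspace(10.0, 200.0, 10)                      # flight speed samples
A, V = np.meshgrid(alpha, v)
C = 0.02 * A + 1e-4 * A * V + 1e-6 * V**2             # stand-in for a computed aero coefficient

# Design matrix for a quadratic polynomial in (alpha, v).
a, vv, c = A.ravel(), V.ravel(), C.ravel()
X = np.column_stack([np.ones_like(a), a, vv, a * vv, a**2, vv**2])
coef, *_ = np.linalg.lstsq(X, c, rcond=None)

print(np.round(coef, 6))   # recovers the generating coefficients (0, 0.02, 0, 1e-4, 0, 1e-6)
```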

Keywords: Aerodynamic, Characteristic Equation, Angle of Attack, Polynomial interpolation, Trajectories

122 Image Enhancement of Medical Images using Gabor Filter Bank on Hexagonal Sampled Grids

Authors: Veni. S, K. A. Narayanankutty

Abstract:

For about two decades, scientists have been developing techniques for enhancing the quality of medical images using the Fourier transform, the discrete wavelet transform (DWT), PDE models, etc. In this work, a Gabor wavelet on a hexagonally sampled grid of the images is proposed. This method has optimal approximation-theoretic performance for a good-quality image. The computational cost is considerably low when compared to similar processing in the rectangular domain. As X-ray images contain light-scattered pixels, instead of a unique sigma, a sigma of 0.5 to 3 is found to satisfy most of the image interpolation requirements in terms of a high peak signal-to-noise ratio (PSNR), a lower mean squared error (MSE) and better image quality when a windowing technique is adopted.

Keywords: Hexagonal lattices, Gabor filter, Interpolation, image processing.
