Search results for: Redundant Wavelet Transform (RWT).
582 A Novel Approach for Coin Identification using Eigenvalues of Covariance Matrix, Hough Transform and Raster Scan Algorithms
Authors: J. Prakash, K. Rajesh
Abstract:
In this paper we present a new method for coin identification. The proposed method adopts a hybrid scheme using eigenvalues of the covariance matrix, the Circular Hough Transform (CHT) and Bresenham's circle algorithm. The statistical and geometrical properties of the small and large eigenvalues of the covariance matrix of a set of edge pixels over a connected region of support are exploited for circular object detection. A sparse matrix technique is used to perform the CHT. Since sparse matrices store only their few non-zero elements, they save both matrix storage space and computational time. A neighborhood suppression scheme is used to find the valid Hough peaks. The accurate position of the circumference pixels is identified using a raster scan algorithm that exploits geometrical symmetry. After the circular objects are found, the proposed method uses textons, the fundamental micro-structures of generic natural images, to describe the texture on the coin surface, which is unique to each coin. The method has been tested on several real-world images, including coin and non-coin images. Its performance is also evaluated in terms of noise-withstanding capability.
Keywords: Circular Hough Transform, Coin detection, Covariance matrix, Eigenvalues, Raster scan algorithm, Texton.
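The sparse-accumulator idea mentioned above can be illustrated with a short, hypothetical sketch (not the authors' implementation). It assumes edge pixels and a known radius, and uses a scipy.sparse DOK matrix so that only accumulator cells that actually receive votes are stored.

```python
import numpy as np
from scipy.sparse import dok_matrix

def sparse_circular_hough(edge_points, radius, height, width, n_angles=90):
    """Vote for circle centres of a known radius in a sparse accumulator."""
    acc = dok_matrix((height, width), dtype=np.int32)
    thetas = np.linspace(0, 2 * np.pi, n_angles, endpoint=False)
    for y, x in edge_points:
        cy = np.round(y - radius * np.sin(thetas)).astype(int)
        cx = np.round(x - radius * np.cos(thetas)).astype(int)
        for a, b in zip(cy, cx):
            if 0 <= a < height and 0 <= b < width:
                acc[a, b] += 1          # only visited cells consume memory
    coo = acc.tocoo()
    if coo.nnz == 0:
        return None, acc
    peak = coo.data.argmax()            # strongest cell = most likely centre
    return (int(coo.row[peak]), int(coo.col[peak])), acc
```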
581 Scatterer Density in Edge and Coherence Enhancing Nonlinear Anisotropic Diffusion for Medical Ultrasound Speckle Reduction
Authors: Ahmed Badawi, J. Michael Johnson, Mohamed Mahfouz
Abstract:
This paper proposes new enhancements to nonlinear anisotropic diffusion methods that greatly reduce speckle while preserving image features in medical ultrasound images. By incorporating a local physical characteristic of the image, in this case scatterer density, in addition to the gradient into existing tensor-based image diffusion methods, we were able to greatly improve the performance of the existing filters, namely edge enhancing (EE) and coherence enhancing (CE) diffusion. The new methods were tested on various ultrasound images, including phantom and clinical images, to determine the amount of speckle reduction and of edge and coherence enhancement. Scatterer density weighted nonlinear anisotropic diffusion (SDWNAD) for ultrasound images consistently outperformed its traditional tensor-based counterparts, which use the gradient alone to weight the diffusivity function. SDWNAD is shown to greatly reduce speckle noise while preserving image features such as edges, orientation coherence, and scatterer density. Its superior performance over nonlinear coherent diffusion (NCD), speckle reducing anisotropic diffusion (SRAD), the adaptive weighted median filter (AWMF), wavelet shrinkage (WS), and wavelet shrinkage with contrast enhancement (WSCE) makes it an ideal preprocessing step for automatic segmentation in ultrasound imaging.
Keywords: Nonlinear anisotropic diffusion, ultrasound imaging, speckle reduction, scatterer density estimation, edge based enhancement, coherence enhancement.
580 Relevant LMA Features for Human Motion Recognition
Authors: Insaf Ajili, Malik Mallem, Jean-Yves Didier
Abstract:
Motion recognition from videos is a complex task due to the high variability of motions. This paper describes the challenges of human motion recognition, especially the motion representation step and the choice of relevant features. Our descriptor vector is inspired by the Laban Movement Analysis (LMA) method. We select discriminative features using the Random Forest algorithm in order to remove redundant features and make the learning algorithms operate faster and more effectively. We validate our method on the MSRC-12 and UTKinect datasets.
Keywords: Human motion recognition, discriminative LMA features, random forest, feature reduction.
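As a rough illustration of pruning redundant features with a Random Forest, the scikit-learn sketch below keeps only the features whose importance exceeds the mean before classification. It is a generic example, not the authors' pipeline; the placeholders X and y stand for any LMA-style descriptor matrix and its motion labels.

```python
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import SelectFromModel
from sklearn.svm import SVC
from sklearn.pipeline import make_pipeline

# X: (n_samples, n_LMA_features) descriptor matrix, y: motion labels (placeholders)
forest = RandomForestClassifier(n_estimators=200, random_state=0)
selector = SelectFromModel(forest, threshold="mean")   # drop low-importance features
model = make_pipeline(selector, SVC(kernel="rbf"))
# model.fit(X_train, y_train); model.score(X_test, y_test)
```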
579 A Proposed Hybrid Color Image Compression Based on Fractal Coding with Quadtree and Discrete Cosine Transform
Authors: Shimal Das, Dibyendu Ghoshal
Abstract:
Fractal-based digital image compression is a specific technique in the field of color image coding. The method is best suited to images with irregular shapes, such as snow blobs, clouds, flames, and tree leaves, and relies on the fact that parts of an image often resemble other parts of the same image. The technique has drawn much attention in recent years because of the very high compression ratio that can be achieved. Hybrid schemes incorporating fractal compression and speed-up techniques have achieved higher compression ratios than pure fractal compression. Fractal image compression is a lossy method in which the self-similarity of an image is exploited; it provides a high compression ratio, less encoding time, and a fast decoding process. In this paper, fractal compression with quadtree partitioning and the DCT is proposed to compress color images. The proposed hybrid scheme has four phases. First, the image is segmented and the Discrete Cosine Transform is applied to each block of the segmented image. Second, the block coefficients are scanned in a zigzag manner so that the zero coefficients are grouped together. Third, the resulting image is partitioned into fractals by a quadtree approach. Fourth, the image is compressed using run-length encoding.
Keywords: Fractal coding, Discrete Cosine Transform, Iterated Function System (IFS), Affine Transformation, Run length encoding.
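To make the first two phases concrete, here is a small, hypothetical sketch of an 8x8 block DCT followed by a zigzag scan of the coefficients; the quadtree partitioning and run-length encoding phases are omitted, and the code is illustrative rather than the authors' implementation.

```python
import numpy as np
from scipy.fft import dctn

def zigzag_indices(n):
    """Zigzag visiting order for an n x n block (JPEG-style)."""
    return sorted(((i, j) for i in range(n) for j in range(n)),
                  key=lambda p: (p[0] + p[1], p[0] if (p[0] + p[1]) % 2 else -p[0]))

def block_dct_zigzag(image, block=8):
    """Phases 1-2: per-block 2-D DCT, then zigzag scan of each block's coefficients."""
    h = image.shape[0] // block * block
    w = image.shape[1] // block * block
    order = zigzag_indices(block)
    scans = []
    for r in range(0, h, block):
        for c in range(0, w, block):
            coeffs = dctn(image[r:r + block, c:c + block].astype(float), norm="ortho")
            scans.append(np.array([coeffs[i, j] for i, j in order]))
    return np.array(scans)
```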
578 Analyzing Transformation of 1D-Functions for Frequency Domain based Video Classification
Authors: Kahraman Ayyildiz, Stefan Conrad
Abstract:
In this paper we present a frequency-domain classification method for video scenes. Videos from certain topical areas often contain activities with repeating movements; sports videos, home improvement videos, and videos showing mechanical motion are some examples. Assessing the main and side frequencies of each repeating movement reveals the motion type. We obtain the frequency domain by transforming spatio-temporal motion trajectories. We then explain how to compute frequency features for video clips and how to use them for classification. The focus of the experimental phase is on the transforms utilized in our system: by comparing various transforms, the experiments show which transform is optimal for a motion-frequency-based approach.
Keywords: Action recognition, frequency, transform, motion recognition, repeating movement, video classification.
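A minimal sketch of the frequency-feature idea follows, under the assumption that a motion trajectory has already been extracted as a 1-D signal sampled at the video frame rate; the names and parameters are illustrative, not taken from the paper.

```python
import numpy as np

def dominant_frequencies(trajectory, fps, n_peaks=3):
    """Return the strongest frequencies (Hz) of a 1-D motion trajectory and their magnitudes."""
    traj = np.asarray(trajectory, dtype=float)
    traj = traj - traj.mean()                        # remove the DC component
    spectrum = np.abs(np.fft.rfft(traj))
    freqs = np.fft.rfftfreq(len(traj), d=1.0 / fps)
    top = np.argsort(spectrum)[::-1][:n_peaks]       # main and side frequencies
    return freqs[top], spectrum[top]

# e.g. a 2 Hz repeating movement sampled at 25 fps
t = np.arange(250) / 25.0
freqs, mags = dominant_frequencies(np.sin(2 * np.pi * 2.0 * t), fps=25)
```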
577 An Improved ICI Self-Cancellation Scheme for Multi-Carrier Communication Systems
Authors: Arvind Kumar, Rajoo Pandey
Abstract:
Orthogonal frequency division multiplexing (OFDM) is a suitable modulation scheme for broadband wireless mobile communication systems. The frequency offset between the transmitter and receiver local oscillators is the main drawback of OFDM, as it causes intercarrier interference (ICI) among the subcarriers and degrades the bit error rate (BER) performance of the system. In this paper an improved ICI self-cancellation scheme is proposed to improve system performance. The proposed scheme is based on the discrete Fourier transform-inverse discrete Fourier transform (DFT-IDFT). Simulation results show a satisfactory improvement in the BER performance of the proposed scheme.
Keywords: OFDM, intercarrier interference, interference coefficients, DFT based self-ICI cancellation.
576 Dimension Free Rigid Point Set Registration in Linear Time
Authors: Jianqin Qu
Abstract:
This paper proposes a rigid point set matching algorithm for arbitrary dimensions based on the idea of symmetric covariant functions. A group of functions of the points in the set is formulated using rigid invariants. Each of these functions yields a pair of correspondences from the given point set, and the computed correspondences are then used to recover the unknown rigid transform parameters. Each computed point can be geometrically interpreted as a weighted mean center of the point set. The algorithm is compact, fast, and dimension free, and it requires no optimization process. It either computes the desired transform for noiseless data in linear time, or fails quickly in exceptional cases. Experimental results for synthetic data and 2D/3D real data are provided, which demonstrate potential applications of the algorithm to a wide range of problems.
Keywords: Covariant point, point matching, dimension free, rigid registration.
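Once correspondences are available, the rigid transform can be recovered in closed form. The sketch below shows the standard SVD-based (Kabsch/Procrustes) recovery step in arbitrary dimensions; it is a generic substitute for, not a reproduction of, the paper's symmetric-covariant-function machinery.

```python
import numpy as np

def rigid_from_correspondences(P, Q):
    """Least-squares rotation R and translation t such that Q ≈ P @ R.T + t (any dimension)."""
    cp, cq = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cp).T @ (Q - cq)                       # cross-covariance of centred points
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))
    D = np.diag([1.0] * (P.shape[1] - 1) + [d])     # guard against reflections
    R = Vt.T @ D @ U.T
    return R, cq - R @ cp
```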
575 Target Detection using Adaptive Progressive Thresholding Based Shifted Phase-Encoded Fringe-Adjusted Joint Transform Correlator
Authors: Inder K. Purohit, M. Nazrul Islam, K. Vijayan Asari, Mohammad A. Karim
Abstract:
A new target detection technique is presented in this paper for the identification of small boats in coastal surveillance. The proposed technique employs an adaptive progressive thresholding (APT) scheme to first process the given input scene and separate any objects present from the background. The preprocessing step yields an image containing only foreground objects, such as boats, trees and other cluttered regions, and hence significantly reduces the search region for the correlation step. The processed image is then fed to the shifted phase-encoded fringe-adjusted joint transform correlator (SPFJTC), which produces a single, delta-like correlation peak for a potential target present in the input scene. A post-processing step uses a peak-to-clutter ratio (PCR) to determine whether a boat in the input scene is authorized or unauthorized. Simulation results show that the proposed technique can successfully determine the presence of an authorized boat and identify any intruding boat in the given input scene.
Keywords: Adaptive progressive thresholding, fringe adjusted filters, image segmentation, joint transform correlation, synthetic discriminant function.
574 Combination of Different Classifiers for Cardiac Arrhythmia Recognition
Authors: M. R. Homaeinezhad, E. Tavakkoli, M. Habibi, S. A. Atyabi, A. Ghaffari
Abstract:
This paper describes a new supervised fusion (hybrid) electrocardiogram (ECG) classification solution consisting of a new QRS complex geometrical feature extraction method and a new version of the learning vector quantization (LVQ) classification algorithm aimed at overcoming the stability-plasticity dilemma. Toward this objective, after detection and delineation of the major events of the ECG signal via an appropriate algorithm, each QRS region and its corresponding discrete wavelet transform (DWT) are treated as virtual images, and each is divided into eight polar sectors. The curve length of each excerpted segment is then calculated and used as an element of the feature space. To increase the robustness of the proposed classification algorithm against noise, artifacts and arrhythmic outliers, a fusion structure consisting of five different classifiers, namely a Support Vector Machine (SVM), a Modified Learning Vector Quantization (MLVQ) classifier and three Multi-Layer Perceptron-Back Propagation (MLP-BP) neural networks with different topologies, was designed and implemented. The proposed algorithm was applied to all 48 MIT-BIH Arrhythmia Database records (within-record analysis), the discrimination power of the classifier on the beat types of each record was assessed, and an average accuracy of Acc = 98.51% was obtained. The proposed method was also applied to six arrhythmia classes (Normal, LBBB, RBBB, PVC, APB, PB) belonging to 20 different records of the aforementioned database (between-record analysis), and an average accuracy of Acc = 95.6% was achieved. To evaluate the quality of the new hybrid learning machine, the obtained results were compared with similar peer-reviewed studies in this area.
Keywords: Feature extraction, curve length method, Support Vector Machine, Learning Vector Quantization, Multi-Layer Perceptron, fusion (hybrid) classification, arrhythmia classification, supervised learning machine.
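The curve length feature mentioned above is simple to state. The snippet below is a generic, hedged illustration of the curve length method applied to one excerpted segment; the polar sectoring of the QRS "virtual image" is not reproduced.

```python
import numpy as np

def curve_length(segment, dx=1.0):
    """Length of the polyline through the samples of one excerpted segment."""
    seg = np.asarray(segment, dtype=float)
    return float(np.sum(np.sqrt(dx ** 2 + np.diff(seg) ** 2)))

# each polar sector would contribute one such value to the feature vector
```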
573 Object Recognition Approach Based on Generalized Hough Transform and Color Distribution Serving in Generating Arabic Sentences
Authors: Nada Farhani, Naim Terbeh, Mounir Zrigui
Abstract:
Recognizing the objects contained in images has always been a research challenge because of difficulties such as the variability in shape, position and contrast of objects. This paper addresses object recognition. The classical Hough Transform (HT) is a tool for detecting straight line segments in images; it has been generalized (GHT) to detect arbitrary forms. With the GHT, the forms sought are not necessarily defined analytically but rather by a particular silhouette. For more precision, we propose to combine the results of the GHT with a similarity measure between the histograms and the spatiograms of the images. The main purpose of our work is to use the recognition results to generate Arabic sentences that summarize the content of the image.
Keywords: Shape recognition, generalized Hough transform, histogram, spatiogram, learning.
572 Binary Phase-Only Filter Watermarking with Quantized Embedding
Authors: Hu Haibo, Liu Yi, He Ming
Abstract:
Binary phase-only filter digital watermarking embeds the phase information of the discrete Fourier transform of an image into the corresponding magnitudes for better image authentication. This paper proposes an approach that implements watermark embedding by quantizing the magnitude, discusses how to regulate the quantization steps based on the frequencies of the magnitude coefficients carrying the embedded watermark, and shows how to embed the watermark with low-frequency quantization. Theoretical analysis and simulation results show that the flexibility, security, watermark imperceptibility and detection performance of binary phase-only filter digital watermarking can be effectively improved with quantization-based watermark embedding, and that robustness against JPEG compression is also increased to some extent.
Keywords: Binary phase-only filter, discrete Fourier transform, digital watermarking, image authentication, quantization.
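A hedged sketch of quantization-based embedding in a DFT magnitude follows: one bit is carried by the parity of the quantization index. This is the generic quantization-index idea rather than the paper's specific step-regulation scheme; the step size would be chosen per frequency as the abstract describes.

```python
import numpy as np

def embed_bit(magnitude, bit, step):
    """Quantise a DFT magnitude so that the parity of its quantisation index carries one bit."""
    q = int(np.round(magnitude / step))
    if q % 2 != bit:
        q += 1 if magnitude >= q * step else -1   # move to the nearest index of correct parity
    return max(q, 0) * step

def extract_bit(magnitude, step):
    return int(np.round(magnitude / step)) % 2
```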
571 A Signal Driven Adaptive Resolution Short-Time Fourier Transform
Authors: Saeed Mian Qaisar, Laurent Fesquet, Marc Renaudin
Abstract:
The frequency content of non-stationary signals varies with time, so a smart time-frequency representation is necessary for their proper characterization. Classically, the STFT (short-time Fourier transform) is employed for this purpose, but its limitation is a fixed time-frequency resolution. To overcome this drawback an enhanced STFT version is devised. It is based on a signal-driven sampling scheme, namely level-crossing sampling, and it adapts the sampling frequency and the window function (length plus shape) by following the local variations of the input signal. This adaptation gives the proposed technique its appealing features: adaptive time-frequency resolution and computational efficiency.
Keywords: Level crossing sampling, activity selection, adaptive resolution analysis, computational complexity.
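A small sketch of the signal-driven (level-crossing) sampling idea that underlies the adaptive STFT: samples are retained only where the signal crosses one of a set of reference levels, so quiet segments yield few samples and active segments yield many. This is a generic illustration, not the authors' implementation.

```python
import numpy as np

def level_crossing_sample(x, levels):
    """Keep only the sample indices where the signal crosses one of the reference levels."""
    x = np.asarray(x, dtype=float)
    idx = [n for n in range(1, len(x))
           if any((x[n - 1] - L) * (x[n] - L) < 0 for L in levels)]   # sign change = crossing
    idx = np.array(idx, dtype=int)
    return idx, x[idx]
```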
570 Computation of Probability Coefficients using Binary Decision Diagram and their Application in Test Vector Generation
Authors: Ashutosh Kumar Singh, Anand Mohan
Abstract:
This paper deals with efficient computation of probability coefficients, which offer computational simplicity compared to spectral coefficients and eliminate the need for inner product evaluations when determining the signature of a combinational circuit realizing a given Boolean function. Methods for computing probability coefficients using a transform matrix, a fast transform, and BDDs are given. Theoretical relations for the achievable computational advantage, in terms of the additions required to compute all 2^n probability coefficients of an n-variable function, have been developed. It is shown that for n ≥ 5, only 50% of the additions are needed to compute all probability coefficients compared to spectral coefficients. Fault detection techniques based on the spectral signature can also be used with the probability signature, offering a computational advantage.
Keywords: Binary Decision Diagrams, spectral coefficients, fault detection.
569 Scintigraphic Image Coding of Region of Interest Based On SPIHT Algorithm Using Global Thresholding and Huffman Coding
Authors: A. Seddiki, M. Djebbouri, D. Guerchi
Abstract:
Medical imaging produces pictures of the human body in digital form. Since these imaging techniques produce prohibitive amounts of data, compression is necessary for storage and communication purposes. Many current compression schemes provide a very high compression rate but with considerable loss of quality. On the other hand, in some areas of medicine it may be sufficient to maintain high image quality only in the region of interest (ROI). This paper discusses a contribution to lossless compression in the region of interest of scintigraphic images based on the SPIHT algorithm and global transform thresholding with Huffman coding.
Keywords: Global Thresholding Transform, Huffman Coding, Region of Interest, SPIHT Coding, Scintigraphic images.
568 Data Preprocessing for Supervised Learning
Authors: S. B. Kotsiantis, D. Kanellopoulos, P. E. Pintelas
Abstract:
Many factors affect the success of Machine Learning (ML) on a given task. The representation and quality of the instance data is first and foremost: if there is much irrelevant and redundant information present, or noisy and unreliable data, then knowledge discovery during the training phase is more difficult. It is well known that data preparation and filtering steps take a considerable amount of processing time in ML problems. Data pre-processing includes data cleaning, normalization, transformation, feature extraction and selection, etc., and its product is the final training set. It would be convenient if a single sequence of data pre-processing algorithms gave the best performance for every data set, but this is not the case. We therefore present the most well-known algorithms for each step of data pre-processing so that practitioners can achieve the best performance for their data set.
Keywords: Data mining, feature selection, data cleaning.
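The pre-processing steps listed above (cleaning, normalization, feature selection) map naturally onto a pipeline. The scikit-learn sketch below is one hypothetical sequence, not a recommendation from the paper; as the abstract notes, the best choice of each step depends on the data set.

```python
from sklearn.pipeline import Pipeline
from sklearn.impute import SimpleImputer
from sklearn.preprocessing import StandardScaler
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.tree import DecisionTreeClassifier

preprocess_and_learn = Pipeline([
    ("clean", SimpleImputer(strategy="median")),    # data cleaning: fill missing values
    ("normalize", StandardScaler()),                # normalization / transformation
    ("select", SelectKBest(f_classif, k=10)),       # feature selection
    ("learner", DecisionTreeClassifier()),          # the supervised learner
])
# preprocess_and_learn.fit(X_train, y_train) builds the final training set internally
```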
567 Extracting Tongue Shape Dynamics from Magnetic Resonance Image Sequences
Authors: María S. Avila-García, John N. Carter, Robert I. Damper
Abstract:
An important problem in speech research is the automatic extraction of information about the shape and dimensions of the vocal tract during real-time speech production. We have previously developed Southampton dynamic magnetic resonance imaging (SDMRI) as an approach to this problem. However, the SDMRI images are very noisy, so shape extraction is a major challenge. In this paper, we address the problem of tongue shape extraction, which is difficult because the tongue is a highly deforming, non-parametric shape. We show that combining active shape models with the dynamic Hough transform allows the tongue shape to be reliably tracked in the image sequence.
Keywords: Vocal tract imaging, speech production, active shape models, dynamic Hough transform, object tracking.
566 Development of a Basic Robot System for Medical and Nursing Care for Patients with Glaucoma
Authors: Naoto Suzuki
Abstract:
Medical methods to completely treat glaucoma are yet to be developed, so ophthalmologists manage patients mainly to delay disease progression. Patients with glaucoma are mainly elderly individuals. In elderly people's homes, equipment that can provide medical treatment and care can relieve their families of part of the care burden. To help elderly people with glaucoma live by themselves as much as possible, we developed a support robot with five functions: elderly care, ophthalmological examination, trip assistance in the neighborhood, medical treatment, and data referral to a hospital. The medical and nursing care robot should approach patients from within the visual field they can see, at a speed suited to their eyesight, because the robot would be dangerous if it approached from a direction they cannot see. We experimentally developed a robot that brings a white cane to elderly people with glaucoma. The base of the robot is a carriage (a Megarover 1.1) with two infrared sensors; the robot moves along a white line on the floor using the infrared sensors and has a special arm, which does not use electricity, that can scoop up the block attached to the white cane. Next, we developed a direction detector comprising a charge-coupled device camera (SVR41ResucueHD; Sun Mechatronics), goggles (MG-277MLF; Midori Anzen Co. Ltd.), and biconvex lenses with a focal length of 25 mm (Edmund Co.). Some young people were photographed wearing the direction detector. Image processing was performed using Scilab 6.1.0 and the Image Processing and Computer Vision Toolbox 4.1.2. To measure the line of vision, we calculated the center of gravity of the iris using five processes: reduction, trimming, binarization or gray scale, edge extraction, and the Hough transform. We compared the binarization and gray scale processes, and binarization was the better of the two. For edge extraction, we compared five methods: Sobel, Prewitt, Laplacian of Gaussian, fast Fourier transform, and Canny; the Canny method was optimal. We then performed the Hough transform to search for the main coordinates on the iris's edge and found that it could calculate the center point of the iris.
Keywords: Glaucoma, support robot, elderly people, Hough transform, direction detector, line of vision.
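An OpenCV sketch of the iris-centre pipeline described above (reduction, binarization, Canny edge extraction, circular Hough transform; the trimming step is omitted). It is a hedged approximation using standard OpenCV calls, not the Scilab code the authors used, and the thresholds and radii are purely illustrative.

```python
import cv2

def iris_center(gray):
    """Estimate the iris centre from an 8-bit grayscale eye image."""
    small = cv2.resize(gray, None, fx=0.5, fy=0.5)                      # reduction
    blur = cv2.GaussianBlur(small, (5, 5), 0)
    _, bw = cv2.threshold(blur, 0, 255,
                          cv2.THRESH_BINARY + cv2.THRESH_OTSU)          # binarization
    edges = cv2.Canny(bw, 50, 150)                                      # edge extraction
    circles = cv2.HoughCircles(edges, cv2.HOUGH_GRADIENT, dp=1, minDist=50,
                               param1=100, param2=15,
                               minRadius=10, maxRadius=80)              # Hough transform
    if circles is None:
        return None
    x, y, _ = circles[0][0]
    return 2.0 * x, 2.0 * y                                             # undo the reduction
```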
565 An Approach Based on Statistics and Multi-Resolution Representation to Classify Mammograms
Authors: Nebi Gedik
Abstract:
Breast cancer is one of the significant and persistent public health problems in the world. Early detection is very important to fight the disease, and mammography has been one of the most common and reliable methods to detect it in the early stages. However, reading mammograms is a difficult task, and computer-aided diagnosis (CAD) systems are needed to assist radiologists in providing an accurate and uniform evaluation of masses in mammograms. In this study, a multi-resolution statistical method that classifies digitized mammograms as normal or abnormal is used to construct a CAD system. The mammogram images are represented by the wave atom transform, and this representation is organized into certain groups of coefficients that are handled independently. The CAD system is designed by calculating statistical features from each group of coefficients, and classification is performed using a support vector machine (SVM).
Keywords: Wave atom transform, statistical features, multi-resolution representation, mammogram.
564 Audio Watermarking Using Spectral Modifications
Authors: Jyotsna Singh, Parul Garg, Alok Nath De
Abstract:
In this paper, we present a non-blind technique for adding a watermark to the Fourier spectral components of an audio signal such that the modified amplitude does not exceed the maximum amplitude spread (MAS). The MAS of the individual discrete Fourier transform (DFT) coefficients in a particular frame is derived from the energy spreading function given by Schroeder. Using this technique one can store double the information within a given frame length, i.e., overlay the watermark on a host of equal length with the least perceptual distortion. The watermark floats uniformly on the DFT components of the original signal, which helps in detecting any intentional manipulation of the watermarked audio. The scheme is also found to be robust to various signal processing attacks such as the presence of multiple watermarks, additive white Gaussian noise (AWGN) and MP3 compression.
Keywords: Discrete Fourier transform, spreading function, watermark, pseudo noise sequence, spectral masking effect.
563 Transform to Succeed: An Empirical Analysis of Digital Transformation in Firms
Authors: Sarah E. Stief, Anne Theresa Eidhoff, Markus Voeth
Abstract:
Despite all progress, firms face an increasing need to adapt and assimilate digital technologies in order to transform their business activities and pursue business development. By using new digital technologies, firms can implement major business improvements to stay competitive and foster new growth potential. The corresponding phenomenon of digital transformation has received some attention in previous literature with respect to industries such as media and publishing. Nevertheless, there is a lack of understanding of the concept and of how it is organized within firms. With the help of twenty-three in-depth field interviews with German experts responsible for their company's digital transformation, we examined what digital transformation encompasses, how it is organized, and which opportunities and challenges arise within firms. Our results indicate that digital transformation is an inevitable task for all firms, as it bears the potential to comprehensively optimize and reshape established business activities and can thus be seen as a strategy of business development.
Keywords: Business development, digitalization, digital strategies, digital transformation.
562 Statistical Texture Analysis
Authors: G. N. Srinivasan, G. Shobha
Abstract:
This paper presents an overview of the methodologies and algorithms for statistical texture analysis of 2D images. Methods for digital-image texture analysis are reviewed based on the available literature and on research work either carried out or supervised by the authors.
Keywords: Image texture, texture analysis, statistical approaches, structural approaches, spectral approaches, morphological approaches, fractals, Fourier transforms, Gabor filters, wavelet transforms.
561 Comparison of Fricative Vocal Tract Transfer Functions Derived using Two Different Segmentation Techniques
Authors: K. S. Subari, C. H. Shadle, A. Barney, R. I. Damper
Abstract:
The acoustic and articulatory properties of fricative speech sounds are being studied using magnetic resonance imaging (MRI) and acoustic recordings from a single subject. Area functions were derived from a complete set of axial and coronal MR slices using two different methods: the Mermelstein technique and the Blum transform. Area functions derived from the two techniques were shown to differ significantly in some cases. Such differences will lead to different acoustic predictions and it is important to know which is the more accurate. The vocal tract acoustic transfer function (VTTF) was derived from these area functions for each fricative and compared with measured speech signals for the same fricative and same subject. The VTTFs for /f/ in two vowel contexts and the corresponding acoustic spectra are derived here; the Blum transform appears to show a better match between prediction and measurement than the Mermelstein technique.
Keywords: Area functions, fricatives, vocal tract transfer function, MRI, speech.
560 A Robust Redundant Residue Representation in Residue Number System with Moduli Set (r^n-2, r^n-1, r^n)
Authors: Hossein Khademolhosseini, Mehdi Hosseinzadeh
Abstract:
The residue number system (RNS), due to its properties, is used in applications in which high-performance computation is needed. Its carry-free nature, which makes the arithmetic carry-bounded, as well as its facility for parallelism, is the reason for its high-speed capability. Since carries are not propagated between the moduli, performance is restricted only by the speed of the operations in each modulus. In this paper a novel method of number representation using redundancy is suggested, in which {r^n-2, r^n-1, r^n} is the reference moduli set, where r = 2k+1 and k = 1, 2, 3, ... This method achieves fast computations and conversions and makes the corresponding circuits much simpler.
Keywords: Binary to RNS converter, carry save adder, computer arithmetic, residue number system.
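For readers unfamiliar with RNS arithmetic, the sketch below converts a number to its residues for a moduli set of the form {r^n-2, r^n-1, r^n} and reconstructs it with the Chinese remainder theorem. It illustrates plain (non-redundant) RNS only, not the redundant representation proposed in the paper.

```python
def to_rns(x, moduli):
    """Residue representation of x for the given moduli set."""
    return [x % m for m in moduli]

def from_rns(residues, moduli):
    """Chinese-remainder reconstruction (moduli assumed pairwise coprime)."""
    M = 1
    for m in moduli:
        M *= m
    x = 0
    for r, m in zip(residues, moduli):
        Mi = M // m
        x += r * Mi * pow(Mi, -1, m)    # modular inverse of Mi mod m
    return x % M

# example with r = 3, n = 2  ->  moduli {r^n-2, r^n-1, r^n} = {7, 8, 9}
moduli = [7, 8, 9]
print(to_rns(100, moduli))          # [2, 4, 1]
print(from_rns([2, 4, 1], moduli))  # 100
```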
559 Efficient Spectral Analysis of Quasi Stationary Time Series
Authors: Khalid M. Aamir, Mohammad A. Maud
Abstract:
The power spectral density (PSD) of quasi-stationary processes can be efficiently estimated using the short-time Fourier transform (STFT). In this paper, an algorithm is proposed that computes the PSD of a quasi-stationary process efficiently using an offline autoregressive model order estimation algorithm, a recursive parameter estimation technique, and a modified sliding window discrete Fourier transform algorithm. The main difference between this algorithm and the STFT is that the sliding window (SW) and the window for spectral estimation (WSA) are defined separately. The WSA is updated, and its PSD computed, only when a change in statistics is detected in the SW. The computational complexity of the proposed algorithm is found to be lower than that of the standard STFT technique.
Keywords: Power spectral density (PSD), quasi-stationary time series, short-time Fourier transform, sliding window DFT.
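The modified sliding-window DFT mentioned above builds on the standard recursive sliding DFT update, X_k(n+1) = (X_k(n) - x_out + x_in)·e^{j2πk/N}. A minimal sketch of that recursion follows; the paper's modification of decoupling the SW from the WSA is not reproduced.

```python
import numpy as np

def sliding_dft_update(X, x_in, x_out, N):
    """One-step recursive update of an N-point DFT when x_in enters and x_out leaves the window."""
    k = np.arange(N)
    return (X - x_out + x_in) * np.exp(2j * np.pi * k / N)

# check against a direct DFT
x = np.random.randn(16)
N = 8
X = np.fft.fft(x[:N])
X = sliding_dft_update(X, x_in=x[N], x_out=x[0], N=N)
assert np.allclose(X, np.fft.fft(x[1:N + 1]))
```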
558 Enhancement of Low Contrast Satellite Images using Discrete Cosine Transform and Singular Value Decomposition
Authors: A. K. Bhandari, A. Kumar, P. K. Padhy
Abstract:
In this paper, a novel technique for contrast enhancement of low-contrast satellite images is proposed, based on singular value decomposition (SVD) and the discrete cosine transform (DCT). The singular value matrix represents the intensity information of the given image, and any change in the singular values changes the intensity of the input image. The proposed technique converts the image into the SVD-DCT domain and, after normalizing the singular value matrix, reconstructs the enhanced image using the inverse DCT. The visual and quantitative results suggest that the proposed SVD-DCT method is clearly more efficient and flexible than existing methods such as linear contrast stretching, GHE, DWT-SVD, DWT, decorrelation stretching, and gamma-correction-based techniques.
Keywords: Singular value decomposition (SVD), discrete cosine transform (DCT), image equalization, satellite image contrast enhancement.
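A hedged numpy/scipy sketch of the SVD-DCT idea: the DCT of the input image has its singular values scaled by a correction factor derived from a better-contrasted reference (commonly a histogram-equalised copy of the input), and the inverse DCT gives the enhanced image. The exact normalisation details in the paper may differ.

```python
import numpy as np
from scipy.fft import dctn, idctn

def svd_dct_enhance(image, reference):
    """Scale the singular values of the image's DCT by a factor taken from a reference,
    then reconstruct with the inverse DCT."""
    D_in = dctn(image.astype(float), norm="ortho")
    D_ref = dctn(reference.astype(float), norm="ortho")
    U, s, Vt = np.linalg.svd(D_in, full_matrices=False)
    s_ref = np.linalg.svd(D_ref, compute_uv=False)
    xi = s_ref.max() / s.max()                       # singular-value correction factor
    enhanced = idctn(U @ np.diag(xi * s) @ Vt, norm="ortho")
    return np.clip(enhanced, 0, 255)
```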
557 Modeling and Simulation of Robotic Arm Movement using Soft Computing
Authors: V. K. Banga, Jasjit Kaur, R. Kumar, Y. Singh
Abstract:
In this research paper we present a control architecture for robotic arm movement and trajectory planning using Fuzzy Logic (FL) and Genetic Algorithms (GAs). The architecture is used to compensate for uncertainties such as movement, friction and settling time in robotic arm motion. Genetic algorithms and fuzzy logic are used to meet the objective of optimal control of the robotic arm's movement. The proposed technique represents a general model for redundant structures and may be extended to other structures. Results show optimal angular movement of the joints as a result of the evolutionary process. This technique has an edge over other techniques because of its minimal mathematical complexity.
Keywords: Kinematics, genetic algorithms (GAs), fuzzy logic (FL), optimal control.
556 A Fast HRRP Synthesis Algorithm with Sensing Dictionary in GTD Model
Authors: R. Fan, Q. Wan, H. Chen, Y.L. Liu, Y.P. Liu
Abstract:
In this paper, a fast high-resolution range profile (HRRP) synthesis algorithm called orthogonal matching pursuit with sensing dictionary (OMP-SD) is proposed. It formulates traditional HRRP synthesis as a sparse approximation problem over a redundant dictionary. Because it employs the a priori knowledge that the synthetic range profile (SRP) of a target is sparse, the SRP can be recovered even in the presence of data loss. Moreover, by introducing a sensing dictionary (SD), the computational complexity decreases from O(MNDK) flops for OMP to O(M(N + D)K) flops for OMP-SD. Simulation experiments illustrate its advantages in both additive white Gaussian noise (AWGN) and noiseless situations.
Keywords: GTD-based model, HRRP, orthogonal matching pursuit, sensing dictionary.
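For context, the snippet below is a plain orthogonal matching pursuit over a redundant dictionary. The paper's OMP-SD variant replaces the atom-selection (correlation) step with a separate sensing dictionary to reduce the cost from O(MNDK) to O(M(N+D)K), which this generic sketch does not do.

```python
import numpy as np

def omp(D, y, k):
    """Greedy sparse approximation: pick k atoms (columns of dictionary D) to explain y."""
    residual, support = y.astype(float).copy(), []
    coeffs = np.zeros(0)
    for _ in range(k):
        support.append(int(np.argmax(np.abs(D.T @ residual))))   # most correlated atom
        A = D[:, support]
        coeffs, *_ = np.linalg.lstsq(A, y, rcond=None)            # refit on the support
        residual = y - A @ coeffs
    x = np.zeros(D.shape[1])
    x[support] = coeffs
    return x
```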
555 The Mechanism Study of Degradative Solvent Extraction of Biomass by Liquid Membrane-Fourier Transform Infrared Spectroscopy
Authors: W. Ketren, J. Wannapeera, Z. Heishun, A. Ryuichi, K. Toshiteru, M. Kouichi, O. Hideaki
Abstract:
Degradative solvent extraction is a method developed for biomass upgrading by dewatering and fractionation of biomass under mild conditions. However, the conversion mechanism of the degradative solvent extraction method has not been fully understood so far. Rice straw was treated in 1-methylnaphthalene (1-MN) at solvent-treatment temperatures varying from 250 to 350 °C with a residence time of 60 min. The liquid membrane-Fourier Transform Infrared Spectroscopy (FTIR) technique was applied to study the processing mechanism in depth without separating the solvent. It was found that the strength of the oxygen-hydrogen stretching band (3600-3100 cm⁻¹) decreased slightly with increasing temperature in the range of 300-350 °C. The decrease of the hydroxyl group in the solvent-soluble fraction suggests a dehydration reaction taking place between 300 and 350 °C. FTIR spectra in the carbonyl stretching region (1800-1600 cm⁻¹) revealed the presence of ester groups, carboxylic acid groups and ketonic groups in the solvent-soluble fraction of the biomass. The carboxylic acid content increased in the range of 200 to 250 °C and then decreased. The prevalence of aromatic groups showed that aromatization took place during extraction above 250 °C. From 300 to 350 °C, the carbonyl functional groups in the solvent-soluble fraction noticeably decreased. The removal of carboxylic acid and the conversion of esters into carbon dioxide indicate that a decarboxylation reaction occurred during the extraction process.
Keywords: Biomass upgrading, liquid membrane-Fourier transform infrared spectroscopy, FTIR, degradative solvent extraction, mechanism.
554 Synchrotron X-ray based Investigation of Fe and Zn Atoms in Tissue Samples at Different Growth Stages
Authors: Sunil Dehipawala, Todd Holden, E. Cheung, Robert Regan, P. Schneider, G. Tremberger Jr, D. Lieberman, T. Cheung
Abstract:
The zinc and iron environments at different growth stages have been studied with EXAFS and XANES at the Brookhaven Synchrotron Light Source. Tissue samples included meat, organ, vegetable, leaf, and yeast. The project studied the EXAFS and XANES of tissue samples using the Zn and Fe K-edges. Duck embryo samples show that brain and intestine contain shorter EXAFS-determined Zn-N/O bonds, as do fresh yeast compared with reconstituted live yeast and green leaf compared with yellow leaf. The XANES Fourier transform characteristic length would be useful as a functionality index for selected types of tissue samples in various physical states. The extension to the development of functional synchrotron imaging for tissue engineering applications based on spectroscopic techniques is discussed.
Keywords: EXAFS, Fourier transform, metalloproteins, XANES.
553 Frequency Transformation with Pascal Matrix Equations
Authors: Phuoc Si Nguyen
Abstract:
Frequency transformation with Pascal matrix equations is a method for transforming an electronic filter (analogue or digital) into another filter. The technique is based on frequency transformation in the s-domain, the bilinear z-transform with frequency pre-warping, the inverse bilinear transformation, and a very useful application of Pascal's triangle that simplifies the computation and enables calculation by hand when transforming from one filter to another. This paper introduces two methods to transform a filter into a digital filter: frequency transformation from the s-domain into the z-domain, and frequency transformation within the z-domain. Further, two Pascal matrix equations are derived: an analogue-to-digital filter Pascal matrix equation and a digital-to-digital filter Pascal matrix equation. These are used to design a desired digital filter from a given filter.
Keywords: Frequency transformation, bilinear z-transformation, pre-warping frequency, digital filters, analog filters, Pascal's triangle.
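A brief scipy illustration of the bilinear z-transform with frequency pre-warping that the Pascal-matrix formulation packages up: a first-order analogue low-pass is mapped to a digital filter whose cut-off lands at the intended frequency. The Pascal-matrix bookkeeping itself is not reproduced here, and the filter and frequencies are illustrative.

```python
import numpy as np
from scipy import signal

fc, fs = 1000.0, 8000.0                        # desired cut-off and sampling rate (Hz)
wc = 2 * fs * np.tan(np.pi * fc / fs)          # pre-warped analogue cut-off (rad/s)
b_s, a_s = [wc], [1.0, wc]                     # analogue prototype H(s) = wc / (s + wc)
b_z, a_z = signal.bilinear(b_s, a_s, fs=fs)    # bilinear s -> z mapping

w, h = signal.freqz(b_z, a_z, fs=fs)
# |H| at fc is ~0.707 (-3 dB), confirming the pre-warping placed the cut-off correctly
```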