Search results for: noise reduction
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 2333


1973 Wavelet-Based ECG Signal Analysis and Classification

Authors: Madina Hamiane, May Hashim Ali

Abstract:

This paper presents the processing and analysis of ECG signals. The study is based on the wavelet transform and is carried out entirely in the MATLAB environment. It includes baseline wander removal and further de-noising through the wavelet transform, and metrics such as the signal-to-noise ratio (SNR), peak signal-to-noise ratio (PSNR) and mean squared error (MSE) are used to assess the efficiency of the de-noising techniques. Feature extraction is subsequently performed, whereby signal features such as heart rate and rise and fall levels are extracted, and the QRS complex is detected, which helps in classifying the ECG signal. Classification is the last step of the analysis, and the signals are successfully classified as normal rhythm or abnormal rhythm. The final results demonstrate the adequacy of the wavelet transform for the analysis of ECG signals.
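
A minimal sketch of the wavelet-based de-noising and quality metrics described above, using Python with NumPy and PyWavelets as stand-ins for the MATLAB environment; the synthetic ECG-like signal, the 'db4' wavelet and the decomposition depth are illustrative assumptions, not the paper's exact settings.

```python
import numpy as np
import pywt

def denoise_wavelet(x, wavelet="db4", level=6):
    """Remove baseline wander and noise with a discrete wavelet transform."""
    coeffs = pywt.wavedec(x, wavelet, level=level)
    coeffs[0] = np.zeros_like(coeffs[0])             # drop approximation -> baseline wander
    sigma = np.median(np.abs(coeffs[-1])) / 0.6745   # noise estimate from finest details
    thr = sigma * np.sqrt(2.0 * np.log(len(x)))      # universal threshold
    coeffs[1:] = [pywt.threshold(c, thr, mode="soft") for c in coeffs[1:]]
    return pywt.waverec(coeffs, wavelet)[: len(x)]

def snr(clean, denoised):
    return 10 * np.log10(np.sum(clean**2) / np.sum((clean - denoised) ** 2))

def psnr(clean, denoised):
    mse = np.mean((clean - denoised) ** 2)
    return 10 * np.log10(np.max(np.abs(clean)) ** 2 / mse)

# Illustrative signal: narrow peaks as a crude QRS stand-in, plus drift and Gaussian noise.
t = np.linspace(0, 10, 5000)
clean = np.sin(2 * np.pi * 1.2 * t) ** 64
noisy = clean + 0.3 * np.sin(2 * np.pi * 0.05 * t) + 0.05 * np.random.randn(t.size)
rec = denoise_wavelet(noisy)
print(f"SNR={snr(clean, rec):.1f} dB  PSNR={psnr(clean, rec):.1f} dB  "
      f"MSE={np.mean((clean - rec) ** 2):.2e}")
```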

Keywords: ECG Signal, QRS detection, thresholding, wavelet decomposition, feature extraction.

1972 Invariant Characters of Tolerance Class and Reduction under Homomorphism in IIS

Authors: Chen Wu, Lijuan Wang

Abstract:

Some invariant properties of incomplete information system homomorphisms are studied in this paper. The conditions under which tolerance classes, attribute reductions, indispensable attributes and dispensable attributes remain invariant under homomorphism in an incomplete information system are revealed and discussed. The existence condition of an endohomomorphism on an incomplete information system is also explored. This establishes some theoretical foundations for further investigations of incomplete information systems in rough set theory, as has been done for complete information systems.
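
The tolerance relation underlying these results is easy to compute directly. The sketch below uses a toy example with hypothetical objects and attributes (None marks a missing value) and builds the tolerance class of each object: two objects are tolerant when they agree on every attribute for which both values are known.

```python
# Toy incomplete information system: rows are objects, columns are attributes,
# None denotes an unknown (missing) attribute value.
U = {
    "x1": (1, 2, None),
    "x2": (1, None, 3),
    "x3": (0, 2, 3),
    "x4": (1, 2, 3),
}

def tolerant(u, v):
    """Objects are tolerant iff they agree wherever both values are known."""
    return all(a is None or b is None or a == b for a, b in zip(u, v))

def tolerance_classes(system):
    return {x: {y for y, vy in system.items() if tolerant(vx, vy)}
            for x, vx in system.items()}

for obj, cls in tolerance_classes(U).items():
    print(obj, "->", sorted(cls))
# x1 -> ['x1', 'x2', 'x4']   (x1 and x3 disagree on the first attribute)
```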

Keywords: Attribute reduction, homomorphism, incomplete information system, rough set, tolerance relation.

1971 Fractional Masks Based On Generalized Fractional Differential Operator for Image Denoising

Authors: Hamid A. Jalab, Rabha W. Ibrahim

Abstract:

This paper introduces an image denoising algorithm based on the generalized Srivastava-Owa fractional differential operator for removing Gaussian noise from digital images. The algorithm constructs n×n fractional masks. Experiments show that the fractional differential-based denoising approach efficiently smooths Gaussian-corrupted images at different noise levels. The denoising performance is measured using the peak signal-to-noise ratio (PSNR) of the denoised images. The results show improved performance (higher PSNR values) compared with the standard Gaussian smoothing filter.

Keywords: Fractional calculus, fractional differential operator, fractional mask, fractional filter.

1970 Technical Support of Intracranial Single Unit Activity Measurement

Authors: Richard Grünes, Karel Roubik

Abstract:

The article deals with the technical support of intracranial single-unit activity measurement. The parameters of the whole measuring set were tested in order to ensure optimal conditions for extracellular single-unit recording. Metal microelectrodes for single-unit measurement were tested during animal experiments. From the signals recorded during these experiments, requirements for the measuring set parameters were defined. The impedance parameters of the metal microelectrodes were measured, and the frequency-gain and intrinsic noise properties of the preamplifier and amplifier were verified. The measurement and description of extracellular single-unit activity could help in the prognosis of brain tissue damage recovery.

Keywords: Measuring set, metal microelectrodes, single-unit, noise, impedance parameters, gain characteristics.

1969 Fuzzy Population-Based Meta-Heuristic Approaches for Attribute Reduction in Rough Set Theory

Authors: Mafarja Majdi, Salwani Abdullah, Najmeh S. Jaddi

Abstract:

Feature selection is one of the global combinatorial optimization problems in machine learning. It is concerned with removing irrelevant, noisy, and redundant data while preserving the meaning of the original data. Attribute reduction in rough set theory is an important feature selection method. Since attribute reduction is an NP-hard problem, it is necessary to investigate fast and effective approximate algorithms. In this paper, we propose two feature selection mechanisms based on memetic algorithms (MAs), which combine a genetic algorithm with a fuzzy record-to-record travel algorithm and a fuzzy-controlled great deluge algorithm, respectively, to identify a good balance between local search and genetic search. In order to verify the proposed approaches, numerical experiments are carried out on thirteen datasets. The results show that the MA approaches are efficient in solving attribute reduction problems when compared with other meta-heuristic approaches.
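
A minimal sketch of the great deluge component applied to attribute reduction, assuming a rough-set dependency degree as the subset quality measure; the fuzzy control of the water level and the genetic/memetic layer described in the paper are omitted, and the toy decision table is hypothetical.

```python
import random

# Toy decision table: each row is (condition attribute values, decision).
TABLE = [((1, 0, 1, 0), 0), ((1, 0, 1, 1), 0), ((0, 1, 1, 0), 1),
         ((0, 1, 0, 0), 1), ((1, 1, 0, 1), 0), ((0, 0, 0, 1), 1)]
N_ATTR = 4

def dependency(subset):
    """Rough-set dependency degree: fraction of objects in the positive region."""
    if not subset:
        return 0.0
    blocks = {}
    for cond, dec in TABLE:
        blocks.setdefault(tuple(cond[i] for i in subset), set()).add(dec)
    consistent = {k for k, decs in blocks.items() if len(decs) == 1}
    pos = sum(1 for cond, _ in TABLE
              if tuple(cond[i] for i in subset) in consistent)
    return pos / len(TABLE)

def fitness(subset):
    # Reward high dependency, lightly penalise large subsets.
    return dependency(subset) - 0.01 * len(subset)

def great_deluge(iters=200, seed=0):
    rng = random.Random(seed)
    current = sorted(rng.sample(range(N_ATTR), 2))
    level = fitness(current)
    rain = (1.0 - level) / iters               # linear rise of the water level
    best = current
    for _ in range(iters):
        cand = set(current)
        cand.symmetric_difference_update({rng.randrange(N_ATTR)})  # flip one attribute
        cand = sorted(cand)
        if fitness(cand) >= level:             # accept only solutions above the level
            current = cand
            if fitness(cand) > fitness(best):
                best = cand
        level += rain
    return best, dependency(best)

print(great_deluge())
```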

Keywords: Rough Set Theory, Attribute Reduction, Fuzzy Logic, Memetic Algorithms, Record to Record Algorithm, Great Deluge Algorithm.

1968 Classification of Non Stationary Signals Using Ben Wavelet and Artificial Neural Networks

Authors: Mohammed Benbrahim, Khalid Benjelloun, Aomar Ibenbrahim, Adil Daoudi

Abstract:

The automatic classification of non-stationary signals is an important practical goal in several domains. An essential classification task is to allocate the incoming signal to the group associated with the kind of physical phenomenon producing it. In this paper, we present a modular system composed of three blocks: 1) representation, 2) dimensionality reduction and 3) classification. The originality of our work lies in the use of a new wavelet, called the "Ben wavelet", in the representation stage. For the dimensionality reduction, we propose a new algorithm based on random projection and principal component analysis.
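
A small sketch of the dimensionality-reduction block (random projection followed by principal component analysis) in plain NumPy; the data dimensions and the split between the two stages are illustrative assumptions, since the abstract does not fix them.

```python
import numpy as np

rng = np.random.default_rng(0)

def random_projection(X, k):
    """Project rows of X onto k random Gaussian directions (Johnson-Lindenstrauss style)."""
    d = X.shape[1]
    R = rng.normal(0.0, 1.0 / np.sqrt(k), size=(d, k))
    return X @ R

def pca(X, n_components):
    """PCA via SVD of the centred data; returns the reduced coordinates."""
    Xc = X - X.mean(axis=0)
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:n_components].T

# 200 'signals' of 1024 wavelet coefficients each (synthetic placeholder data).
X = rng.normal(size=(200, 1024))
X_rp = random_projection(X, 128)   # coarse reduction, cheap and data-independent
X_low = pca(X_rp, 10)              # fine reduction keeping the main variance directions
print(X.shape, "->", X_rp.shape, "->", X_low.shape)   # (200, 1024) -> (200, 128) -> (200, 10)
```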

Keywords: Seismic signals, Ben Wavelet, Dimensionality reduction, Artificial neural networks, Classification.

1967 Identity Verification Using k-NN Classifiers and Autistic Genetic Data

Authors: Fuad M. Alkoot

Abstract:

DNA data have been used in forensics for decades. Current research, however, looks at using DNA as a biometric identity verification modality, with the goal of improving the speed of identification. We aim to use gene data that was initially collected for autism detection to find whether, and how accurately, this data can be used for identification applications. Our main goal is to determine whether our data preprocessing technique yields data useful as a biometric identification tool. We experiment with the nearest neighbor classifier to identify subjects. Results show that the optimal classification rate is achieved when the test set is corrupted by normally distributed noise with zero mean and a standard deviation of 1, and that the classification rate remains close to optimal for noise standard deviations up to 3. This shows that the data can be used for identity verification with high accuracy using a simple classifier such as the k-nearest neighbor (k-NN).
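
A brief sketch of the verification experiment described above, assuming scikit-learn and a synthetic stand-in for the preprocessed gene data; the real autism dataset is not reproduced here, only the protocol of corrupting the test set with zero-mean Gaussian noise of increasing standard deviation.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(42)

# Synthetic placeholder for the preprocessed genetic feature vectors.
X, y = make_classification(n_samples=600, n_features=40, n_informative=20,
                           n_classes=6, n_clusters_per_class=1, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

knn = KNeighborsClassifier(n_neighbors=1).fit(X_tr, y_tr)

for sigma in (0.0, 1.0, 2.0, 3.0):
    noisy_test = X_te + rng.normal(0.0, sigma, X_te.shape)   # zero-mean Gaussian noise
    acc = knn.score(noisy_test, y_te)
    print(f"noise std {sigma:.1f}: classification rate {acc:.3f}")
```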

Keywords: Biometrics, identity verification, genetic data, k-nearest neighbor.

1966 Characteristics of the Storage Stability for Different Saccharomyces cerevisiae Strains

Authors: Gomaa N. Abdel-Rahman, Nadia R. A. Nassar, Yehia A. Heikal, Mahmoud A. M. Abou-Donia, Mohamed B. M. Ahmed, Mohamed Fadel

Abstract:

Storage stability is an important factor in baker's yeast quality. The effect of a fifteen-day storage period on the storage sugars and cell viability of baker's yeast produced from three S. cerevisiae strains (FC-620, FH-620, and FAT-12) was investigated, in comparison with baker's yeast produced from S. cerevisiae F-707 (the original strain of the baker's yeast factory). Before storage, trehalose and glycogen contents ranged from 10.19 to 14.79% and from 10.05 to 10.69% (d.w.), respectively. The trehalose and glycogen contents of all strains decreased with increasing storage period, with no significant differences between the reduction rates of trehalose. The reduction rates of glycogen, however, differed significantly between strains, with the FH-620 and FC-620 strains having the lowest rates at 18.12 and 20.70%, respectively. Total viable cells and gassing power of all strains also decreased with increasing storage period. The FH-620 and FC-620 strains had the lowest reduction rates, an indicator of storage resistance: their reduction rates in total viable cells were 22.05 and 24.70%, respectively, while their reduction rates of gassing power were 20.90 and 24.30%, in the same order. On the other hand, the FAT-12 strain was more sensitive to storage than the original strain, with reduction rates of 35.60 and 35.75% for total viable cells and gassing power, respectively.

Keywords: Baker’s yeast, trehalose, glycogen, gassing power.

1965 Multiscale Blind Image Restoration with a New Method

Authors: Alireza Mallahzadeh, Hamid Dehghani, Iman Elyasi

Abstract:

A new method, based on the normal shrink approach and a modified version of the Katssagelous and Lay method, is proposed for multiscale blind image restoration. The method deals with both noise and blur in images. It is shown that normal shrink gives the highest S/N (signal-to-noise ratio) in the image denoising step. The multiscale blind image restoration is divided into two sections: the first part of this paper proposes normal shrink for image denoising, and the second part proposes the modified version of Katssagelous and Lay for blur estimation; the two methods are then combined to achieve multiscale blind image restoration.

Keywords: Multiscale blind image restoration, image denoising, blur estimation.

1964 Low Dimensional Representation of Dorsal Hand Vein Features Using Principle Component Analysis (PCA)

Authors: M. Heenaye-Mamode Khan, R. K. Subramanian, N. A. Mamode Khan

Abstract:

The quest to provide more secure identification systems has led to a rise in the development of biometric systems. The dorsal hand vein pattern is an emerging biometric that has lately attracted the attention of many researchers. Different approaches have been used to extract and match vein patterns. In this work, Principal Component Analysis (PCA), a method that has been successfully applied to human faces and hand geometry, is applied to the dorsal hand vein pattern. PCA is used to obtain eigenveins, a low-dimensional representation of the vein pattern features. Low-cost CCD cameras were used to obtain the vein images, the vein pattern was extracted by applying morphology, and noise reduction filters were applied to enhance the vein patterns. The system has been successfully tested on a database of 200 images using a threshold value of 0.9. The results obtained are encouraging.
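
A compact sketch of the eigenvein idea, assuming NumPy and random arrays in place of the actual CCD vein images; matching by normalized correlation against a 0.9 threshold mirrors the threshold reported above, but the image size and gallery are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(1)

# Placeholder gallery: 50 pre-processed (morphology + noise filtering) vein images, 64x64.
gallery = rng.random((50, 64 * 64))

def eigenveins(images, n_components=20):
    """Return the mean image and the top principal directions ('eigenveins')."""
    mean = images.mean(axis=0)
    _, _, Vt = np.linalg.svd(images - mean, full_matrices=False)
    return mean, Vt[:n_components]

def project(image, mean, basis):
    return basis @ (image - mean)

def match(probe_feat, gallery_feats, threshold=0.9):
    """Accept the best gallery entry if its normalized correlation exceeds the threshold."""
    sims = [np.dot(probe_feat, g) / (np.linalg.norm(probe_feat) * np.linalg.norm(g))
            for g in gallery_feats]
    best = int(np.argmax(sims))
    return (best, sims[best]) if sims[best] >= threshold else (None, sims[best])

mean, basis = eigenveins(gallery)
feats = np.array([project(img, mean, basis) for img in gallery])
probe = gallery[7] + 0.05 * rng.standard_normal(gallery.shape[1])   # noisy copy of image 7
print(match(project(probe, mean, basis), feats))
```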

Keywords: Biometric, Dorsal vein pattern, PCA.

1963 Empirical Mode Decomposition Based Denoising by Customized Thresholding

Authors: Wahiba Mohguen, Raïs El’hadi Bekka

Abstract:

This paper presents a denoising method, called EMD-Custom, based on Empirical Mode Decomposition (EMD) and a modified customized thresholding function (Custom). EMD is applied to adaptively decompose a noisy signal into intrinsic mode functions (IMFs). All the noisy IMFs are then thresholded by applying the presented thresholding function to suppress noise and improve the signal-to-noise ratio (SNR). The method was tested on simulated data and a real ECG signal, and the results were compared to EMD-based signal denoising methods using soft and hard thresholding. The results show the superior performance of the proposed EMD-Custom denoising over the traditional approaches. The performances were evaluated in terms of SNR in dB and Mean Square Error (MSE).
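
A short sketch of EMD-based denoising, assuming the PyEMD (EMD-signal) package; plain soft thresholding of the noise-dominated (first few) IMFs stands in for the paper's customized thresholding of every IMF, and the test signal is synthetic.

```python
import numpy as np
from PyEMD import EMD   # pip install EMD-signal

def emd_denoise(noisy, n_noisy_imfs=3):
    """Decompose with EMD and soft-threshold the first (noise-dominated) IMFs."""
    imfs = EMD().emd(noisy)
    sigma = np.median(np.abs(imfs[0])) / 0.6745        # noise level from the first IMF
    thr = sigma * np.sqrt(2.0 * np.log(len(noisy)))    # universal threshold
    out = np.zeros_like(noisy)
    for k, imf in enumerate(imfs):
        if k < n_noisy_imfs:
            imf = np.sign(imf) * np.maximum(np.abs(imf) - thr, 0.0)   # soft threshold
        out += imf
    return out

def snr_db(ref, est):
    return 10 * np.log10(np.sum(ref**2) / np.sum((ref - est) ** 2))

t = np.linspace(0, 1, 2000)
clean = np.sin(2 * np.pi * 5 * t)
noisy = clean + 0.3 * np.random.randn(t.size)
den = emd_denoise(noisy)
print(f"SNR before: {snr_db(clean, noisy):.1f} dB   after: {snr_db(clean, den):.1f} dB   "
      f"MSE after: {np.mean((clean - den) ** 2):.3e}")
```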

Keywords: Customized thresholding, ECG signal, EMD, hard thresholding, soft thresholding.

1962 Evaluation of Low-Reducible Sinter in Blast Furnace Technology by Mathematical Model Developed at Centre ENET, VSB – Technical University of Ostrava

Authors: S. Jursová, P. Pustějovská, S. Brožová, J. Bilík

Abstract:

The paper deals with possibilities for interpreting iron ore reducibility tests. It presents a mathematical model developed at Centre ENET, VŠB – Technical University of Ostrava, Czech Republic, for evaluating metallurgical blast furnace feedstock such as iron ore, sinter or pellets. From the test data, the model predicts the material's behaviour in blast furnace technology and its effects on the production parameters of the shaft aggregate. The paper first sums up the general concepts and experience in mathematical modelling of iron ore reduction, and presents the basic equations of the calculation and the main parts of the developed model. The experimental part gives an example of the use of the mathematical model: it describes the material and the method of the iron ore reducibility test carried out, and the use of the data for predictive calculations. The effects of the material used on carbon consumption, the rate of direct reduction and the overall reduction process are interpreted graphically.

Keywords: Blast furnace technology, iron ore reduction, mathematical model, prediction of iron ore reduction.

1961 Genetic Algorithm and Padé-Moment Matching for Model Order Reduction

Authors: Shilpi Lavania, Deepak Nagaria

Abstract:

A mixed method for model order reduction is presented in this paper. The denominator polynomial is derived by matching both Markov parameters and time moments, whereas the numerator polynomial derivation and error minimization are done using a genetic algorithm. The efficiency of the proposed method is investigated in terms of the closeness of the response of the reduced order model to that of the higher order original model, as well as a comparison of the integral square error.
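
To make the matching targets concrete, the sketch below computes Markov parameters and power-series time moments of a transfer function from its state-space form using SciPy; the fourth-order example system is hypothetical, and the GA-based numerator search itself is not reproduced.

```python
import numpy as np
from scipy import signal

def markov_parameters(A, B, C, n=4):
    """Markov parameters m_i = C A^(i-1) B of the expansion G(s) = sum m_i s^-i."""
    return [(C @ np.linalg.matrix_power(A, i - 1) @ B).item() for i in range(1, n + 1)]

def time_moments(A, B, C, n=4):
    """Coefficients c_i of G(s) = sum c_i s^i about s = 0, i.e. c_i = -C A^-(i+1) B."""
    Ainv = np.linalg.inv(A)
    return [(-C @ np.linalg.matrix_power(Ainv, i + 1) @ B).item() for i in range(n)]

# Hypothetical 4th-order original model.
num = [1, 7, 24, 24]
den = [1, 10, 35, 50, 24]
A, B, C, D = signal.tf2ss(num, den)

print("Markov parameters:", markov_parameters(A, B, C))
print("time moments     :", time_moments(A, B, C))
# A reduced model matching the leading entries of both lists reproduces the
# original's initial transient (Markov parameters) and low-frequency behaviour (moments).
```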

Keywords: Model Order Reduction (MOR), control theory, Markov parameters, time moments, genetic algorithm, Single Input Single Output (SISO).

1960 DHT-LMS Algorithm for Sensorineural Loss Patients

Authors: Sunitha S. L., V. Udayashankara

Abstract:

Hearing impairment is the number one chronic disability affecting many people in the world. Background noise is particularly damaging to speech intelligibility for people with hearing loss, especially sensorineural loss patients. Several investigations of speech intelligibility have demonstrated that sensorineural loss patients need a 5-15 dB higher SNR than normal-hearing subjects. This paper describes the Discrete Hartley Transform power-normalized Least Mean Square algorithm (DHT-LMS), which improves the SNR and reduces the convergence time of the Least Mean Square (LMS) algorithm for sensorineural loss patients. The DHT transforms n real numbers to n real numbers and has the convenient property of being its own inverse; it can be used effectively for noise cancellation with less convergence time. The simulated results show superior characteristics, improving the SNR by at least 9 dB for an input SNR of 0 dB with a faster convergence rate (eigenvalue ratio 12) compared with the time domain method and DFT-LMS.
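
A condensed sketch of transform-domain adaptive noise cancellation in the spirit of DHT-LMS, using NumPy; the DHT of each tap vector is taken via the FFT (real part minus imaginary part), and the per-bin power normalization is a generic choice rather than the paper's exact DHT-LMS formulation. The "speech" and noise signals are synthetic placeholders.

```python
import numpy as np

def dht(x):
    """Discrete Hartley Transform via the FFT: H(x) = Re(FFT) - Im(FFT)."""
    X = np.fft.fft(x)
    return X.real - X.imag

def dht_lms_anc(primary, reference, n_taps=16, mu=0.5, beta=0.9, eps=1e-6):
    """Transform-domain (DHT) power-normalized LMS for adaptive noise cancellation."""
    w = np.zeros(n_taps)
    power = np.full(n_taps, eps)
    out = np.zeros_like(primary)
    buf = np.zeros(n_taps)
    for n in range(len(primary)):
        buf = np.roll(buf, 1)
        buf[0] = reference[n]
        u = dht(buf)                                  # decorrelate the tap vector
        power = beta * power + (1 - beta) * u**2      # running per-bin power estimate
        y = w @ u                                     # estimate of the noise in the primary
        e = primary[n] - y                            # error = cleaned (speech) estimate
        w += mu * e * u / (power + eps)               # power-normalized LMS update
        out[n] = e
    return out

rng = np.random.default_rng(0)
n = 4000
speech = np.sin(2 * np.pi * np.linspace(0, 40, n))           # placeholder "speech"
noise = rng.standard_normal(n)
noise_in_primary = np.convolve(noise, [0.6, 0.3, 0.1])[:n]   # noise reaches the ear filtered
primary = speech + noise_in_primary
cleaned = dht_lms_anc(primary, noise)
print("noisy SNR %.1f dB -> cleaned SNR %.1f dB" % (
    10 * np.log10(np.sum(speech**2) / np.sum(noise_in_primary**2)),
    10 * np.log10(np.sum(speech**2) / np.sum((speech - cleaned) ** 2))))
```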

Keywords: Hearing Impairment, DHT-LMS, Convergence rate, SNR improvement.

1959 Validity Domains of Beams Behavioural Models: Efficiency and Reduction with Artificial Neural Networks

Authors: Keny Ordaz-Hernandez, Xavier Fischer, Fouad Bennis

Abstract:

In a particular case of behavioural model reduction by ANNs, a shortening of the validity domain has been found. In mechanics, as in other domains, the notion of a validity domain allows the engineer to choose a valid model for a particular analysis or simulation. In a study of the mechanical behaviour of a cantilever beam (using linear and non-linear models), Multi-Layer Perceptron (MLP) backpropagation (BP) networks have been applied as a model reduction technique. The reduced model is constructed to be more efficient than the non-reduced model: within a less extended domain, the ANN reduced model correctly estimates the non-linear response at a lower computational cost. It has been found that the neural network model is not able to approximate the linear behaviour, while it approximates the non-linear behaviour very well. The details of the case are provided with an example of modelling the cantilever beam behaviour.

Keywords: artificial neural network, validity domain, cantilever beam, non-linear behaviour, model reduction.

1958 EPR Hiding in Medical Images for Telemedicine

Authors: K. A. Navas, S. Archana Thampy, M. Sasikumar

Abstract:

Medical image data hiding has strict constraints such as high imperceptibility, high capacity and high robustness, and achieving these three requirements simultaneously is highly cumbersome. Some work has been reported in the literature on data hiding, watermarking and steganography suitable for telemedicine applications, but none is reliable in all aspects. Electronic Patient Report (EPR) data hiding for telemedicine demands that it be blind and reversible. This paper proposes a novel approach to blind reversible data hiding based on the integer wavelet transform. Experimental results show that this scheme outperforms the prior art in terms of zero BER (bit error rate), higher PSNR (peak signal-to-noise ratio), and large EPR data embedding capacity, with a WPSNR (weighted peak signal-to-noise ratio) around 53 dB, compared with existing reversible data hiding schemes.

Keywords: Biomedical imaging, Data security, Data communication, Teleconferencing.

1957 Audio Watermarking Using Spectral Modifications

Authors: Jyotsna Singh, Parul Garg, Alok Nath De

Abstract:

In this paper, we present a non-blind technique for adding a watermark to the Fourier spectral components of an audio signal in such a way that the modified amplitude does not exceed the maximum amplitude spread (MAS). This MAS is due to the individual Discrete Fourier Transform (DFT) coefficients in that particular frame, and is derived from the energy spreading function given by Schroeder. Using this technique, one can store double the information within a given frame length, i.e. overriding the watermark on a host of equal length with the least perceptual distortion. The watermark floats uniformly on the DFT components of the original signal, which helps in detecting any intentional manipulation of the watermarked audio. The scheme is also found to be robust to various signal processing attacks, such as the presence of multiple watermarks, additive white Gaussian noise (AWGN) and MP3 compression.

Keywords: Discrete Fourier Transform, Spreading Function, Watermark, Pseudo Noise Sequence, Spectral Masking Effect

1956 Improved Approximation to the Derivative of a Digital Signal Using Wavelet Transforms for Crosstalk Analysis

Authors: S. P. Kozaitis, R. L. Kriner

Abstract:

The information revealed by derivatives can help to better characterize digital near-end crosstalk signatures, with the ultimate goal of identifying the specific aggressor signal. Unfortunately, derivatives tend to be very sensitive to even low levels of noise. In this work we approximated the derivatives of both quiet and noisy digital signals using a wavelet-based technique. The results are presented for Gaussian digital edges, IBIS model digital edges, and digital edges in oscilloscope data captured from an actual printed circuit board. Trade-offs between accuracy and noise immunity are presented. The results show that the wavelet technique can produce first derivative approximations that are accurate to within 5% or better, even under noisy conditions, and that it can be used to calculate the derivative of a digital signal edge when conventional methods fail.
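
A small illustration of stabilising a derivative estimate with wavelet denoising, using NumPy, SciPy and PyWavelets; the Gaussian (erf-profile) edge, noise level and 'sym8' wavelet are assumptions for demonstration and not the exact processing used in the paper.

```python
import numpy as np
import pywt
from scipy.special import erf

def wavelet_denoise(x, wavelet="sym8", level=5):
    coeffs = pywt.wavedec(x, wavelet, level=level)
    sigma = np.median(np.abs(coeffs[-1])) / 0.6745
    thr = sigma * np.sqrt(2 * np.log(len(x)))
    coeffs[1:] = [pywt.threshold(c, thr, mode="soft") for c in coeffs[1:]]
    return pywt.waverec(coeffs, wavelet)[: len(x)]

# Gaussian digital edge (erf profile) with measurement noise.
t = np.linspace(-1, 1, 2048)
edge = 0.5 * (1 + erf(t / 0.05))
noisy = edge + 0.01 * np.random.randn(t.size)

d_true = np.gradient(edge, t)
d_raw = np.gradient(noisy, t)                    # very noisy derivative
d_wav = np.gradient(wavelet_denoise(noisy), t)   # derivative after wavelet smoothing

for name, d in (("raw", d_raw), ("wavelet", d_wav)):
    err = np.max(np.abs(d - d_true)) / np.max(np.abs(d_true)) * 100
    print(f"{name:8s} peak derivative error: {err:5.1f} %")
```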

Keywords: digital signals, electronics, IBIS model, printed circuit board, wavelets

1955 Performance Evaluation of an ANC-based Hybrid Algorithm for Multi-target Wideband Active Sonar Echolocation System

Authors: Jason Chien-Hsun Tseng

Abstract:

This paper evaluates the performance of an adaptive noise cancelling (ANC) based target detection algorithm on a set of real test data supplied by the Defence Evaluation and Research Agency (DERA UK) for a multi-target wideband active sonar echolocation system. The proposed hybrid algorithm combines an adaptive ANC neuro-fuzzy scheme in the first instance, followed by an iterative optimum target motion estimation (TME) scheme. The neuro-fuzzy scheme is based on the adaptive noise cancelling concept with ANFIS (adaptive neuro-fuzzy inference system) as the core processor to provide an effectively fine-tuned signal. The resulting output is then sent as input to the optimum TME scheme, composed of two-gauge trimmed-mean (TM) levelization, discrete wavelet denoising (WDeN), and optimal continuous wavelet transform (CWT), for further denoising and target identification. Its aim is to recover the contact signals in an effective and efficient manner and then determine the Doppler motion (radial range, velocity and acceleration) at very low signal-to-noise ratio (SNR). Quantitative results show that the hybrid algorithm has excellent performance in predicting the targets' Doppler motion over a range of target strengths, with a maximum false detection rate of 1.5%.

Keywords: Wideband Active Sonar Echolocation, ANC Neuro-Fuzzy, Wavelet Denoise, CWT, Hybrid Algorithm.

1954 Retail Strategy to Reduce Waste Keeping High Profit Utilizing Taylor's Law in Point-of-Sales Data

Authors: Gen Sakoda, Hideki Takayasu, Misako Takayasu

Abstract:

Waste reduction is a fundamental problem for sustainability. Methods for waste reduction using point-of-sales (POS) data are proposed, utilizing the knowledge of a recent econophysics study on a statistical property of POS data. Concretely, a non-stationary time series analysis method based on the particle filter is developed which accounts for the abnormal fluctuation scaling known as Taylor's law. The method is extended to handle sales data that are incomplete because of stock-outs by introducing maximum likelihood estimation for censored data. A way of determining the optimal stock by pricing the cost of waste reduction is also proposed. This study focuses on examining the methods for large sales numbers, where Taylor's law is evident. Numerical analysis using aggregated POS data shows the effectiveness of the methods in reducing food waste while maintaining a high profit for large sales numbers. Moreover, pricing the cost of waste reduction reveals that a small profit loss realizes substantial waste reduction, especially when the proportionality constant of Taylor's law is small. Specifically, around a 1% profit loss realizes half the disposal at a constant of 0.12, which is the actual value for the processed food items used in this research. The methods provide practical and effective solutions for waste reduction while keeping a high profit, especially with large sales numbers.

Keywords: Food waste reduction, particle filter, point of sales, sustainable development goals, Taylor's Law, time series analysis.

1953 Salient Points Reduction for Content-Based Image Retrieval

Authors: Yao-Hong Tsai

Abstract:

Salient points are frequently used to represent local properties of an image in content-based image retrieval. In this paper, we present a reduction algorithm that extracts the locally most salient points such that they not only give a satisfying representation of an image but also make the image retrieval process efficient. The algorithm recursively reduces the continuous point set according to the corresponding saliency values, following a top-down approach. The resulting salient points are evaluated with an image retrieval system using the Hausdorff distance. The experiments show that our method is robust and that the extracted salient points provide better retrieval performance compared with other point detectors.
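
A tiny sketch of evaluating a saliency-based point reduction with the Hausdorff distance, assuming SciPy; the random "salient points" and the keep-the-top-k rule are illustrative stand-ins for the recursive top-down reduction described above.

```python
import numpy as np
from scipy.spatial.distance import directed_hausdorff

rng = np.random.default_rng(3)

# Hypothetical salient points: (x, y) locations, each with a saliency value.
points = rng.random((300, 2)) * 256
saliency = rng.random(300)

def reduce_by_saliency(pts, sal, keep=60):
    """Keep only the most salient points (simple top-k stand-in for the recursive reduction)."""
    order = np.argsort(sal)[::-1]
    return pts[order[:keep]]

def hausdorff(a, b):
    """Symmetric Hausdorff distance between two point sets."""
    return max(directed_hausdorff(a, b)[0], directed_hausdorff(b, a)[0])

for keep in (150, 60, 20):
    reduced = reduce_by_saliency(points, saliency, keep)
    print(f"keep {keep:3d} points -> Hausdorff distance to full set: "
          f"{hausdorff(points, reduced):6.2f}")
```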

Keywords: Barnard detector, Content-based image retrieval, Points reduction, Salient point.

1952 The Effects of Yield and Yield Components of Some Quality Increase Applications on Ismailoglu Grape Type in Turkey

Authors: Yaşar Önal, Aydın Akın

Abstract:

This study was conducted on the Ismailoglu grape type (Vitis vinifera L.), grown on its own roots on 15-year-old vines, during the 2013 vegetation period in Nevşehir province, Turkey. The research investigated the effects of the applications Control (C), 1/3 cluster tip reduction (1/3 CTR), shoot tip reduction (STR), 1/3 CTR + STR, TKI-HUMAS (TKI-HM) applied to soil (S), TKI-HM applied to foliage (F), TKI-HM (S + F), 1/3 CTR + TKI-HM (S), 1/3 CTR + TKI-HM (F), 1/3 CTR + TKI-HM (S + F), STR + TKI-HM (S), STR + TKI-HM (F), STR + TKI-HM (S + F), 1/3 CTR + STR + TKI-HM (S), 1/3 CTR + STR + TKI-HM (F), and 1/3 CTR + STR + TKI-HM (S + F) on the yield and yield components of the Ismailoglu grape type. The highest fresh grape yield (16.15 kg/vine) was obtained with TKI-HM (S); the highest cluster weight (652.39 g) with 1/3 CTR + STR; the highest 100-berry weight (419.07 g) with 1/3 CTR + STR + TKI-HM (F); the highest maturity index (44.06) with 1/3 CTR; the highest must yield (810.00 ml) with STR + TKI-HM (F); the highest intensity of L* color (42.04) with TKI-HM (S + F); the highest intensity of a* color (2.60) with 1/3 CTR + TKI-HM (S); and the highest intensity of b* color (7.16) with 1/3 CTR + TKI-HM (S). To increase the fresh grape yield of the Ismailoglu grape type, the TKI-HM (S) application can be recommended.

Keywords: 1/3 cluster tip reduction, shoot tip reduction, TKI-Humas application, yield and yield components.

1951 On Pseudo-Random and Orthogonal Binary Spreading Sequences

Authors: Abhijit Mitra

Abstract:

Different pseudo-random or pseudo-noise (PN) as well as orthogonal sequences that can be used as spreading codes for code division multiple access (CDMA) cellular networks, or for encrypting speech signals to reduce residual intelligibility, are investigated. We briefly review the theoretical background of direct sequence CDMA systems and describe the main characteristics of the maximal length, Gold, Barker, and Kasami sequences. We also discuss variable- and fixed-length orthogonal codes such as Walsh-Hadamard codes. The equivalence of PN and orthogonal codes is also derived. Finally, a new PN sequence is proposed and shown to have certain better properties than the existing codes.
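
A brief sketch illustrating the correlation properties discussed above: an m-sequence generated by a simple LFSR (using the primitive polynomial x^5 + x^2 + 1, an assumed choice) and Walsh-Hadamard rows from a Hadamard matrix, with their periodic autocorrelation and cross-correlation, using NumPy and SciPy.

```python
import numpy as np
from scipy.linalg import hadamard

def m_sequence(taps=(5, 2), state=0b00001):
    """Maximal-length sequence from a Fibonacci LFSR; returns +/-1 chips of period 2^n - 1."""
    n = max(taps)
    out = []
    for _ in range(2**n - 1):
        out.append(state & 1)
        fb = 0
        for t in taps:                       # XOR of the tapped stages
            fb ^= (state >> (t - 1)) & 1
        state = (state >> 1) | (fb << (n - 1))
    return 1 - 2 * np.array(out)             # map {0,1} -> {+1,-1}

def periodic_xcorr(a, b):
    """Periodic (circular) cross-correlation for all shifts."""
    return np.array([np.dot(a, np.roll(b, k)) for k in range(len(a))])

pn = m_sequence()                             # length 31
acf = periodic_xcorr(pn, pn)
print("m-sequence autocorrelation: peak =", acf[0], ", off-peak values =",
      sorted(set(acf[1:])))                   # ideally N at lag 0 and -1 elsewhere

H = hadamard(32)                              # fixed-length Walsh-Hadamard codes
w1, w2 = H[3], H[11]
print("Walsh-Hadamard cross-correlation at zero lag:", int(np.dot(w1, w2)))  # 0 (orthogonal)
```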

Keywords: Code division multiple access, pseudo-noise codes, maximal length, Gold, Barker, Kasami, Walsh-Hadamard, autocorrelation, crosscorrelation, figure of merit.

1950 Order Reduction of Linear Dynamic Systems using Stability Equation Method and GA

Authors: G. Parmar, R. Prasad, S. Mukherjee

Abstract:

The authors present an algorithm for the order reduction of linear dynamic systems that combines the advantages of the stability equation method and error minimization by genetic algorithm. The denominator of the reduced order model is obtained by the stability equation method, and the numerator terms of the lower order transfer function are determined by minimizing the integral square error between the transient responses of the original and reduced order models using a genetic algorithm. The reduction procedure is simple and computer oriented. It is shown that the algorithm has several advantages; for example, the reduced order models retain the steady-state value and stability of the original system. The proposed algorithm has also been extended to the order reduction of linear multivariable systems. Two numerical examples, including one multivariable system, are solved to illustrate the superiority of the algorithm over some existing methods.
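
A short sketch of the error measure used in the GA step: the integral square error between the step responses of an original model and a candidate reduced model, computed with SciPy; both transfer functions here are hypothetical examples, and the stability-equation and GA machinery is not reproduced.

```python
import numpy as np
from scipy import signal

def step_response(num, den, t):
    _, y = signal.step(signal.TransferFunction(num, den), T=t)
    return y

def integral_square_error(num_o, den_o, num_r, den_r, t_end=20.0, n=2000):
    """ISE between the step responses of the original and reduced models."""
    t = np.linspace(0.0, t_end, n)
    y_o = step_response(num_o, den_o, t)
    y_r = step_response(num_r, den_r, t)
    return np.sum((y_o - y_r) ** 2) * (t[1] - t[0])   # rectangle-rule approximation

# Hypothetical 4th-order original and 2nd-order candidate reduced model
# (the candidate's numerator is what the GA would tune).
num_o, den_o = [1, 7, 24, 24], [1, 10, 35, 50, 24]
num_r, den_r = [0.8, 1.0], [1, 3, 1]
print(f"ISE = {integral_square_error(num_o, den_o, num_r, den_r):.4e}")
```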

Keywords: Genetic algorithm, Integral square error, Order reduction, Stability equation method.

1949 Papain Immobilized Polyurethane Film as Antimicrobial Food Package

Authors: M. Cynthya, V. Prabhawathi, D. Mukesh

Abstract:

Food contamination occurs during post-process handling, leading to spoilage and the growth of pathogenic microorganisms in the food, thereby reducing its shelf life or spreading food-borne diseases. Several methods have been tried, one of which is the use of antimicrobial packaging. Here, papain, a protease enzyme, is covalently immobilized with the help of glutaraldehyde on polyurethane and used as a food wrap to protect food from microbial contamination. Covalent immobilization of papain was achieved at a pH of 7.4, a temperature of 4°C, a glutaraldehyde concentration of 0.5%, an incubation time of 24 h, and 50 mg of papain. The formation of -C=N- observed in the Fourier transform infrared (FTIR) spectrum confirmed the immobilization of the enzyme on the polymer. The immobilized enzyme retained higher activity than the native free enzyme. The modified polyurethane showed better reduction of Staphylococcus aureus biofilm than the bare polymer film: an eight-fold reduction in live colonies, a two-fold reduction in protein and a six-fold reduction in carbohydrates. Its efficacy was studied by wrapping it over S. aureus-contaminated cottage cheese (paneer) and cheese stored at 4°C for 7 days. The modified film reduced the bacterial contamination eight-fold compared with the bare film, and FTIR also indicated a reduction in lipids, sugars and proteins in the biofilm.

Keywords: Cheese, Papain, polyurethane, Staphylococcus aureus.

1948 System Reduction by Eigen Permutation Algorithm and Improved Pade Approximations

Authors: Jay Singh, Kalyan Chatterjee, C. B. Vishwakarma

Abstract:

A mixed method, combining an eigen permutation algorithm and improved Padé approximations, is proposed for reducing the order of large-scale dynamic systems. The most dominant eigenvalue of both the original and the reduced order system remains the same in this method. The proposed method guarantees stability of the reduced model if the original high-order system is stable, and is comparable in quality with other well-known existing order reduction methods. The superiority of the proposed method is shown through examples taken from the literature.

Keywords: Eigen permutation algorithm, Order reduction, Improved Padé approximations, Stability, Transfer function.

1947 A Technique for Improving the Performance of Median Smoothers at the Corners Characterized by Low Order Polynomials

Authors: E. Srinivasan, D. Ebenezer

Abstract:

Median filters with larger windows offer greater smoothing and are more robust than median filters with smaller windows. However, larger median smoothers (median filters with larger windows) fail to track low-order polynomial trends in the signals. As a result, constant regions are produced at the signal corners, leading to the loss of fine detail. In this paper, an algorithm is introduced that combines the ability of the 3-point median smoother to preserve low-order polynomial trends with the superior noise filtering characteristics of the larger median smoother. The proposed algorithm (called the combiner algorithm in this paper) is evaluated on a test image corrupted with different types of noise, and the results obtained are included.

Keywords: Image filtering, detail preservation, median filters, nonlinear filters, order statistics filtering, Rank order filtering.

1946 Limitation Imposed by Polarization-Dependent Loss on a Fiber Optic Communication System

Authors: Farhan Hussain, M. S. Islam

Abstract:

The effect of polarization-dependent loss (PDL) on a high-speed fiber optic communication link has been investigated analytically. PDL and the signal's incoming state of polarization (SOP) have a significant correlation, and their various combinations produce different effects on system behavior, which have been inspected. Pauli's spin operator and the PDL parameters are combined to observe the attenuation effect induced by PDL in a link containing multiple PDL elements. It is found that in the presence of PDL, the Q-factor and BER at the receiver fluctuate, causing the system to be unstable, and the results show that this is mainly due to the optical-signal-to-parallel-noise ratio (OSNRpar). Generally, the Q-factor and BER deteriorate as the average PDL in the link increases, except for depolarized light, for which the system parameters improve as PDL increases.

Keywords: Bit Error Rate (BER), Optical-signal-to-noise ratio (OSNR), Polarization-dependent loss (PDL), State of polarization (SOP).

1945 An Adaptive ARQ – HARQ Method with Two RS Codes

Authors: Michal Martinovič, Jaroslav Polec, Kvetoslava Kotuliaková

Abstract:

In this paper we propose a multistage adaptive ARQ/HARQ/HARQ scheme. The method combines a pure ARQ (Automatic Repeat reQuest) mode at low channel bit error rates with a hybrid ARQ method using two different Reed-Solomon codes under medium and high error rate conditions; the scheme therefore has three stages. The main goal is to increase the number of states in adaptive HARQ methods and to achieve maximum throughput at every channel bit error rate. We prove the proposal by calculation and then by simulations in a land mobile satellite channel environment. The optimization of the scheme's system parameters is described in order to maximize the throughput over the whole defined signal-to-noise ratio (SNR) range in the selected channel environment.

Keywords: Signal-to-noise ratio, throughput, forward error correction (FEC), pure and hybrid automatic repeat request (ARQ).

1944 Inverse Dynamic Active Ground Motion Acceleration Inputs Estimation of the Retaining Structure

Authors: Ming-Hui Lee, Iau-Teh Wang

Abstract:

An innovative fuzzy estimator is used to estimate the ground motion acceleration of a retaining structure in this study. The method has two main parts: a Kalman filter without the input term and a fuzzy weighting recursive least square estimator. The innovation vector produced by the Kalman filter is applied to the fuzzy weighting recursive least square estimator to estimate the acceleration input over time. The excellent performance of this estimator is demonstrated by comparing results obtained with different weighting functions and with distinct levels of the measurement noise covariance and the initial process noise covariance. The availability and precision of the proposed method are verified by comparing the actual value with the one obtained by numerical simulation.
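
A simplified sketch of a two-stage estimator of this kind, assuming a constant unknown acceleration acting on a double-integrator model with position measurements: a Kalman filter designed without the input term produces innovations, and a recursive least-squares stage with a fixed forgetting factor (standing in for the paper's fuzzy weighting) recovers the input from them. The model, noise levels and true acceleration are all hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)
dt, n_steps = 0.01, 2000
Phi = np.array([[1.0, dt], [0.0, 1.0]])       # state transition (position, velocity)
Gam = np.array([[0.5 * dt**2], [dt]])         # how the unknown acceleration enters
H = np.array([[1.0, 0.0]])                    # position measurement
Q = 1e-6 * np.eye(2)
R = np.array([[1e-4]])
a_true = 0.8                                  # unknown ground-motion acceleration (m/s^2)

# Simulate the true system and measurements.
x = np.zeros((2, 1))
zs = []
for _ in range(n_steps):
    x = Phi @ x + Gam * a_true
    zs.append((H @ x + np.sqrt(R) * rng.standard_normal((1, 1))).item())

# Kalman filter WITHOUT the input term + RLS on the innovations.
x_hat = np.zeros((2, 1)); P = np.eye(2)
M = np.zeros((2, 1))                          # sensitivity of the state estimate to the input
a_hat, Pb, lam = 0.0, 1e3, 0.995              # RLS state: estimate, covariance, forgetting
for z in zs:
    x_pred = Phi @ x_hat
    P = Phi @ P @ Phi.T + Q
    s = (H @ P @ H.T + R).item()              # innovation variance
    K = P @ H.T / s
    innov = z - (H @ x_pred).item()           # innovation carries the ignored input
    x_hat = x_pred + K * innov
    P = (np.eye(2) - K @ H) @ P
    # Sensitivities: innovation mean = B * a, state-estimate bias = M * a.
    B = (H @ (Phi @ M + Gam)).item()
    M = (np.eye(2) - K @ H) @ (Phi @ M + Gam)
    # Weighted RLS with forgetting factor (stand-in for the fuzzy weighting).
    Kb = Pb * B / (lam * s + B * Pb * B)
    a_hat = a_hat + Kb * (innov - B * a_hat)
    Pb = (1.0 - Kb * B) * Pb / lam

print(f"true acceleration {a_true:.3f}, estimated {a_hat:.3f}")
```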

Keywords: Earthquake, Fuzzy Estimator, Kalman Filter, Recursive Least Square Estimator.
