Search results for: Noise Filter
736 Reduction of Multiple User Interference for Optical CDMA Systems Using Successive Interference Cancellation Scheme
Authors: Tawfig Eltaif, Hesham A. Bakarman, N. Alsowaidi, M. R. Mokhtar, Malek Harbawi
Abstract:
Multiple User Interference (MUI) is considered the primary problem in Optical Code-Division Multiple Access (OCDMA); it results from the overlapping among users. In this article we aim to mitigate this problem by studying an interference cancellation scheme called the successive interference cancellation (SIC) scheme. This scheme is tested on two different detection schemes, spectral amplitude coding (SAC) and direct detection systems (DS), using partial modified prime (PMP) codes as the signature codes. It was found that the SIC scheme based on both SAC and DS methods has the potential to suppress the intensity noise, that is to say, it can mitigate MUI noise. Furthermore, the SIC/DS scheme showed a much lower bit error rate (BER) than the SIC/SAC scheme for different magnitudes of effective power. Hence, many more users can be supported by the SIC/DS receiver system.
Keywords: Multiple User Interference (MUI), Optical Code-Division Multiple Access (OCDMA), Partial Modified Prime Code (PMP), Spectral Amplitude Coding (SAC), Successive Interference Cancellation (SIC).
735 Combined Source and Channel Coding for Image Transmission Using Enhanced Turbo Codes in AWGN and Rayleigh Channel
Authors: N. S. Pradeep, M. Balasingh Moses, V. Aarthi
Abstract:
Any signal transmitted over a channel is corrupted by noise and interference. A host of channel coding techniques has been proposed to alleviate the effect of such noise and interference. Among these, Turbo codes are recommended because of their increased capacity at higher transmission rates and superior performance over convolutional codes. Multimedia elements, which are associated with ample amounts of data, are best protected by Turbo codes. The Turbo decoder employs the Maximum A-posteriori Probability (MAP) and Soft Output Viterbi Algorithm (SOVA) decoding algorithms. Conventional Turbo coded systems employ Equal Error Protection (EEP), in which the protection of all the data in an information message is uniform. Some applications involve Unequal Error Protection (UEP), in which the level of protection is higher for important information bits than for other bits. In this work, the traditional Log MAP decoding algorithm is enhanced by using optimized scaling factors for both decoders. The error correcting performance in the presence of UEP in the Additive White Gaussian Noise (AWGN) channel and Rayleigh fading is analyzed for the transmission of images with the Discrete Cosine Transform (DCT) as the source coding technique. This paper compares the performance of the Log MAP, Modified Log MAP (MlogMAP) and Enhanced Log MAP (ElogMAP) algorithms used for image transmission. The MlogMAP algorithm is found to be best for lower Eb/N0 values, but for higher Eb/N0 ElogMAP performs better with optimized scaling factors. The performance comparison of the AWGN and fading channels indicates the robustness of the proposed algorithm. According to the performance of the three message classes, class 3 is more strongly protected than the other two classes. From the performance analysis, it is observed that the ElogMAP algorithm with UEP is best for the transmission of an image compared to the Log MAP and MlogMAP decoding algorithms.
Keywords: AWGN, BER, DCT, Fading, MAP, UEP.
734 Image Contrast Enhancement based Sub-histogram Equalization Technique without Over-equalization Noise
Authors: Hyunsup Yoon, Youngjoon Han, Hernsoo Hahn
Abstract:
In order to enhance the contrast in regions where the pixels have similar intensities, this paper presents a new histogram equalization scheme. Conventional global equalization schemes over-equalize these regions, producing overly bright or dark pixels, while local equalization schemes produce unexpected discontinuities at block boundaries. The proposed algorithm segments the original histogram into sub-histograms with reference to brightness level and equalizes each sub-histogram within a limited extent of equalization, considering its mean and variance. The final image is determined as the weighted sum of the equalized images obtained from the sub-histogram equalizations. By limiting the maximum and minimum ranges of the equalization operations on individual sub-histograms, the over-equalization effect is eliminated. The resulting image also does not lose feature information in low-density histogram regions, since those regions are equalized separately. This paper also describes how to determine the segmentation points in the histogram. The proposed algorithm has been tested on more than 100 images with various contrasts, and the results are compared to conventional approaches to show its superiority.
Keywords: Contrast Enhancement, Histogram Equalization, Histogram Region Equalization, Equalization Noise
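The following minimal Python sketch illustrates the general idea of sub-histogram equalization: the histogram is split at a segmentation point (here simply the global mean) and each part is equalized only within its own brightness range, which limits over-equalization. It is an illustrative approximation, not the authors' exact weighted-sum algorithm, and the mean-based split is an assumption.
```python
import numpy as np

def sub_histogram_equalize(img, split=None):
    """Equalize the pixels below and above `split` separately, each within its own gray range."""
    img = np.asarray(img, dtype=np.uint8)
    if split is None:
        split = int(img.mean())                        # segmentation point (here: the global mean)
    out = np.empty_like(img)
    for lo, hi, mask in [(0, split, img <= split), (split + 1, 255, img > split)]:
        vals = img[mask]
        if vals.size == 0 or hi < lo:
            continue
        hist, _ = np.histogram(vals, bins=hi - lo + 1, range=(lo, hi + 1))
        cdf = np.cumsum(hist) / vals.size
        lut = (lo + cdf * (hi - lo)).astype(np.uint8)  # each part maps only onto [lo, hi]
        out[mask] = lut[vals - lo]
    return out

low_contrast = np.random.default_rng(0).normal(120, 10, (64, 64)).clip(0, 255).astype(np.uint8)
enhanced = sub_histogram_equalize(low_contrast)
print("input range:", low_contrast.min(), low_contrast.max(),
      " output range:", enhanced.min(), enhanced.max())
```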
733 Current Drainage Attack Correction via Adjusting the Attacking Saw Function Asymmetry
Authors: Yuri Boiko, Iluju Kiringa, Tet Yeap
Abstract:
The current drainage attack suggested previously is further studied in regular settings of a closed-loop controlled Brushless DC (BLDC) motor with a Kalman filter in the feedback loop. Modeling and simulation experiments are conducted in a MATLAB environment, implementing the closed-loop control model of BLDC motor operation in position-sensorless mode under Kalman filter drive. The current increase in the motor windings is caused by false data injection into the controller (a P-controller in our case), which substitutes the angular velocity estimates with distorted values. The distortion is applied by multiplying the estimates by a distortion coefficient whose values are taken from a distortion function synchronized in its periodicity with the rotor’s position change. A saw function with a triangular tooth shape is studied here for the purpose of carrying out the bias injection with current drainage consequences. The specific focus is on how the asymmetry of the tooth in the saw function affects the flow of current drainage. The purpose is two-fold: (i) to produce and collect the signature of an asymmetric saw in the attack for a further pattern recognition process, and (ii) to determine conditions for improving the stealthiness of such an attack via regulating the asymmetry of the saw function used. It is found that modification of the symmetry of the saw tooth affects the periodicity of the current drainage modulation. Specifically, the modulation frequency of the drained current for a fully asymmetric tooth shape coincides with the saw function modulation frequency itself. Increasing the symmetry parameter of the triangular tooth shape leads to an increase in the modulation frequency of the drained current. Moreover, this frequency reaches the switching frequency of the motor windings for fully symmetric triangular shapes, thus becoming undetectable and improving the stealthiness of the attack. Therefore, the collected signatures of the attack can serve for attack parameter identification via the pattern recognition route.
Keywords: Bias injection attack, Kalman filter, BLDC motor, control system, closed loop, P-controller, PID-controller, current drainage, saw-function, asymmetry.
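As an illustration of the attack waveform discussed above, the sketch below generates saw functions with adjustable tooth asymmetry using the `width` parameter of `scipy.signal.sawtooth` and applies one as a multiplicative distortion to an angular-velocity estimate. The attack frequency, modulation depth and nominal velocity are assumed values, not the paper's settings.
```python
import numpy as np
from scipy.signal import sawtooth

t = np.linspace(0.0, 0.1, 10_000)             # 100 ms window
f_attack = 200.0                              # Hz, synchronized with rotor position (assumed)

asymmetric_tooth = sawtooth(2 * np.pi * f_attack * t, width=1.0)   # fully asymmetric ramp
symmetric_tooth = sawtooth(2 * np.pi * f_attack * t, width=0.5)    # symmetric triangle

# The injected bias multiplies the angular-velocity estimate seen by the P-controller
omega_est = 100.0                             # rad/s nominal estimate (assumed)
bias_depth = 0.05                             # 5% modulation depth (assumed)
omega_injected = omega_est * (1.0 + bias_depth * asymmetric_tooth)
print("injected estimate range:", omega_injected.min(), "to", omega_injected.max())
```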
732 Improved Root-Mean-Square-Gain-Combining for SIMO Channels
Authors: Rania Minkara, Jean-Pierre Dubois
Abstract:
The major problem that wireless communication systems face is multipath fading caused by scattering of the transmitted signal. However, multipath propagation can be treated as multiple channels between the transmitter and receiver to improve the signal-to-scattering-noise ratio. In Single Input Multiple Output (SIMO) systems, diversity receivers extract multiple branches, or copies, of the same signal received over different channels and apply gain combining schemes such as Root Mean Square Gain Combining (RMSGC). RMSGC asymptotically yields performance identical to that of the theoretically optimal Maximum Ratio Combining (MRC) for values of mean Signal-to-Noise Ratio (SNR) above a certain threshold, without the need for SNR estimation. This paper introduces an improvement to RMSGC based on two techniques: we found that post-detection and de-noising of the received signals improve the performance of RMSGC and lower the threshold SNR.
Keywords: Bit error rate, de-noising, pre-detection, root-mean-square gain combining, single-input multiple-output channels.
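A rough sketch of the combining idea for a SIMO receiver is given below: each diversity branch is weighted by the root-mean-square value of its received samples before summation, so no explicit SNR estimation is needed. This weighting rule is our reading of RMSGC for illustration only; the Rayleigh gains, noise level and BPSK signalling are assumptions.
```python
import numpy as np

rng = np.random.default_rng(0)
L, N = 4, 10_000                              # diversity branches, BPSK symbols
symbols = rng.choice([-1.0, 1.0], size=N)
h = rng.rayleigh(scale=1.0, size=L)           # per-branch fading gains
received = h[:, None] * symbols + 0.5 * rng.standard_normal((L, N))

weights = np.sqrt(np.mean(received**2, axis=1))      # RMS of each branch, no SNR estimation
combined = weights @ received / weights.sum()        # weighted sum across branches
ber = np.mean(np.sign(combined) != symbols)
print(f"BER after RMS gain combining over {L} branches: {ber:.4f}")
```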
731 Enhancement of Stereo Video Pairs Using SDNs To Aid In 3D Reconstruction
Authors: Lewis E. Hibell, Honghai Liu, David J. Brown
Abstract:
This paper presents the results of enhancing the images from a left and right stereo pair in order to increase the resolution of a 3D representation of the scene generated from that same pair. A new neural network structure known as a Self Delaying Dynamic Network (SDN) has been used to perform the enhancement. The advantage of SDNs over existing techniques such as bicubic interpolation is their ability to cope with motion and noise effects. SDNs are used to generate two high-resolution images, one based on frames taken from the left view of the subject and one based on frames from the right. This new high-resolution stereo pair is then processed by a disparity map generator. The generated disparity map is compared to two other disparity maps generated from the same scene: the first is a map generated from an original high-resolution stereo pair, and the second is a map generated from a stereo pair that has been enhanced using bicubic interpolation. The maps generated from the SDN-enhanced pairs match the target maps more closely. The addition of extra noise to the input images is less problematic for the SDN system, which is still able to outperform bicubic interpolation.
Keywords: Genetic Evolution, Image Enhancement, Neural Networks, Stereo Vision
730 Low Cost Real Time Robust Identification of Impulsive Signals
Authors: R. Biondi, G. Dys, G. Ferone, T. Renard, M. Zysman
Abstract:
This paper describes an automated, implementable system for impulsive signal detection and recognition. The system uses a Digital Signal Processing device for the detection and identification process. The system analyses the signals in real time in order to produce a specific output if needed. Detection is achieved by normalizing the inputs and comparing the read signals to a dynamic threshold, thus avoiding detections caused by loud or fluctuating ambient noise. Identification is done through neural network algorithms. As a setup, our system can receive signals to “learn” certain patterns. Through “learning”, the system can recognize signals faster, giving it flexibility towards new patterns similar to those already known. Sound is captured through a simple jack input, which could be replaced by an enhanced recording device such as a wide-area recorder. Furthermore, a communication module can be added to the apparatus to send alerts to another interface if needed.
Keywords: Sound Detection, Impulsive Signal, Background Noise, Neural Network.
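A hedged sketch of the detection stage only: the input is normalized and compared against a dynamic threshold that tracks the ambient noise floor, so loud but slowly varying backgrounds do not trigger detections. The smoothing constant `alpha` and factor `k` are assumptions, not the authors' tuned values, and the neural-network identification stage is omitted.
```python
import numpy as np

def detect_impulses(x, alpha=0.01, k=5.0):
    """Flag samples whose normalized magnitude exceeds k times an adaptive noise-floor estimate."""
    x = np.abs(np.asarray(x, dtype=float))
    x /= x.max() + 1e-12                       # normalize the input
    floor = x[:100].mean()                     # initial background estimate
    hits = []
    for i, sample in enumerate(x):
        if sample > k * floor:
            hits.append(i)                     # impulsive event: do not let it raise the floor
        else:
            floor = (1 - alpha) * floor + alpha * sample   # track slowly varying background
    return np.array(hits)

rng = np.random.default_rng(1)
sig = 0.05 * rng.standard_normal(5000)
sig[1200] += 1.0                               # two synthetic impulses
sig[3400] += 0.8
print("impulses flagged at samples:", detect_impulses(sig))
```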
729 A Sparse Representation Speech Denoising Method Based on Adapted Stopping Residue Error
Authors: Qianhua He, Weili Zhou, Aiwu Chen
Abstract:
A sparse representation speech denoising method based on an adapted stopping residue error is presented in this paper. First, the cross-correlation between the clean speech spectrum and the noise spectrum is analyzed, and an estimation method is proposed. In the denoising method, an over-complete dictionary of the clean speech power spectrum is learned with the K-singular value decomposition (K-SVD) algorithm. In the sparse representation stage, the stopping residue error is adapted according to the estimated cross-correlation and the adjusted noise spectrum, and the orthogonal matching pursuit (OMP) approach is applied to reconstruct the clean speech spectrum from the noisy speech. Finally, the clean speech is re-synthesised via the inverse Fourier transform using the reconstructed speech spectrum and the noisy speech phase. The experimental results show that the proposed method outperforms conventional methods in terms of subjective and objective measures.
Keywords: Speech denoising, sparse representation, K-singular value decomposition, orthogonal matching pursuit.
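The sparse-reconstruction step can be sketched as below: a noisy spectrum is approximated over a dictionary with orthogonal matching pursuit, stopping once the residual energy falls below a tolerance playing the role of the adapted stopping residue error. The random dictionary and the noise-matched tolerance are assumptions; in the paper the dictionary is learned from clean speech with K-SVD.
```python
import numpy as np
from sklearn.linear_model import OrthogonalMatchingPursuit

rng = np.random.default_rng(0)
n_features, n_atoms = 64, 256
D = rng.standard_normal((n_features, n_atoms))
D /= np.linalg.norm(D, axis=0)                  # unit-norm dictionary atoms (random for illustration)

true_coef = np.zeros(n_atoms)                   # synthesize a "clean spectrum" from 5 atoms
true_coef[rng.choice(n_atoms, 5, replace=False)] = rng.standard_normal(5)
clean = D @ true_coef
noisy = clean + 0.1 * rng.standard_normal(n_features)

stopping_residue = n_features * 0.1**2          # tolerance matched to the noise energy (assumed)
omp = OrthogonalMatchingPursuit(tol=stopping_residue, fit_intercept=False)
omp.fit(D, noisy)
denoised = D @ omp.coef_

print("atoms used:", np.count_nonzero(omp.coef_),
      " residual energy:", round(float(np.sum((noisy - denoised) ** 2)), 3))
```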
728 Moving Area Filter to Detect Object in Video Sequence from Moving Platform
Authors: Sallama Athab, Hala Bahjat
Abstract:
Detecting objects in video sequences is a challenging task for identifying and tracking moving objects, and background removal is considered a basic step in moving-object detection. Two static cameras placed at the front and rear of a moving platform gather the information used to detect objects. The background changes with the speed and direction of the moving platform, so distinguishing moving objects becomes complicated. In this paper, we propose a framework that allows detection of moving objects with a variety of speeds and directions dynamically. The object detection technique is built on two levels: the first level applies background removal and edge detection to generate moving areas; the second level applies a Moving Area Filter (MAF) and then calculates a Correlation Score (CS) for each adjusted moving area. Moving areas with similar CS values are merged and marked as a moving object. The experiment was carried out on real scenes acquired by dual static cameras without overlap in the field of view. The results show good accuracy in detecting objects compared with optical flow and Mixture Module Gaussian (MMG), and an accuracy ratio is produced to measure the accuracy of moving-object detection.
Keywords: Background Removal, Correlation, Mixture Module Gaussian, Moving Platform, Object Detection.
727 Rapid Frequency Response Measurement of Power Conversion Products with Coherence-Based Confidence Analysis
Authors: Tomi Roinila, Aki Taskinen, Matti Vilkko
Abstract:
Switched-mode converters now play a significant role in modern society. Their operation is often crucial in various electrical applications affecting everyday life. Therefore, the quality of the converters needs to be reliably verified. Recent studies have shown that converters can be fully characterized by a set of frequency responses which can be efficiently used to validate their proper operation. Consequently, several methods have been proposed to measure the frequency responses quickly and accurately. Most often, correlation-based techniques have been applied. The presented measurement methods are highly sensitive to external errors and system nonlinearities. This fact has often been forgotten, and the necessary uncertainty analysis of the measured responses has been neglected. This paper presents a simple approach to analyze the noise and nonlinearities in the frequency-response measurements of switched-mode converters. Coherence analysis is applied to form a confidence interval characterizing the noise and nonlinearities involved in the measurements. The presented method is verified by practical measurements from a high-frequency switched-mode converter.
Keywords: Switched-mode converters, frequency analysis, coherence analysis.
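The following sketch shows the underlying mechanics with SciPy: a frequency response is estimated from broadband excitation using Welch/CSD averaging, and the magnitude-squared coherence is used as a per-frequency confidence indicator (values near 1 imply little noise or nonlinearity in that bin). The first-order plant and the 0.95 coherence limit are assumptions standing in for a real switched-mode converter measurement.
```python
import numpy as np
from scipy import signal

rng = np.random.default_rng(0)
fs, n = 10_000, 200_000
x = rng.standard_normal(n)                       # broadband excitation
b, a = signal.butter(1, 0.05)                    # toy "converter" dynamics
y = signal.lfilter(b, a, x) + 0.2 * rng.standard_normal(n)   # output with measurement noise

f, Pxy = signal.csd(x, y, fs=fs, nperseg=1024)
_, Pxx = signal.welch(x, fs=fs, nperseg=1024)
frf = Pxy / Pxx                                   # H1 frequency-response estimate
_, coh = signal.coherence(x, y, fs=fs, nperseg=1024)

trusted = coh > 0.95                              # assumed coherence-based confidence limit
print(f"{trusted.mean():.0%} of frequency bins pass the coherence confidence test")
```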
726 SVM-based Multiview Face Recognition by Generalization of Discriminant Analysis
Authors: Dakshina Ranjan Kisku, Hunny Mehrotra, Jamuna Kanta Sing, Phalguni Gupta
Abstract:
Identity verification of authentic persons from their multiview faces is a real-world problem in machine vision. Multiview faces are difficult to handle due to their non-linear representation in the feature space. This paper illustrates the usability of a generalization of LDA, in the form of the canonical covariate, for recognition of multiview faces. In the proposed work, a Gabor filter bank is used to extract facial features characterized by spatial frequency, spatial locality and orientation. The Gabor face representation captures a substantial amount of the variation among face instances that often occurs due to illumination, pose and facial expression changes. Convolution of the Gabor filter bank with face images of rotated profile views produces Gabor faces with high-dimensional feature vectors. The canonical covariate is then applied to the Gabor faces to reduce the high-dimensional feature spaces to low-dimensional subspaces. Finally, support vector machines are trained on the canonical subspaces, which contain the reduced set of features, and perform the recognition task. The proposed system is evaluated on the UMIST face database. The experimental results demonstrate the efficiency and robustness of the proposed system with high recognition rates.
Keywords: Biometrics, multiview face recognition, Gabor wavelets, LDA, SVM.
725 Multi-Objective Optimization of Electric Discharge Machining for Inconel 718
Authors: Pushpendra S. Bharti, S. Maheshwari
Abstract:
Electric discharge machining (EDM) is one of the most widely used non-conventional manufacturing processes for shaping difficult-to-cut materials. The process yield of EDM, in terms of material removal rate, surface roughness and tool wear rate, may be improved considerably by selecting the optimal combination(s) of process parameters. This paper employs the multi-response signal-to-noise (MRSN) ratio technique to find the optimal combination(s) of process parameters during EDM of Inconel 718. Three cases, viz. high cutting efficiency, high surface finish and normal machining, are considered, and the optimal combinations of input parameters are obtained for each case. Analysis of variance (ANOVA) is employed to find the dominant parameter(s) in all three cases. Experimental verification of the obtained results has also been carried out. The MRSN ratio technique is found to be a simple and effective multi-objective optimization technique.
Keywords: EDM, material removal rate, multi-response signal-to-noise ratio, optimization, surface roughness.
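A minimal sketch of the multi-response signal-to-noise computation is given below: Taguchi larger-the-better S/N for material removal rate, smaller-the-better for surface roughness and tool wear rate, combined into a single weighted MRSN value. The measurement values and the equal weights are assumptions for illustration.
```python
import numpy as np

def sn_larger_the_better(y):
    y = np.asarray(y, dtype=float)
    return -10.0 * np.log10(np.mean(1.0 / y**2))

def sn_smaller_the_better(y):
    y = np.asarray(y, dtype=float)
    return -10.0 * np.log10(np.mean(y**2))

# Replicated measurements for one parameter combination (illustrative numbers)
mrr = [18.2, 17.9, 18.5]        # material removal rate, mm^3/min
ra = [4.1, 4.3, 4.0]            # surface roughness, um
twr = [0.012, 0.011, 0.013]     # tool wear rate

sn = np.array([sn_larger_the_better(mrr),
               sn_smaller_the_better(ra),
               sn_smaller_the_better(twr)])
weights = np.array([1 / 3, 1 / 3, 1 / 3])       # assumed equal weighting of the three responses
mrsn = float(weights @ sn)
print(f"MRSN ratio for this parameter combination: {mrsn:.2f} dB")
```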
724 Quality Estimation of Video Transmitted over an Additive WGN Channel based on Digital Watermarking and Wavelet Transform
Authors: Mohamed S. El-Mahallawy, Attalah Hashad, Hazem Hassan Ali, Heba Sami Zaky
Abstract:
This paper presents an evaluation of a wavelet-based digital watermarking technique used for estimating the quality of video sequences transmitted over an Additive White Gaussian Noise (AWGN) channel in terms of a classical objective metric, the Peak Signal-to-Noise Ratio (PSNR), without the need for the original video. In this method, a watermark is embedded into the Discrete Wavelet Transform (DWT) domain of the original video frames using a quantization method. The degradation of the extracted watermark can be used to estimate the video quality in terms of PSNR with good accuracy. We calculated the PSNR of video frames contaminated with AWGN and compared the values with those estimated using the watermarking-DWT based approach. The calculated and estimated quality measures of the video frames are found to be highly correlated, suggesting that this method can provide a good quality measure for video frames transmitted over an AWGN channel without the need for the original video.
Keywords: AWGN, DWT, PSNR, watermarking, video quality.
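The two ingredients can be sketched as follows with PyWavelets: a quantization-based (QIM-style) embedding of watermark bits into the approximation band of a frame's DWT, and the PSNR metric used to quantify degradation. The Haar wavelet, the step size and the QIM rule are assumptions, not necessarily the authors' exact quantization method.
```python
import numpy as np
import pywt

def psnr(ref, test, peak=255.0):
    mse = np.mean((np.asarray(ref, float) - np.asarray(test, float)) ** 2)
    return 10 * np.log10(peak**2 / mse) if mse > 0 else float("inf")

def embed_bits(frame, bits, delta=16.0, wavelet="haar"):
    """Push selected approximation-band DWT coefficients onto a bit-labelled lattice (QIM)."""
    cA, details = pywt.dwt2(np.asarray(frame, float), wavelet)
    flat = cA.reshape(-1)                                 # view into the approximation band
    for i, b in enumerate(bits):
        flat[i] = 2 * delta * np.round(flat[i] / (2 * delta)) + b * delta
    return pywt.idwt2((cA, details), wavelet)

frame = np.random.default_rng(0).integers(0, 256, (64, 64))
bits = [1, 0, 1, 1, 0]
marked = embed_bits(frame, bits)

# Extract the bits back from the marked frame and report the visual cost
cA_marked, _ = pywt.dwt2(marked, "haar")
extracted = [int(np.round(c / 16.0)) % 2 for c in cA_marked.reshape(-1)[: len(bits)]]
print("recovered bits:", extracted,
      " PSNR:", round(psnr(frame, np.clip(marked, 0, 255)), 1), "dB")
```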
723 A Nano-Scaled SRAM Guard Band Design with Gaussian Mixtures Model of Complex Long Tail RTN Distributions
Authors: Worawit Somha, Hiroyuki Yamauchi
Abstract:
This paper proposes, for the first time, how the challenges facing guard-band designs, including the margin assist-circuit scheme for the screening test, should be addressed in the coming process generations. The increased screening-error impacts are discussed based on the proposed statistical analysis models. It is shown that the yield loss caused by misjudgment in the screening test becomes five orders of magnitude larger than that of the conventional case when the amplitude of the variations caused by random telegraph noise (RTN) approaches that of random dopant fluctuation. Three fitting methods that approximate the complex RTN-induced Gamma mixture distributions with a simple Gaussian mixture model (GMM) are proposed and compared. It is verified that the proposed methods can reduce the error of the fail-bit predictions by four orders of magnitude.
Keywords: Mixtures of Gaussians, random telegraph noise, EM algorithm, long-tail distribution, fail-bit analysis, static random access memory, guard band design.
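A small sketch of the fitting idea: samples from a long-tailed Gamma mixture (standing in for the RTN-induced distribution) are approximated by a Gaussian mixture model fitted with EM, and the tail probability beyond a guard-band limit is compared between the data and the fitted GMM. The Gamma parameters, the three-component GMM and the limit are assumptions.
```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
samples = np.concatenate([rng.gamma(shape=1.2, scale=4.0, size=8000),    # long-tailed synthetic data
                          rng.gamma(shape=3.0, scale=9.0, size=2000)])

gmm = GaussianMixture(n_components=3, covariance_type="full", random_state=0)
gmm.fit(samples.reshape(-1, 1))

limit = 60.0                                     # assumed guard-band limit
empirical_tail = np.mean(samples > limit)
grid = np.linspace(limit, samples.max() * 1.5, 20_000).reshape(-1, 1)
gmm_tail = np.trapz(np.exp(gmm.score_samples(grid)), grid.ravel())
print(f"tail beyond {limit}: empirical {empirical_tail:.2e}, fitted GMM {gmm_tail:.2e}")
```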
722 Environmental Capacity and Sustainability of European Regional Airports: A Case Study
Authors: Nicola Gualandi, Luca Mantecchini, Davide Serrau
Abstract:
Airport capacity has traditionally been perceived as the number of aircraft operations during a specified time corresponding to a tolerable level of average delay, and it mostly depends on the airside characteristics, the fleet mix variability and the ATM. The adoption of Directive 2002/30/EC in the EU countries, however, drives the stakeholders to conceive airport capacity in a different way. Airport capacity in this sense is fundamentally driven by environmental criteria, and since acoustic externalities represent the most important factors, they are the ones that could pose a serious threat to the growth of airports, and to the aviation market itself, in the short-to-medium term. The importance of regional airports in the deregulated market has grown fast during the last decade, since they represent spokes for network carriers and a preferred destination for low-fare carriers. Not only have regional airports witnessed fast and unexpected growth in traffic, but also a rapid growth in complaints about the nuisance from people living near those airports. In this paper, the results of a study conducted in cooperation with the airport of Bologna G. Marconi are presented in order to investigate airport acoustic capacity as a de facto constraint on airport growth.
Keywords: Airport acoustical capacity, airport noise, air traffic noise, sustainability of regional airports.
721 Blind Channel Estimation for Frequency Hopping System Using Subspace Based Method
Authors: M. M. Qasaymeh, M. A. Khodeir
Abstract:
Subspace channel estimation methods have been studied widely, where the subspace of the covariance matrix is decomposed to separate the signal subspace from the noise subspace. The decomposition is normally done using either the eigenvalue decomposition (EVD) or the singular value decomposition (SVD) of the auto-correlation matrix (ACM). However, the subspace decomposition process is computationally expensive. This paper considers the estimation of the multipath slow frequency hopping (FH) channel using a noise-subspace based method. In particular, an efficient method is proposed to estimate the multipath time delays by applying the multiple signal classification (MUSIC) algorithm based on the null space extracted by the rank-revealing LU (RRLU) factorization. As a result, the RRLU provides precise information about the numerical null space and the rank, which are important tools in linear algebra. The simulation results demonstrate the effectiveness of the proposed method, which decreases the computational complexity approximately by half compared with RRQR-based methods while keeping the same performance.
Keywords: Time Delay Estimation, RRLU, RRQR, MUSIC, LS-ESPRIT, Frequency Hopping.
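The delay-estimation step can be sketched as below: frequency-domain channel snapshots are formed, a sample covariance is built, and the MUSIC pseudospectrum is scanned over a delay grid. For clarity the noise subspace here comes from an ordinary eigendecomposition, whereas the paper extracts it with the rank-revealing LU factorization; the subcarrier spacing, delays and SNR are assumed values.
```python
import numpy as np
from scipy.signal import find_peaks

rng = np.random.default_rng(0)
K, P, S = 64, 3, 200                          # subcarriers, paths, snapshots
f = np.arange(K) * 15e3                       # subcarrier frequencies in Hz (assumed spacing)
true_delays = np.array([0.4e-6, 1.6e-6, 2.9e-6])

A = np.exp(-2j * np.pi * np.outer(f, true_delays))                       # steering matrix (K x P)
gains = (rng.standard_normal((P, S)) + 1j * rng.standard_normal((P, S))) / np.sqrt(2)
noise = 0.05 * (rng.standard_normal((K, S)) + 1j * rng.standard_normal((K, S)))
X = A @ gains + noise                                                    # channel snapshots

R = X @ X.conj().T / S                        # sample covariance matrix
w, V = np.linalg.eigh(R)                      # eigenvalues in ascending order
En = V[:, : K - P]                            # noise subspace from the K-P smallest eigenvalues

tau = np.linspace(0, 4e-6, 2000)
A_scan = np.exp(-2j * np.pi * np.outer(f, tau))
spectrum = 1.0 / np.sum(np.abs(En.conj().T @ A_scan) ** 2, axis=0)       # MUSIC pseudospectrum
peaks, _ = find_peaks(spectrum)
best = peaks[np.argsort(spectrum[peaks])[-P:]]
print("estimated delays (us):", np.sort(tau[best]) * 1e6)
```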
720 Automatic Sleep Stage Scoring with Wavelet Packets Based on Single EEG Recording
Authors: Luay A. Fraiwan, Natheer Y. Khaswaneh, Khaldon Y. Lweesy
Abstract:
Sleep stage scoring is the process of classifying the stage of sleep in which the subject is. Sleep is classified into two states based on the constellation of physiological parameters: non-rapid eye movement (NREM) and rapid eye movement (REM). NREM sleep is further classified into four stages (1-4). These states and the state of wakefulness are distinguished from each other based on brain activity. In this work, a classification method for automated sleep stage scoring based on a single EEG recording using wavelet packet decomposition was implemented. Thirty-two polysomnographic recordings from the MIT-BIH database were used for training and validation of the proposed method. A single EEG channel was extracted and smoothed using a Savitzky-Golay filter. Wavelet packet decomposition up to the fourth level, based on a 20th-order Daubechies filter, was used to extract features from the EEG signal, forming a feature vector of 54 features. The vector was reduced to a size of 25 using the gain-ratio method and fed into a regression-tree classifier. The regression trees were trained using 67% of the available records, selected by cross-validation, and the remaining records were used for testing the classifier. The overall correct rate of the proposed method was found to be around 75%, which is acceptable compared to the techniques in the literature.
Keywords: Feature selection, regression trees, sleep stage scoring, wavelet packets.
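A simplified sketch of the feature-extraction chain: one synthetic EEG epoch is smoothed with a Savitzky-Golay filter, decomposed with a level-4 wavelet packet transform, and per-node energies are used as features. The filter length/order and the db10 wavelet are assumptions (the abstract's "20th order Daubechies filter" could also be read as db20), and the gain-ratio selection and regression-tree classifier are omitted.
```python
import numpy as np
import pywt
from scipy.signal import savgol_filter

rng = np.random.default_rng(0)
eeg = rng.standard_normal(3000)                 # one 30 s epoch at 100 Hz (synthetic)

smoothed = savgol_filter(eeg, window_length=11, polyorder=3)

wp = pywt.WaveletPacket(data=smoothed, wavelet="db10", maxlevel=4)
nodes = wp.get_level(4, order="freq")           # 16 terminal nodes
energies = np.array([np.sum(np.square(n.data)) for n in nodes])
features = energies / energies.sum()            # relative band energies as features

print("feature vector length:", features.size)
```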
719 A Local Invariant Generalized Hough Transform Method for Integrated Circuit Visual Positioning
Authors: Fei Long Wei, Hua Yang, Hai Tao Zhang, Zhou Ping Yin
Abstract:
In this study, a local invariant generalized Hough transform (LI-GHT) method is proposed for integrated circuit (IC) visual positioning. The original generalized Hough transform (GHT) is robust to external noise; however, it is not suitable for visual positioning of IC chips due to the four-dimensional (4D) parameter space, which leads to substantial storage requirements and high computational complexity. The proposed LI-GHT method can reduce the dimensionality of the parameter space to 2D thanks to the rotational invariance of local invariant geometric features, and it can estimate the position and rotation angle of IC chips accurately in real time under the influence of noise and blur. The experimental results show that the proposed LI-GHT can estimate the position and rotation angle of IC chips with high accuracy and at high speed. The proposed LI-GHT algorithm was implemented in the IC visual positioning system of radio frequency identification (RFID) packaging equipment.
Keywords: Integrated circuit visual positioning, generalized Hough transform, local invariant generalized Hough transform, IC packaging equipment.
718 A Copyright Protection Scheme for Color Images using Secret Sharing and Wavelet Transform
Authors: Shang-Lin Hsieh, Lung-Yao Hsu, I-Ju Tsai
Abstract:
This paper proposes a copyright protection scheme for color images using secret sharing and wavelet transform. The scheme contains two phases: the share image generation phase and the watermark retrieval phase. In the generation phase, the proposed scheme first converts the image into the YCbCr color space and creates a special sampling plane from the color space. Next, the scheme extracts the features from the sampling plane using the discrete wavelet transform. Then, the scheme employs the features and the watermark to generate a principal share image. In the retrieval phase, an expanded watermark is first reconstructed using the features of the suspect image and the principal share image. Next, the scheme reduces the additional noise to obtain the recovered watermark, which is then verified against the original watermark to examine the copyright. The experimental results show that the proposed scheme can resist several attacks such as JPEG compression, blurring, sharpening, noise addition, and cropping. The accuracy rates are all higher than 97%.
Keywords: Color image, copyright protection, discrete wavelet transform, secret sharing, watermarking.
717 Optimal Image Compression Based on Sign and Magnitude Coding of Wavelet Coefficients
Authors: Mbainaibeye Jérôme, Noureddine Ellouze
Abstract:
The wavelet transform is a very powerful tool for image compression. One of its advantages is the provision of both spatial and frequency localization of image energy. However, wavelet transform coefficients are defined by both a magnitude and a sign. While algorithms exist for efficiently coding the magnitude of the transform coefficients, they are not efficient for coding their sign. It is generally assumed that there is no compression gain to be obtained from coding the sign. Only recently have some authors begun to investigate the sign of wavelet coefficients in image coding. Some authors have assumed that the sign information bit of wavelet coefficients may be encoded with an estimated probability of 0.5; the same assumption concerns the refinement information bit. In this paper, we propose a new method for Separate Sign Coding (SSC) of wavelet image coefficients. The sign and the magnitude of wavelet image coefficients are examined to obtain their online probabilities. We use scalar quantization, in which the information on whether the wavelet coefficient belongs to the lower or the upper sub-interval of the uncertainty interval is also examined. We show that the sign information and the refinement information may be encoded with a probability of approximately 0.5 only after about five bit planes. Two maps are entropy encoded separately: the sign map and the magnitude map. The refinement information on whether the wavelet coefficient belongs to the lower or the upper sub-interval of the uncertainty interval is also entropy encoded. An algorithm is developed and simulations are performed on three standard grey-scale images: Lena, Barbara and Cameraman. Five decomposition scales are computed using the biorthogonal 9/7 wavelet filter bank. The obtained results are compared to the JPEG2000 standard in terms of peak signal-to-noise ratio (PSNR) for the three images and in terms of subjective (visual) quality. It is shown that the proposed method outperforms JPEG2000. The proposed method is also compared to other codecs in the literature and is shown to be very successful in terms of PSNR.
Keywords: Image compression, wavelet transform, sign coding, magnitude coding.
716 Extracting Single Trial Visual Evoked Potentials using Selective Eigen-Rate Principal Components
Authors: Samraj Andrews, Ramaswamy Palaniappan, Nidal Kamel
Abstract:
In single-trial analysis, when using Principal Component Analysis (PCA) to extract Visual Evoked Potential (VEP) signals, the selection of principal components (PCs) is an important issue. We propose a new method here that selects only the appropriate PCs, denoted selective eigen-rate (SER). In the method, the VEP is reconstructed based on the rate of the eigenvalues of the PCs. When this technique is applied to emulated VEP signals with added background electroencephalogram (EEG), with a focus on extracting the evoked P3 parameter, it is found to be feasible. The improvement in signal-to-noise ratio (SNR) is superior to two other existing methods of PC selection: Kaiser (KSR) and Residual Power (RP). Although another PC selection method, Spectral Power Ratio (SPR), gives a comparable SNR with high noise factors (i.e. EEG), SER gives more impressive results in such cases. Next, we applied the SER method to real VEP signals to analyse the P3 responses for matched and non-matched stimuli. The P3 parameters extracted through the proposed SER method showed a higher P3 response for the matched stimulus, which conforms to existing neuroscience knowledge. Single-trial PCA using the KSR and RP methods failed to indicate any difference between the stimuli.
Keywords: Electroencephalogram, P3, single-trial VEP.
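A hedged sketch of eigen-rate-based component selection: principal components are kept only if their eigenvalue share of the total variance exceeds a rate threshold, and the trials are reconstructed from that subset. The 5% threshold and the synthetic "VEP plus background EEG" data are assumptions, not the authors' settings.
```python
import numpy as np

rng = np.random.default_rng(0)
trials, samples = 40, 256
t = np.linspace(0, 1, samples)
p3 = np.exp(-((t - 0.3) / 0.05) ** 2)                       # toy evoked waveform
amps = 1.0 + 0.5 * rng.standard_normal(trials)              # trial-to-trial amplitude
X = np.outer(amps, p3) + 0.3 * rng.standard_normal((trials, samples))   # VEP + background "EEG"

Xc = X - X.mean(axis=0)
eigvals, eigvecs = np.linalg.eigh(Xc.T @ Xc / (trials - 1))  # ascending eigenvalues
rates = eigvals / eigvals.sum()                              # eigen-rate of each component

keep = rates > 0.05                                          # eigen-rate threshold (assumed 5%)
P = eigvecs[:, keep]
X_rec = Xc @ P @ P.T + X.mean(axis=0)                        # single trials rebuilt from selected PCs
print("components kept:", int(keep.sum()))
```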
715 SVM-Based Detection of SAR Images in Partially Developed Speckle Noise
Authors: J. P. Dubois, O. M. Abdul-Latif
Abstract:
The Support Vector Machine (SVM) is a statistical learning tool initially developed by Vapnik in 1979 and later extended into the more general concept of structural risk minimization (SRM). SVMs play an increasing role in detection problems in various engineering fields, notably in statistical signal processing, pattern recognition, image analysis, and communication systems. In this paper, an SVM was applied to the detection of SAR (synthetic aperture radar) images in the presence of partially developed speckle noise. The simulation was done for single-look and multi-look speckle models to give a complete overview of, and insight into, the proposed SVM-based detector. The structure of the SVM was derived and applied to real SAR images, and its performance in terms of the mean square error (MSE) metric was calculated. We show that the SVM-detected SAR images have a very low MSE and are of good quality, and the quality of the processed speckled images improves for the multi-look model. Furthermore, the contrast of the SVM-detected images was higher than that of the original non-noisy images, indicating that the SVM approach increased the distance between the pixel reflectivity levels (the detection hypotheses) in the original images.
Keywords: Least-squares support vector machine, synthetic aperture radar, partially developed speckle, multi-look model.
714 Evaluation of Features Extraction Algorithms for a Real-Time Isolated Word Recognition System
Authors: Tomyslav Sledevič, Artūras Serackis, Gintautas Tamulevičius, Dalius Navakauskas
Abstract:
This paper presents a comparative evaluation of feature extraction algorithms for a real-time isolated word recognition system based on an FPGA. The Mel-frequency cepstral, linear frequency cepstral, linear predictive and linear predictive cepstral coefficients were implemented in a hardware/software design. The proposed system was investigated in speaker-dependent mode for 100 different Lithuanian words. The robustness of the feature extraction algorithms was tested by recognizing speech records at different signal-to-noise ratios. The experiments on clean records show the highest accuracy for the Mel-frequency cepstral and linear frequency cepstral coefficients. For records with a 15 dB signal-to-noise ratio, the linear predictive cepstral coefficients give the best result. The hardware and software parts of the system are clocked at 50 MHz and 100 MHz, respectively. For the classification purpose, a pipelined dynamic time warping core was implemented. The proposed word recognition system satisfies the real-time requirements and is suitable for applications in embedded systems.
Keywords: Isolated word recognition, features extraction, MFCC, LFCC, LPCC, LPC, FPGA, DTW.
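The classification back end can be sketched with a plain dynamic time warping distance between feature sequences (frames by coefficients); the pipelined FPGA core computes the same recurrence in hardware. The toy sequences below stand in for MFCC/LFCC/LPCC frames.
```python
import numpy as np

def dtw_distance(a, b):
    """Dynamic time warping distance with Euclidean local cost between feature frames."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = np.linalg.norm(a[i - 1] - b[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

rng = np.random.default_rng(0)
template = rng.standard_normal((40, 12))                       # 40 frames of 12 cepstral coefficients
query = template[::2] + 0.1 * rng.standard_normal((20, 12))    # time-compressed, noisy utterance
other = rng.standard_normal((35, 12))                          # a different word

print("distance to same word :", round(dtw_distance(query, template), 1))
print("distance to other word:", round(dtw_distance(query, other), 1))
```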
713 Optimization Approach to Estimate Hammerstein–Wiener Nonlinear Blocks in Presence of Noise and Disturbance
Authors: Leili Esmaeilani, Jafar Ghaisari, Mohsen Ahmadian
Abstract:
The Hammerstein–Wiener model is a block-oriented model in which a linear dynamic system is surrounded by two static nonlinearities at its input and output; it can be used to model various processes. This paper presents an optimization approach for analysing the identification problem of Hammerstein–Wiener systems. The method relies on reformulating the identification problem, solving it as a constrained quadratic problem, and analysing its solutions. In the formulation of the problem, the effects of adding noise to both the input and output signals of the nonlinear blocks, and of adding a disturbance to the linear block, on the resulting equations are discussed. Additionally, a possible parametric form of the matrix operations to reduce the equation size is presented. To analyse the possible solutions of the resulting system of equations, a method is presented that reduces the difference between the number of equations and the number of unknown variables by formulating and importing existing knowledge about the nonlinear functions. The obtained equations are applied to an example H–W system to validate the results and illustrate the proposed method.
Keywords: Identification, Hammerstein-Wiener, optimization, quantization.
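For reference, a tiny simulation of a Hammerstein-Wiener structure of the kind such identification methods target is sketched below: a static input nonlinearity, a linear dynamic block, and a static output nonlinearity, with noise added at the block interfaces and a disturbance on the linear block as discussed above. All block choices are illustrative assumptions.
```python
import numpy as np
from scipy.signal import lfilter

rng = np.random.default_rng(0)
u = rng.uniform(-2.0, 2.0, 2000)                                # excitation signal

v = np.tanh(u) + 0.01 * rng.standard_normal(u.size)             # input static nonlinearity + noise
b, a = [0.2, 0.1], [1.0, -0.7]                                  # linear dynamic block
w = lfilter(b, a, v) + 0.01 * rng.standard_normal(u.size)       # disturbance acting on the linear block
y = w + 0.3 * w**3 + 0.01 * rng.standard_normal(u.size)         # output static nonlinearity + noise
# (u, y) is the input/output data that an identification procedure would work from
```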
712 Simulation for Squat Exercise of an Active Controlled Vibration Isolation and Stabilization System for Astronaut’s Exercise Platform
Authors: Ziraguen O. Williams, Shield B. Lin, Fouad N. Matari, Leslie J. Quiocho
Abstract:
In a task to assist NASA in analyzing the dynamic forces caused by operational countermeasures of an astronaut’s exercise platform impacting the spacecraft, feedback delay and signal noise were added to a simulation model of an active controlled vibration isolation and stabilization system that regulates the movement of the exercise platform. Two additional simulation tools used in this study were Trick and MBDyn, software simulation environments developed at the NASA Johnson Space Center. Simulation results obtained from these three tools were very similar. All simulation results support the hypothesis that an active controlled vibration isolation and stabilization system outperforms a passive controlled system even with the addition of feedback delay and signal noise to the active controlled system. In this paper, a squat exercise was used to create the excitation force for the simulation model; the exciter force from the squat exercise was calculated from motion capture of an exerciser. The simulation results demonstrate a much greater reduction in transmitted force in the active controlled system than in the passive controlled system.
Keywords: Astronaut, counterweight, stabilization, vibration.
711 Multi-Objective Optimization Contingent on Subcarrier-Wise Beamforming for Multiuser MIMO-OFDM Interference Channels
Authors: R. Vedhapriya Vadhana, Ruba Soundar, K. G. Jothi Shalini
Abstract:
We address the problem of interference over all channels in multiuser MIMO-OFDM systems. This paper contributes three beamforming strategies for multiuser multiple-input multiple-output systems with orthogonal frequency division multiplexing, in which the transmit and receive beamformers are obtained iteratively in closed-form stages. In the first case, the transmit (TX) beamformers remain fixed and the receive (RX) beamformers are computed. One interference term is eliminated for every user by projecting the transmit beamformers onto the null space of the relevant channels; the residual interference at the RX beamformer of every user is then removed by satisfying the orthogonality condition while maximizing the signal-to-noise ratio (SNR). The second case comprises jointly optimizing the TX and RX beamformers through constrained SNR maximization, building on the results of the first case. The third case also involves joint optimization of the TX-RX beamformers, but uses both constrained SNR and signal-to-interference-plus-noise ratio (SINR) maximization. Using the standardized channel model for IEEE 802.11n, the proposed simulation experiments show rapid beamforming and enhanced error performance.
Keywords: Beamforming, interference channels, MIMO-OFDM, multi-objective optimization.
710 Neural Networks-Based Acoustic Annoyance Model for Laptop Hard Disk Drive
Authors: Yi Chao Ma, Cheng Siong Chin, Wai Lok Woo
Abstract:
Over the last decade, there has been rapid growth in digital multimedia, such as high-resolution media files and three-dimensional movies, and hence a need for large digital storage such as the Hard Disk Drive (HDD). Users therefore expect to have a quieter HDD in their laptop. In this paper, a jury test was conducted on a group of 34 people, 17 of whom were students representing potential consumers, while the remainder were engineers familiar with the HDD. A total of 13 HDD sound samples were selected from over a hundred HDD noise recordings. These samples were selected based on an agreed subjective rating. The samples were played to the participants using a head-acoustics playback system, which enabled them to experience an environment as similar as possible to the one recorded. The analysis of the results indicates that different groups have different perceptions of the noises. Two neural-network-based acoustic annoyance models were established based on a back-propagation neural network. Four psychoacoustic metrics, loudness, sharpness, roughness and fluctuation strength, are used as the inputs of the model, and the subjective evaluation results are taken as the output. The developed models are reasonably accurate in simulating both training and test samples.
Keywords: Hard disk drive noise, jury test, neural network model, psychoacoustic annoyance.
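The model structure can be sketched as a small feed-forward regressor mapping the four psychoacoustic metrics to an annoyance score. The training data below are synthetic and the single hidden layer of 8 units is an assumption; the paper trains back-propagation networks on jury ratings of 13 HDD recordings.
```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
n = 200
X = np.column_stack([rng.uniform(1, 6, n),       # loudness (sone)
                     rng.uniform(1, 3, n),       # sharpness (acum)
                     rng.uniform(0, 1, n),       # roughness (asper)
                     rng.uniform(0, 1, n)])      # fluctuation strength (vacil)
y = 0.7 * X[:, 0] + 1.5 * X[:, 1] + 0.3 * rng.standard_normal(n)   # synthetic "jury" annoyance score

model = make_pipeline(StandardScaler(),
                      MLPRegressor(hidden_layer_sizes=(8,), max_iter=5000, random_state=0))
model.fit(X, y)
score = float(model.predict([[5.5, 2.8, 0.4, 0.3]])[0])
print("predicted annoyance for a loud, sharp drive:", round(score, 2))
```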
709 Dynamic Web-Based 2D Medical Image Visualization and Processing Software
Authors: Abdelhalim. N. Mohammed, Mohammed. Y. Esmail
Abstract:
In recent decades, medical imaging has been dominated by the use of costly film media for review and archiving of medical investigations; however, due to developments in network technologies and the common acceptance of the Digital Imaging and Communications in Medicine (DICOM) standard, another approach based on the World Wide Web has emerged. Web technologies have been used successfully in telemedicine applications, and the combination of web technologies with DICOM was used to design a web-based, open-source DICOM viewer. The web server allows querying and retrieval of images, and the images are viewed and manipulated inside a web browser without the need to pre-install any software. The dynamic web page for medical image visualization and processing was created using JavaScript and HTML5. The XAMPP ‘Apache server’ is used to create a local web server for testing and deployment of the dynamic site. The web-based viewer is connected to multiple devices through a local area network (LAN) to distribute the images inside healthcare facilities. The system offers several advantages over ordinary picture archiving and communication systems (PACS): it is easy to install, maintain and run on independent platforms, it allows images to be displayed and manipulated efficiently, and it is user-friendly and easy to integrate with existing systems that already make use of web technologies. A wavelet-based image compression technique is employed in which the 2-D discrete wavelet transform is used to decompose the image; the wavelet coefficients are then thresholded and transmitted with entropy encoding to decrease transmission time, storage cost and capacity. The performance of the compression was estimated using image quality metrics such as the mean square error ‘MSE’, peak signal-to-noise ratio ‘PSNR’ and compression ratio ‘CR’, which reached 83.86% when the ‘coif3’ wavelet filter was used.
Keywords: DICOM, discrete wavelet transform, PACS, HIS, LAN.
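A hedged sketch of the compression path: a level-3 2-D DWT with the coif3 wavelet, hard thresholding of the detail coefficients, then MSE, PSNR and a coefficient-count compression ratio. The threshold, the synthetic image and the decomposition level are assumptions, and the entropy-coding stage mentioned above is omitted.
```python
import numpy as np
import pywt

x = np.linspace(0, 1, 256)
img = 255.0 * np.outer(np.sin(2 * np.pi * x), np.cos(2 * np.pi * x)) ** 2   # smooth test image
threshold = 20.0                                        # assumed hard threshold

coeffs = pywt.wavedec2(img, wavelet="coif3", level=3)
cA, detail_levels = coeffs[0], coeffs[1:]
detail_levels = [tuple(pywt.threshold(d, threshold, mode="hard") for d in lvl)
                 for lvl in detail_levels]
recon = pywt.waverec2([cA] + detail_levels, wavelet="coif3")[: img.shape[0], : img.shape[1]]

mse = np.mean((img - recon) ** 2)
psnr = 10 * np.log10(255.0**2 / mse) if mse > 0 else float("inf")
total = cA.size + sum(d.size for lvl in detail_levels for d in lvl)
kept = cA.size + sum(np.count_nonzero(d) for lvl in detail_levels for d in lvl)
print(f"MSE={mse:.3f}  PSNR={psnr:.1f} dB  CR~{total / kept:.1f}:1")
```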
708 BIDENS: Iterative Density Based Biclustering Algorithm With Application to Gene Expression Analysis
Authors: Mohamed A. Mahfouz, M. A. Ismail
Abstract:
Biclustering is a very useful data mining technique for identifying patterns in which different genes are correlated over a subset of conditions in gene expression analysis. Association rule mining is an efficient approach to achieve biclustering, as in the BIMODULE algorithm, but it is sensitive to the values given to its input parameters and to the discretization procedure used in the preprocessing step; moreover, when noise is present, classical association rule miners discover multiple small fragments of the true bicluster but miss the true bicluster itself. This paper formally presents a generalized noise-tolerant bicluster model, termed μBicluster. An iterative algorithm based on the proposed model, termed BIDENS, is introduced that can discover a set of k possibly overlapping biclusters simultaneously. Our model uses a more flexible method to partition the dimensions to preserve meaningful and significant biclusters. The proposed algorithm allows the discovery of biclusters that are hard to discover with BIMODULE. An experimental study on yeast and human gene expression data and several artificial datasets shows that our algorithm offers substantial improvements over several previously proposed biclustering algorithms.
Keywords: Machine learning, biclustering, bi-dimensional clustering, gene expression analysis, data mining.
707 ANN Based Currency Recognition System using Compressed Gray Scale and Application for Sri Lankan Currency Notes - SLCRec
Authors: D. A. K. S. Gunaratna, N. D. Kodikara, H. L. Premaratne
Abstract:
Automatic currency note recognition invariably depends on the characteristics of the currency notes of a particular country, and the extraction of features directly affects the recognition ability. Sri Lanka has not previously been involved in any research or implementation of this kind. The proposed system, “SLCRec”, comes up with a solution focusing on minimizing the false rejection of notes. Sri Lankan currency notes undergo severe changes in image quality with usage; hence, a special linear transformation function is adopted to wipe out noise patterns from the background without affecting the notes’ characteristic images, and to make the images of interest re-appear. The transformation maps the original gray-scale range into a smaller range of 0 to 125. Applying edge detection after the transformation provides better robustness to noise and a fair representation of edges for both new and old damaged notes. A three-layer back-propagation neural network is presented, taking the number of edges detected in row order of the notes as input, and classification is made into the four classes of interest, which are the 100, 500, 1000 and 2000 rupee notes. The experiments showed good classification results and proved that the proposed methodology has the capability of separating the classes properly under varying image conditions.
Keywords: Artificial intelligence, linear transformation, pattern recognition.
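The pre-processing idea can be sketched as follows: a linear mapping of the gray range into 0-125, edge detection, and row-wise edge counts as the feature vector fed to the ANN. The plain rescaling, the Sobel edge step and the synthetic note image are assumptions standing in for the paper's special transformation function and real scanned notes.
```python
import numpy as np
from scipy import ndimage

rng = np.random.default_rng(0)
note = np.full((60, 140), 40.0)                 # synthetic stand-in for a scanned note
note[15:45, 30:110] = 180.0                     # bright motif
note += rng.normal(0.0, 8.0, note.shape)        # background noise pattern
note = note.clip(0, 255)

compressed = note / 255.0 * 125.0               # map the 0-255 range into 0-125

gx = ndimage.sobel(compressed, axis=1)
gy = ndimage.sobel(compressed, axis=0)
edges = np.hypot(gx, gy) > 60.0                 # assumed edge threshold

row_edge_counts = edges.sum(axis=1)             # feature vector fed to the three-layer ANN
print("feature vector length:", row_edge_counts.size,
      " max edges in a row:", int(row_edge_counts.max()))
```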