Search results for: MAP Decoding
Impact of the Decoder Connection Schemes on Iterative Decoding of GPCB Codes
Authors: Fouad Ayoub, Mohammed Lahmer, Mostafa Belkasmi, El Houssine Bouyakhf
Abstract:
In this paper we study the impact of connection schemes on the performance of iterative decoding of Generalized Parallel Concatenated Block (GPCB) codes constructed from one-step majority logic decodable (OSMLD) codes, and we propose a new connection scheme for decoding them. All iterative decoding connection schemes use a soft-input soft-output threshold decoding algorithm as the component decoder. Numerical results for GPCB codes transmitted over an Additive White Gaussian Noise (AWGN) channel are provided. They show that the proposed scheme outperforms Hagenauer's scheme and Lucas's scheme [1] and is slightly better than Pyndiah's scheme.
Keywords: Generalized parallel concatenated block codes, OSMLD codes, threshold decoding, iterative decoding scheme, and performance.
Odor Discrimination Using Neural Decoding of Olfactory Bulbs in Rats
Authors: K.-J. You, H.J. Lee, Y. Lang, C. Im, C.S. Koh, H.-C. Shin
Abstract:
This paper presents a novel method for inferring odors from neural activities observed in rats' main olfactory bulbs. Multi-channel extracellular single-unit recordings were made with micro-wire electrodes (tungsten, 50 μm, 32 channels) implanted in the mitral/tufted cell layers of the main olfactory bulb of anesthetized rats to obtain neural responses to various odors. The key feature, the neural response, was measured by subtracting the neural firing rate before the stimulus from that after it. For odor inference, we developed a decoding method based on maximum likelihood (ML) estimation. The results show that the average decoding accuracy is about 100.0%, 96.0%, 84.0%, and 100.0% for the four rats, respectively. This work has profound implications for a novel brain-machine interface system for odor inference.
Keywords: Biomedical signal processing, neural engineering, olfactory, neural decoding, BMI.
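As an illustration of the decoding step described above, here is a minimal maximum-likelihood classifier over firing-rate changes; the independent-Gaussian response model, channel count, and all numbers are illustrative assumptions rather than the paper's actual setup.

```python
import numpy as np

def fit_ml_decoder(X, y):
    """Estimate each odor's mean and variance of the per-channel firing-rate change.
    X: (trials, channels) firing-rate differences (after minus before stimulus); y: odor labels."""
    classes = np.unique(y)
    means = np.array([X[y == c].mean(axis=0) for c in classes])
    varis = np.array([X[y == c].var(axis=0) + 1e-6 for c in classes])
    return classes, means, varis

def ml_decode(x, classes, means, varis):
    """Pick the odor whose Gaussian response model gives the highest log-likelihood."""
    loglik = -0.5 * np.sum((x - means) ** 2 / varis + np.log(2 * np.pi * varis), axis=1)
    return classes[np.argmax(loglik)]

# Toy usage: 3 odors, 32 channels, 20 trials per odor.
rng = np.random.default_rng(0)
true_means = rng.normal(0, 2, size=(3, 32))
X = np.vstack([rng.normal(true_means[c], 1.0, size=(20, 32)) for c in range(3)])
y = np.repeat([0, 1, 2], 20)
classes, mu, var = fit_ml_decoder(X, y)
accuracy = np.mean([ml_decode(x, classes, mu, var) == label for x, label in zip(X, y)])
print(f"decoding accuracy on the toy data: {accuracy:.2f}")
```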
Quick Sequential Search Algorithm Used to Decode High-Frequency Matrices
Authors: Mohammed M. Siddeq, Mohammed H. Rasheed, Omar M. Salih, Marcos A. Rodrigues
Abstract:
This research proposes a data encoding and decoding method based on the Matrix Minimization algorithm, which is applied to high-frequency coefficients for compression/encoding. The algorithm starts by converting every three coefficients into a single value, using three different keys. The decoding/decompression uses the Quick Sequential Search (QSS) Decoding Algorithm presented in this research, which relies on a sequential search to recover the exact coefficients. In the next step, the decoded data are saved in an auxiliary array. The basic idea behind the auxiliary array is to store all possible decoded coefficients, so that another algorithm, such as a conventional sequential search, could retrieve the encoded/compressed data independently of the proposed algorithm. The experimental results show that the proposed decoding algorithm retrieves the original data faster than conventional sequential search algorithms.
Keywords: Matrix Minimization Algorithm, Decoding Sequential Search Algorithm, image compression, Discrete Cosine Transform, Discrete Wavelet Transform.
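As a rough illustration of the encode/decode idea described above, the sketch below packs each triple of coefficients into one value with three keys and recovers the triple by sequential search; the key values, coefficient range, and search order are assumptions made for this example, not the authors' actual parameters.

```python
import itertools

KEYS = (1, 17, 289)          # illustrative keys chosen so that the mapping is invertible
COEFF_RANGE = range(-8, 9)   # assumed bound on the high-frequency coefficient values

def encode_triple(a, b, c, keys=KEYS):
    """Matrix-Minimization-style step: compress three coefficients into one value."""
    return a * keys[0] + b * keys[1] + c * keys[2]

def qss_decode(value, keys=KEYS):
    """Sequential search over candidate triples until the encoded value is reproduced,
    returning the exact original coefficients."""
    for a, b, c in itertools.product(COEFF_RANGE, repeat=3):
        if encode_triple(a, b, c, keys) == value:
            return a, b, c
    raise ValueError("no matching triple found")

coeffs = [3, -2, 7, 0, 5, -1]
encoded = [encode_triple(*coeffs[i:i + 3]) for i in range(0, len(coeffs), 3)]
decoded = [x for v in encoded for x in qss_decode(v)]
assert decoded == coeffs                     # exact recovery
print("encoded values:", encoded)
```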
Advances in Artificial Intelligence Using Speech Recognition
Authors: Khaled M. Alhawiti
Abstract:
This paper presents a retrospective study of speech recognition systems and artificial intelligence. Speech recognition has become one of the most widely used technologies, as it offers a great opportunity to interact and communicate with automated machines. More precisely, speech recognition facilitates its users and helps them perform their daily routine tasks in a more convenient and effective manner. This research illustrates recent technological advancements associated with artificial intelligence. Recent research has revealed that the decoding of speech is the foremost issue affecting speech recognition. To overcome these issues, different statistical models were developed by researchers. Some of the most prominent statistical models include the acoustic model (AM), the language model (LM), the lexicon model, and hidden Markov models (HMMs). This research will help in understanding all of these statistical models of speech recognition. Researchers have also formulated different decoding methods, which are utilized for realistic decoding tasks and constrained artificial languages. These decoding methods include pattern recognition, acoustic phonetic, and artificial intelligence approaches. Artificial intelligence has been recognized as the most efficient and reliable of the methods used in speech recognition.
Keywords: Speech recognition, acoustic phonetic, artificial intelligence, Hidden Markov Models (HMM), statistical models of speech recognition, human machine performance.
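To make the decoding step concrete, the sketch below runs Viterbi decoding on a toy hidden Markov model; the states, observation symbols, and probabilities are invented for illustration and are not tied to any particular recognizer.

```python
import numpy as np

def viterbi(obs, pi, A, B):
    """Most likely hidden-state sequence of an HMM, computed in the log domain.
    pi: initial probabilities (S,), A: transitions (S, S), B: emissions (S, V)."""
    S, T = len(pi), len(obs)
    delta = np.full((T, S), -np.inf)        # best log-probability ending in each state
    psi = np.zeros((T, S), dtype=int)       # back-pointers
    delta[0] = np.log(pi) + np.log(B[:, obs[0]])
    for t in range(1, T):
        for s in range(S):
            scores = delta[t - 1] + np.log(A[:, s])
            psi[t, s] = np.argmax(scores)
            delta[t, s] = scores[psi[t, s]] + np.log(B[s, obs[t]])
    path = [int(np.argmax(delta[-1]))]
    for t in range(T - 1, 0, -1):
        path.append(int(psi[t, path[-1]]))
    return path[::-1]

# Toy example: two phone-like states, three observation symbols.
pi = np.array([0.6, 0.4])
A = np.array([[0.7, 0.3], [0.4, 0.6]])
B = np.array([[0.5, 0.4, 0.1], [0.1, 0.3, 0.6]])
print(viterbi([0, 1, 2, 2], pi, A, B))
```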
Network Coding with Buffer Scheme in Multicast for Broadband Wireless Network
Authors: Gunasekaran Raja, Ramkumar Jayaraman, Rajakumar Arul, Kottilingam Kottursamy
Abstract:
Broadband Wireless Networks (BWNs) are a promising technology today due to the increasing number of smartphones. The proposed buffering scheme, which uses network coding, considers reliability and a proper degree distribution in a Worldwide Interoperability for Microwave Access (WiMAX) multi-hop network. Network coding provides a secure mode of transmission that improves throughput and reduces packet loss in the multicast network. At the outset, an improved network coding scheme is proposed for the multicast wireless mesh network. To address the problem of performance overhead, the degree distribution guides the buffering decisions made in the encoding/decoding process. Consequently, a Buffer Scheme (BuS) based on network coding is proposed for the multi-hop network, in which the encoding process introduces a buffer for temporary storage so that packets are transmitted with a proper degree distribution. The simulation results report the number of packets received in encoding/decoding with a proper degree distribution under the buffering scheme.
Keywords: Encoding and decoding, buffer, network coding, degree distribution, broadband wireless networks, multicast.
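As a generic illustration of encoding and decoding in network coding (not the BuS scheme itself), the sketch below forms random XOR combinations of buffered source packets and recovers them by Gaussian elimination over GF(2); the packet sizes and coefficient distribution are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(1)
K, PKT_LEN = 4, 8                                   # illustrative generation size and packet length
source = [rng.integers(0, 256, PKT_LEN, dtype=np.uint8) for _ in range(K)]

def encode():
    """One coded packet: a random nonzero GF(2) coefficient vector plus the XOR
    of the buffered source packets it selects."""
    coeff = rng.integers(0, 2, K, dtype=np.uint8)
    if not coeff.any():
        coeff[rng.integers(K)] = 1
    payload = np.zeros(PKT_LEN, dtype=np.uint8)
    for i in np.flatnonzero(coeff):
        payload ^= source[i]
    return coeff, payload

def decode(coded):
    """Gauss-Jordan elimination over GF(2) to recover the K source packets."""
    A = np.array([c for c, _ in coded], dtype=np.uint8)
    P = np.array([p for _, p in coded], dtype=np.uint8)
    row = 0
    for col in range(K):
        pivot = next((r for r in range(row, len(A)) if A[r, col]), None)
        if pivot is None:
            raise ValueError("not enough innovative packets yet")
        A[[row, pivot]], P[[row, pivot]] = A[[pivot, row]], P[[pivot, row]]
        for r in range(len(A)):
            if r != row and A[r, col]:
                A[r] ^= A[row]
                P[r] ^= P[row]
        row += 1
    return [P[i] for i in range(K)]

coded, recovered = [], None
while recovered is None:
    coded.append(encode())                          # receiver keeps buffering coded packets
    try:
        recovered = decode(coded)
    except ValueError:
        pass                                        # rank still deficient, wait for more packets
assert all(np.array_equal(r, s) for r, s in zip(recovered, source))
print("recovered", K, "source packets from", len(coded), "coded packets")
```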
A Guide to the Implementation of Ambisonics Super Stereo
Authors: Alessio Mastrorillo, Giuseppe Silvi, Francesco Scagliola
Abstract:
This paper explores the decoding of Ambisonics material into 2-channel mixing formats, addressing challenges related to stereo speakers and headphones. We present the Universal HJ (UHJ) format as a solution, enabling the preservation of the entire horizontal plane and offering versatile spatial audio experiences. Our paper presents a UHJ format decoder, explaining its design, computational aspects, and empirical optimization. We discuss the advantages of UHJ decoding, potential applications, and its significance in music composition. Additionally, we highlight the integration of this decoder within the Envelop for Live (E4L) suite.
Keywords: Ambisonics, UHJ, quadrature filter, virtual reality, Gerzon, decoder, stereo, binaural, biquad.
Performance of Block Codes Using the Eigenstructure of the Code Correlation Matrix and Soft-Decision Decoding of BPSK
Authors: Vitalice K. Oduol, C. Ardil
Abstract:
A method is presented for obtaining the error probability of block codes. The method is based on the eigenvalue-eigenvector properties of the code correlation matrix. It is found that, under a unitary transformation and for an additive white Gaussian noise environment, the performance evaluation of a block code becomes a one-dimensional problem in which only one eigenvalue and its corresponding eigenvector are needed in the computation. The obtained error rate results show remarkable agreement between simulations and analysis.
Keywords: bit error rate, block codes, code correlation matrix, eigenstructure, soft-decision decoding, weight vector.
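The eigenstructure analysis itself is not reproduced here, but the soft-decision (correlation) decoding of a block code with BPSK over AWGN that it evaluates can be sketched as follows; the (7,4) Hamming code and simulation parameters are illustrative choices.

```python
import itertools
import numpy as np

# Generator matrix of a (7,4) Hamming code, used here only as a small example block code.
G = np.array([[1,0,0,0,1,1,0],
              [0,1,0,0,1,0,1],
              [0,0,1,0,0,1,1],
              [0,0,0,1,1,1,1]], dtype=int)
msgs = np.array(list(itertools.product([0, 1], repeat=4)))
codebook = msgs @ G % 2                  # all 16 codewords
bpsk = 1 - 2 * codebook                  # map 0 -> +1, 1 -> -1

def simulate_ber(ebn0_db, n_blocks=20000, rate=4/7, rng=np.random.default_rng(0)):
    """Soft-decision decoding: pick the codeword whose BPSK image has maximum correlation."""
    sigma = np.sqrt(1.0 / (2 * rate * 10 ** (ebn0_db / 10)))
    idx = rng.integers(len(codebook), size=n_blocks)
    r = bpsk[idx] + sigma * rng.standard_normal((n_blocks, 7))
    decided = np.argmax(r @ bpsk.T, axis=1)          # correlation metric (equivalent to ML here)
    return np.mean(codebook[decided] != codebook[idx])

for snr_db in (0, 2, 4, 6):
    print(snr_db, "dB -> BER", simulate_ber(snr_db))
```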
Parallel Joint Channel Coding and Cryptography
Authors: Nataša Živić, Christoph Ruland
Abstract:
The method of Parallel Joint Channel Coding and Cryptography is analyzed and simulated in this paper. The method is an extension of Soft Input Decryption with feedback, which is used to improve the channel decoding of secured messages. Parallel Joint Channel Coding and Cryptography results in an improved coding gain for channel decoding of more than 2 dB. These results follow from the combination of receiver components and their interoperability.
Keywords: Block length, Coding gain, Feedback, L-values, Parallel Joint Channel Coding and Cryptography, Soft Input Decryption.
Support Vector Machine based Intelligent Watermark Decoding for Anticipated Attack
Authors: Syed Fahad Tahir, Asifullah Khan, Abdul Majid, Anwar M. Mirza
Abstract:
In this paper, we present an innovative scheme for blindly extracting message bits from an image distorted by an attack. A Support Vector Machine (SVM) is used to nonlinearly classify the bits of the embedded message. Traditionally, a hard decoder is used under the assumption that the underlying model of the Discrete Cosine Transform (DCT) coefficients does not change appreciably. In the case of an attack, however, the distribution of the image coefficients is heavily altered. The distributions of the sufficient statistics at the receiving end corresponding to the antipodal signals overlap, and a simple hard decoder fails to classify them properly. We therefore treat message retrieval of the antipodal signals as a binary classification problem, and machine learning techniques such as the SVM are used to retrieve the message when a certain specific class of attacks is most probable. To validate the SVM-based decoding scheme, we take Gaussian noise as a test case. We generate a data set using 125 images and 25 different keys. A polynomial SVM kernel achieved 100 percent accuracy on the test data.
Keywords: Bit Correct Ratio (BCR), Grid Search, Intelligent Decoding, Jackknife Technique, Support Vector Machine (SVM), Watermarking.
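A minimal sketch of the SVM-based bit decoding idea, using a polynomial kernel and a grid search on synthetic DCT-like features; the feature model, grid values, and noise level are placeholders, not the paper's actual data set.

```python
import numpy as np
from sklearn.model_selection import GridSearchCV, train_test_split
from sklearn.svm import SVC

rng = np.random.default_rng(0)

# Synthetic stand-in for DCT-coefficient features of attacked blocks: antipodal
# embedding (+d for bit 1, -d for bit 0) buried in heavy Gaussian "attack" noise.
n, dim, d = 2000, 8, 0.5
bits = rng.integers(0, 2, n)
X = rng.normal(0.0, 1.0, (n, dim)) + (2 * bits[:, None] - 1) * d

X_tr, X_te, y_tr, y_te = train_test_split(X, bits, test_size=0.25, random_state=0)

# Grid search over a polynomial-kernel SVM, then measure the bit correct ratio on held-out data.
grid = GridSearchCV(SVC(kernel="poly"),
                    param_grid={"C": [0.1, 1, 10], "degree": [2, 3]},
                    cv=3)
grid.fit(X_tr, y_tr)
print("best parameters:", grid.best_params_)
print("bit correct ratio on the test set:", grid.score(X_te, y_te))
```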
A Novel Receiver Algorithm for Coherent Underwater Acoustic Communications
Authors: Liang Zhao, Jianhua Ge
Abstract:
In this paper, we propose a novel receiver algorithm for coherent underwater acoustic communications. The proposed receiver is composed of three parts: (1) Doppler tracking and correction, (2) time reversal channel estimation and combining, and (3) joint iterative equalization and decoding (JIED). To reduce computational complexity and optimize the equalization algorithm, time reversal (TR) channel estimation and combining is adopted to simplify the multi-channel adaptive decision feedback equalizer (ADFE) into a single-channel ADFE without reducing system performance. Simultaneously, turbo principles are adopted to form the joint iterative ADFE and convolutional decoder (JIED). In the JIED scheme, the ADFE and the decoder exchange soft information in an iterative manner, which enhances the equalizer performance through decoding gain. The simulation results show that the proposed algorithm reduces computational complexity and improves the performance of the equalizer. Therefore, the performance of coherent underwater acoustic communications can be improved greatly.
Keywords: Underwater acoustic communication, time reversal (TR) combining, joint iterative equalization and decoding (JIED).
The Use of Software and Internet Search Engines to Develop the Encoding and Decoding Skills of a Dyslexic Learner: A Case Study
Authors: Rabih Joseph Nabhan
Abstract:
This case study explores the impact of two major computer software programs, Learn to Speak English and Learn English Spelling and Pronunciation, and some Internet search engines, such as Google, on mending the decoding and spelling deficiency of Simon X, a dyslexic student. The improvement in decoding and spelling may result in better reading comprehension and composition writing. Some computer programs and Internet materials can help him regain the missing awareness and consequently restore his self-confidence and self-esteem. In addition, this study provides a systematic plan comprising a set of activities (four computer programs and Internet materials) which address the problem from the lowest to the highest levels of phoneme and phonological awareness. Four methods of data collection (accounts, observations, published tests, and interviews) provide the triangulation needed to collect data validly and reliably before, during, and after the plan. The collected data are analyzed quantitatively, qualitatively, or with a combination of both, and tables and figures are used to provide a clear and uncomplicated illustration of some of the data. The improvement in the decoding, spelling, reading comprehension, and composition writing skills that occurred is demonstrated through authentic materials produced by the student under study. Such materials include a comparison between two sample passages written by the learner before and after the plan, a genuine computer chat conversation, and the scores of the academic year that followed the execution of the plan. Based on these results, the researcher recommends further studies on other Lebanese dyslexic learners using the computer to mend their language problems, in order to design a more reliable software program that can address this disability more efficiently and successfully.
Keywords: Analysis, awareness, dyslexic, software.
Equalization Algorithms for MIMO System
Authors: Said Elkassimi, Said Safi, B. Manaut
Abstract:
In recent years, multi-antenna techniques have been considered a potential solution for increasing the throughput of future wireless communication systems. The objective of this article is to study the MIMO (Multiple Input Multiple Output) transmission and reception system and to present the different reception decoding techniques. First, we present the least complex techniques, linear receivers such as the zero-forcing (ZF) and minimum mean squared error (MMSE) equalizers. We then describe a nonlinear technique called ordered successive interference cancellation (OSIC) and the optimal detector based on the maximum likelihood (ML) criterion. Finally, we simulate the associated decoding algorithms for the MIMO system (ZF, MMSE, OSIC, and ML) and compare the performance of these algorithms in the MIMO context.
Keywords: Multiple Input Multiple Output (MIMO), ZF, MMSE, Ordered Successive Interference Cancellation (OSIC), ML, Successive Interference Cancellation (SIC).
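A compact numerical sketch of the ZF and MMSE linear detectors mentioned above; the antenna counts, constellation, and SNR are arbitrary illustrative values.

```python
import numpy as np

rng = np.random.default_rng(0)
nt, nr, snr_db = 4, 4, 15
sigma2 = 10 ** (-snr_db / 10)

# Rayleigh MIMO channel, QPSK symbols, additive white Gaussian noise.
H = (rng.standard_normal((nr, nt)) + 1j * rng.standard_normal((nr, nt))) / np.sqrt(2)
x = (rng.choice([-1, 1], nt) + 1j * rng.choice([-1, 1], nt)) / np.sqrt(2)
n = np.sqrt(sigma2 / 2) * (rng.standard_normal(nr) + 1j * rng.standard_normal(nr))
y = H @ x + n

# Zero-forcing: pseudo-inverse of the channel matrix.
x_zf = np.linalg.pinv(H) @ y

# MMSE: regularized inverse that accounts for the noise variance.
x_mmse = np.linalg.solve(H.conj().T @ H + sigma2 * np.eye(nt), H.conj().T @ y)

def qpsk_slice(z):
    return (np.sign(z.real) + 1j * np.sign(z.imag)) / np.sqrt(2)

print("transmitted:   ", x)
print("ZF decisions:  ", qpsk_slice(x_zf))
print("MMSE decisions:", qpsk_slice(x_mmse))
```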
Developing Laser Spot Position Determination and PRF Code Detection with Quadrant Detector
Authors: Mohamed Fathy Heweage, Xiao Wen, Ayman Mokhtar, Ahmed Eldamarawy
Abstract:
In this paper, we are interested in the modeling, simulation, and measurement of the laser spot position with a quadrant detector. We enhance the detection and tracking of a microcontroller-based semi-active laser weapon decoding system. The system receives the reflected pulse through the quadrant detector and processes the laser pulses through a processing circuit, with a microcontroller decoding the laser pulses reflected by the target. The decoding system enhances the seeker accuracy, the laser detection time based on the number of received pulses is reduced, and a gate is used to limit the laser pulse width. The model is implemented based on the Pulse Repetition Frequency (PRF) technique with two microcontroller units (MCUs). MCU1 generates laser pulses with different codes, and MCU2 decodes the laser code and locks the system onto the specific code. The codes are selected using the two selector switches. The system is implemented and tested in Proteus ISIS software. The full position determination circuit with the detector is implemented. A general system for spot position determination was built, with the laser PRF for the incident radiation and a mechanical system for adjusting the setup at different angles. The test results show that the system can detect the laser code with only three received pulses based on the narrow gate signal, and good agreement between simulated and measured system performance is obtained.
Keywords: 4-quadrant detector, pulse code detection, laser guided weapons, pulse repetition frequency, ATmega 32 microcontrollers.
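A simplified software sketch of the PRF-code validation idea: measure inter-pulse intervals, accept only pulses whose interval falls inside a narrow gate around the expected period, and declare lock after three consistent pulses; the tolerance and timestamps are invented for illustration.

```python
def prf_lock(timestamps, prf_hz, gate=0.02, pulses_to_lock=3):
    """Return the time at which the decoder locks onto the expected PRF code, or None.
    `gate` is the allowed relative deviation of the inter-pulse period."""
    expected = 1.0 / prf_hz
    consistent = 1                          # the first pulse starts a candidate train
    for prev, curr in zip(timestamps, timestamps[1:]):
        if abs((curr - prev) - expected) <= gate * expected:
            consistent += 1
            if consistent >= pulses_to_lock:
                return curr                 # locked after three matching pulses
        else:
            consistent = 1                  # an out-of-gate pulse restarts the count
    return None

# Illustrative pulse train at 20 Hz preceded by one spurious pulse.
pulses = [0.000, 0.013, 0.063, 0.113, 0.163]
print("locked at t =", prf_lock(pulses, prf_hz=20.0))
```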
Enhancing the Error-Correcting Performance of LDPC Codes through an Efficient Use of Decoding Iterations
Authors: Insah Bhurtah, P. Clarel Catherine, K. M. Sunjiv Soyjaudah
Abstract:
The decoding of Low-Density Parity-Check (LDPC) codes operates over a redundant structure known as the bipartite graph, meaning that the full set of bit nodes is not absolutely necessary for decoder convergence. In 2008, Soyjaudah and Catherine designed a recovery algorithm for LDPC codes based on this assumption and showed that their codes outperformed conventional LDPC codes in error-correcting performance. In this work, the use of the recovery algorithm is further explored to test the performance of LDPC codes as the number of iterations is progressively increased. For experiments conducted with small block lengths of up to 800 bits and up to 2000 iterations, the results demonstrate that, contrary to conventional wisdom, the error-correcting performance keeps improving as the number of iterations increases.
Keywords: Error-correcting codes, information theory, low-density parity-check codes, sum-product algorithm.
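The recovery algorithm itself is not reproduced here; as a generic illustration of how decoding iterations interact with a parity-check matrix, below is a simple bit-flipping decoder whose iteration limit can be raised, run on a toy parity-check matrix that is not an LDPC code from the paper.

```python
import numpy as np

def bit_flip_decode(H, r, max_iter=2000):
    """Hard-decision bit-flipping: repeatedly flip the bits involved in the most
    unsatisfied parity checks until the syndrome is zero or max_iter is reached."""
    x = r.copy()
    for it in range(max_iter):
        syndrome = H @ x % 2
        if not syndrome.any():
            return x, it                        # valid codeword reached
        unsat = syndrome @ H                    # per-bit count of failed checks
        x[unsat == unsat.max()] ^= 1            # flip the worst offenders
    return x, max_iter

# Toy parity-check matrix, used only to exercise the iteration loop.
H = np.array([[1,1,0,1,1,0,0],
              [1,0,1,1,0,1,0],
              [0,1,1,1,0,0,1]], dtype=int)
received = np.zeros(7, dtype=int)               # the all-zero codeword was sent
received[2] ^= 1                                # one bit error
decoded, iters = bit_flip_decode(H, received)
print(decoded, "after", iters, "iterations")
```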
Shot Transition Detection with Minimal Decoding of MPEG Video Streams
Authors: Mona A. Fouad, Fatma M. Bayoumi, Hoda M. Onsi, Mohamed G. Darwish
Abstract:
Digital libraries increasingly need to support users with powerful and easy-to-use tools for searching, browsing, and retrieving media information. The starting point for these tasks is the segmentation of video content into shots. To segment MPEG video streams into shots, a fully automatic procedure for detecting both abrupt and gradual transitions (dissolve and fade groups) with minimal decoding in real time is developed in this study. Each transition type is explored through two phases: analysis of macro-block types in B-frames, and on-demand analysis of intensity information. The experimental results show remarkable performance in detecting gradual transitions for some kinds of input data and comparable results for the rest of the examined video streams. Almost all abrupt transitions could be detected with very few false positive alarms.
Keywords: Adaptive threshold, abrupt transitions, gradual transitions, MPEG video streams.
Metaheuristic Algorithms for Decoding Binary Linear Codes
Authors: Hassan Berbia, Faissal Elbouanani, Rahal Romadi, Mostafa Belkasmi
Abstract:
This paper introduces two decoders for binary linear codes based on metaheuristics. The first uses a genetic algorithm, and the second is based on the combination of a genetic algorithm with a feed-forward neural network. The decoder based on genetic algorithms (DAG), applied to BCH and convolutional codes, gives good performance compared to the Chase-2 and Viterbi algorithms respectively, and reaches the performance of OSD-3 for some Quadratic Residue (QR) codes. This algorithm is less complex for linear block codes of large block length; furthermore, its performance can be improved by tuning the decoder's parameters, in particular the population size and the number of generations. In the second algorithm, the search space, in contrast to that of the DAG (which was limited to the codeword space), covers the whole binary vector space. It avoids a great number of encoding operations by using a neural network, which greatly reduces the complexity of the decoder while maintaining comparable performance.
Keywords: Block code, decoding, metaheuristic, genetic algorithm, neural network.
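The exact operators of the DAG decoder are not given in the abstract; the sketch below shows the general shape of a genetic-algorithm decoder, evolving candidate information words toward the one whose re-encoded BPSK codeword is closest to the received soft vector (the code, population size, and operators are illustrative choices).

```python
import numpy as np

rng = np.random.default_rng(0)

# Example (7,4) code; the GA searches over the 4-bit information words.
G = np.array([[1,0,0,0,1,1,0],
              [0,1,0,0,1,0,1],
              [0,0,1,0,0,1,1],
              [0,0,0,1,1,1,1]], dtype=int)

def fitness(pop, r):
    """Negative squared distance between the received vector and each re-encoded BPSK codeword."""
    cw = pop @ G % 2
    return -np.sum((r - (1 - 2 * cw)) ** 2, axis=1)

def ga_decode(r, pop_size=30, generations=40, p_mut=0.05):
    pop = rng.integers(0, 2, (pop_size, 4))
    for _ in range(generations):
        fit = fitness(pop, r)
        parents = pop[np.argsort(fit)[::-1][: pop_size // 2]]        # truncation selection (elitist)
        cut = rng.integers(1, 4, size=pop_size // 2)                 # single-point crossover
        kids = np.where(np.arange(4) < cut[:, None], parents, np.roll(parents, 1, axis=0))
        kids ^= rng.random(kids.shape) < p_mut                       # bit-flip mutation
        pop = np.vstack([parents, kids])
    return pop[np.argmax(fitness(pop, r))]

# Transmit the information word 1011 over AWGN and try to decode it back.
info = np.array([1, 0, 1, 1])
received = (1 - 2 * (info @ G % 2)) + 0.6 * rng.standard_normal(7)
print("true:", info, "decoded:", ga_decode(received))
```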
Self-Supervised Pretraining on Paired Sequences of fMRI Data for Transfer Learning to Brain Decoding Tasks
Authors: Sean Paulsen, Michael Casey
Abstract:
In this work, we present a self-supervised pretraining framework for transformers on functional Magnetic Resonance Imaging (fMRI) data. First, we pretrain our architecture on two self-supervised tasks simultaneously to teach the model a general understanding of the temporal and spatial dynamics of human auditory cortex during music listening. Our pretraining results are the first to suggest a synergistic effect of multitask training on fMRI data. Second, we finetune the pretrained models and train additional fresh models on a supervised fMRI classification task. We observe significantly improved accuracy on held-out runs with the finetuned models, which demonstrates the ability of our pretraining tasks to facilitate transfer learning. This work contributes to the growing body of literature on transformer architectures for pretraining and transfer learning with fMRI data, and serves as a proof of concept for our pretraining tasks and multitask pretraining on fMRI data.
Keywords: Transfer learning, fMRI, self-supervised, brain decoding, transformer, multitask training.
SySRA: A System of a Continuous Speech Recognition in Arab Language
Authors: Samir Abdelhamid, Noureddine Bouguechal
Abstract:
We report in this paper the model adopted by SySRA, our system for continuous speech recognition in the Arabic language, and the results obtained so far. The system uses the Arabdic-10 database, a manually segmented word corpus for the Arabic language. Phonetic decoding is handled by an expert system whose knowledge base is expressed in the form of production rules; this expert system transforms a vocal signal into a phonetic lattice. The higher level of the system takes care of recognizing the lattice thus obtained and rendering it as written sentences (orthographic form). This level initially contains the lexical analyzer, which is none other than the recognition module. We subjected this analyzer to a set of spectrograms obtained by dictating a score of sentences in Arabic. The recognition rate for these sentences is about 70%, which is, to our knowledge, the best result reported for the recognition of Arabic. The test set consists of twenty sentences from four speakers who did not take part in the training.
Keywords: Continuous speech recognition, lexical analyzer, phonetic decoding, phonetic lattice, vocal signal.
Combined Source and Channel Coding for Image Transmission Using Enhanced Turbo Codes in AWGN and Rayleigh Channel
Authors: N. S. Pradeep, M. Balasingh Moses, V. Aarthi
Abstract:
Any signal transmitted over a channel is corrupted by noise and interference. A host of channel coding techniques has been proposed to alleviate the effect of such noise and interference. Among these, turbo codes are recommended because of their increased capacity at higher transmission rates and their superior performance over convolutional codes. Multimedia elements, which carry large amounts of data, are best protected by turbo codes. The turbo decoder employs the Maximum A Posteriori (MAP) and Soft Output Viterbi Algorithm (SOVA) decoding algorithms. Conventional turbo-coded systems employ Equal Error Protection (EEP), in which the protection of all the data in an information message is uniform. Some applications involve Unequal Error Protection (UEP), in which the level of protection is higher for important information bits than for other bits. In this work, the traditional log-MAP decoding algorithm is enhanced by using optimized scaling factors for both component decoders. The error-correcting performance in the presence of UEP over the Additive White Gaussian Noise (AWGN) channel and Rayleigh fading is analyzed for image transmission with the Discrete Cosine Transform (DCT) as the source coding technique. This paper compares the performance of the log-MAP, Modified log-MAP (MlogMAP), and Enhanced log-MAP (ElogMAP) algorithms used for image transmission. The MlogMAP algorithm is found to be best for lower Eb/N0 values, but for higher Eb/N0 the ElogMAP performs better with optimized scaling factors. The performance comparison between the AWGN and fading channels indicates the robustness of the proposed algorithm. According to the performance of the three different message classes, class 3 is more strongly protected than the other two classes. From the performance analysis, it is observed that the ElogMAP algorithm with UEP is best for the transmission of an image compared to the log-MAP and MlogMAP decoding algorithms.
Keywords: AWGN, BER, DCT, fading, MAP, UEP.
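The enhancement described above amounts to scaling the extrinsic information exchanged between the two component decoders; a minimal sketch of that single step, with placeholder LLR values and candidate scaling factors (the full MAP recursions are omitted).

```python
def scaled_extrinsic(L_app, L_ch, L_apriori, s):
    """Extrinsic LLR passed to the other component decoder, scaled by a factor s
    (s < 1 compensates for the over-optimistic LLRs of max-log-MAP decoding)."""
    return s * (L_app - L_ch - L_apriori)

# Placeholder LLR values for one systematic bit position.
L_app, L_ch, L_apriori = 4.2, 1.5, 0.8          # decoder output, channel, and a priori LLRs
for s in (1.0, 0.75, 0.7):                      # candidate scaling factors for the two decoders
    print("s =", s, "->", scaled_extrinsic(L_app, L_ch, L_apriori, s))
```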
Lowering Error Floors by Concatenation of Low-Density Parity-Check and Array Code
Authors: Cinna Soltanpur, Mohammad Ghamari, Behzad Momahed Heravi, Fatemeh Zare
Abstract:
Low-density parity-check (LDPC) codes have been shown to deliver capacity-approaching performance; however, problematic graphical structures (e.g. trapping sets) in the Tanner graph of some LDPC codes can cause high error floors in bit-error-ratio (BER) performance under the conventional sum-product algorithm (SPA). This paper presents a serial concatenation scheme to avoid the trapping sets and to lower the error floors of the LDPC code. The outer code in the proposed concatenation is the LDPC code, and the inner code is a high-rate array code. This approach applies an interactive hybrid process between BCJR decoding for the array code and the SPA for the LDPC code, together with bit-pinning and bit-flipping techniques. The Margulis code of size (2640, 1320) has been used for the simulation, and it has been shown that the proposed concatenation and decoding scheme can considerably improve the error floor performance with minimal rate loss.
Keywords: Concatenated coding, low-density parity-check codes, array code, error floors.
Codebook Generation for Vector Quantization on Orthogonal Polynomials based Transform Coding
Authors: R. Krishnamoorthi, N. Kannan
Abstract:
In this paper, a new codebook generation algorithm is proposed for vector quantization (VQ) in image coding. The significant features of the training image vectors are extracted using the proposed Orthogonal Polynomials based transformation. We propose to generate the codebook by partitioning these feature vectors into a binary tree. Each feature vector at a non-terminal node of the binary tree is directed to one of the two descendants by comparing a single feature associated with that node to a threshold. The binary tree codebook is used for encoding and decoding the feature vectors. In the decoding process, the feature vectors are subjected to the inverse transformation, using the basis functions of the proposed Orthogonal Polynomials based transformation, to recover the approximated input image training vectors. The results of the proposed coding are compared with VQ based on the Discrete Cosine Transform (DCT) and the Pairwise Nearest Neighbor (PNN) algorithm. The new algorithm results in a considerable reduction in computation time and provides better reconstructed picture quality.
Keywords: Orthogonal Polynomials, Image Coding, Vector Quantization, TSVQ, Binary Tree Classifier
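A rough sketch of the binary-tree codebook idea described above: each internal node compares one feature against a threshold and each leaf stores a codeword; the split rule, thresholds, and stopping criterion are simple illustrative choices, and plain random vectors stand in for the orthogonal-polynomial features.

```python
import numpy as np

def build_tree(vectors, min_leaf=8):
    """Split on the highest-variance feature at its median until leaves are small."""
    if len(vectors) <= min_leaf:
        return {"codeword": vectors.mean(axis=0)}            # leaf: representative vector
    feat = int(np.argmax(vectors.var(axis=0)))
    thr = float(np.median(vectors[:, feat]))
    left = vectors[vectors[:, feat] <= thr]
    right = vectors[vectors[:, feat] > thr]
    if len(left) == 0 or len(right) == 0:
        return {"codeword": vectors.mean(axis=0)}
    return {"feat": feat, "thr": thr,
            "left": build_tree(left, min_leaf), "right": build_tree(right, min_leaf)}

def encode(tree, v, path=""):
    """Walk the tree comparing a single feature per node; the path taken is the VQ index."""
    if "codeword" in tree:
        return path, tree["codeword"]
    go_left = v[tree["feat"]] <= tree["thr"]
    return encode(tree["left"] if go_left else tree["right"], v, path + ("0" if go_left else "1"))

rng = np.random.default_rng(0)
training = rng.standard_normal((256, 16))     # stand-in for transform-domain feature vectors
tree = build_tree(training)
index, approx = encode(tree, training[0])
print("index bits:", index, "reconstruction error:", np.linalg.norm(training[0] - approx))
```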
A Low Power SRAM Based on Novel Word-Line Decoding
Authors: Arash Azizi Mazreah, Mohammad T. Manzuri Shalmani, Hamid Barati, Ali Barati, Ali Sarchami
Abstract:
This paper proposes a low power SRAM based on a five-transistor SRAM cell. The proposed SRAM uses novel word-line decoding such that, during a read/write operation, only the selected cell is connected to its bit-line, whereas in a conventional SRAM (CV-SRAM) all cells in the selected row are connected to their bit-lines, which develops differential voltages across all bit-lines and wastes energy on the unselected ones. In the proposed SRAM, the memory array is divided into two halves, which reduces the data-line capacitance. The proposed SRAM also uses a single bit-line and thus has lower bit-line leakage than the CV-SRAM. Furthermore, the proposed SRAM incurs no area overhead and has comparable read/write performance versus the CV-SRAM. Simulation results in a standard 0.25 μm CMOS technology show that, in the worst case, the proposed SRAM has 80% lower dynamic energy consumption per cycle than the CV-SRAM. In addition, the per-cycle energy consumption of the proposed SRAM and the CV-SRAM is investigated analytically, and the results are in good agreement with the simulation results.
Keywords: SRAM, write operation, read operation, capacitances, dynamic energy consumption.
Performance Analysis of HSDPA Systems using Low-Density Parity-Check (LDPC) Coding as Compared to Turbo Coding
Authors: K. Anitha Sheela, J. Tarun Kumar
Abstract:
HSDPA is a new feature introduced in the Release 5 specifications of the 3GPP WCDMA/UTRA standard to realize higher data rates together with lower round-trip times. Moreover, the HSDPA concept offers an outstanding improvement in packet throughput and significantly reduces the packet call transfer delay compared to the Release 99 DSCH. To date, the HSDPA system uses turbo coding, which is among the best coding techniques for approaching the Shannon limit. However, the main drawbacks of turbo coding are its high decoding complexity and high latency, which make it unsuitable for some applications like satellite communications, since the transmission distance itself introduces latency due to the limited speed of light. Hence, this paper proposes using LDPC coding in place of turbo coding for the HSDPA system, which decreases the latency and decoding complexity, although LDPC coding increases the encoding complexity. Though the complexity of the transmitter increases at the NodeB, the end user benefits in terms of receiver complexity and bit error rate. In this paper, the LDPC encoder is implemented using a sparse parity-check matrix H to generate codewords, and the belief propagation algorithm is used for LDPC decoding. Simulation results show that with LDPC coding the BER drops sharply as the number of iterations increases, for only a small increase in Eb/N0, which is not possible with turbo coding. The same BER was also achieved using fewer iterations, so the latency and receiver complexity decrease with LDPC coding. HSDPA increases the downlink data rate within a cell to a theoretical maximum of 14 Mbps, with 2 Mbps on the uplink. The changes that HSDPA enables include better quality and more reliable and robust data services. In other words, while realistic data rates are only a few Mbps, the achievable quality and number of users improve significantly.
Keywords: AMC, HSDPA, LDPC, WCDMA, 3GPP.
Multi-VSS Scheme by Shifting Random Grids
Authors: Joy Jo-Yi Chang, Justie Su-Tzu Juan
Abstract:
Visual secret sharing (VSS) was proposed by Naor and Shamir in 1995. Visual secret sharing schemes encode a secret image into two or more share images, and a single share image reveals no information about the secret image. When the shares are superimposed, the secret can be restored by human vision. Traditional VSS schemes have some problems, such as pixel expansion and high implementation cost, and they can encode only one secret image. Schemes for encrypting multiple secret images into two shares by random grids were proposed by Chen et al. in 2008. However, when the restored secret images suffer from heavy distortion, those schemes are severely limited in decoding; in other words, if there is too much distortion, not much information can be encrypted. Thus, if the distortion can be kept very small, more secret images can be encrypted. In this paper, four new algorithms based on Chang et al.'s scheme from 2010 are proposed. The first algorithm keeps the distortion very small. The second distributes the distortion between the two restored secret images. The third achieves no distortion for special secret images. The fourth encrypts three secret images. These algorithms not only retain the advantages of VSS but also mitigate its decoding problems.
Keywords: Visual cryptography, visual secret sharing, random grids, multiple secret image sharing.
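For reference, the classic single-secret random-grid construction that such schemes build on can be sketched in a few lines; the image and its size are arbitrary, and the multi-secret shifting itself is not reproduced.

```python
import numpy as np

rng = np.random.default_rng(0)

def random_grid_shares(secret):
    """Encode a binary secret image (1 = black) into two random-looking shares.
    Stacking the shares (pixel-wise OR) reveals the secret to the human eye."""
    share1 = rng.integers(0, 2, secret.shape)             # a purely random grid
    share2 = np.where(secret == 0, share1, 1 - share1)    # equal on white, complementary on black
    return share1, share2

secret = np.zeros((6, 6), dtype=int)
secret[2:4, 1:5] = 1                                      # a small black bar as the "secret"
s1, s2 = random_grid_shares(secret)
print(s1 | s2)                                            # superimposed transparencies
```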
Turbo-Coded Mobile Terrestrial Communication Systems in Urban and Suburban Areas for Wireless Multimedia Applications
Authors: F. Mehran
Abstract:
With the rapid popularization of internet services, it is apparent that the next generation of terrestrial communication systems must be capable of supporting various applications like voice, video, and data. This paper presents the performance evaluation of turbo-coded mobile terrestrial communication systems, which are capable of providing high quality services for delay-sensitive (voice or video) and delay-tolerant (text transmission) multimedia applications in urban and suburban areas. Different types of multimedia information require different service qualities, which are generally expressed in terms of a maximum acceptable bit-error-rate (BER) and a maximum tolerable latency. The breakthrough discovery of turbo codes allows us to significantly reduce the probability of bit errors with feasible latency. In a turbo-coded system, a trade-off between latency and BER results from the choice of convolutional component codes, interleaver type and size, decoding algorithm, and the number of decoding iterations. This trade-off can be exploited for multimedia applications by using optimal and suboptimal combinations of performance parameters to achieve different service qualities. The results therefore suggest an adaptive framework for turbo-coded wireless multimedia communications that incorporates a set of performance parameters achieving an appropriate set of service qualities, depending on the application's requirements.
Keywords: Mobile communications, Turbo codes, wireless multimedia communication systems.
Pontrjagin Duality and Codes over Finite Commutative Rings
Authors: Khalid Abdelmoumen, Mustapha Najmeddine, Hussain Ben-Azza
Abstract:
We present linear codes over finite commutative rings which are not necessarily Frobenius. We treat the notion of syndrome decoding by using Pontrjagin duality. We also give a version of Delsarte's theorem over rings relating trace codes and subring subcodes.
Keywords: Codes, Finite Rings, Pontrjagin Duality, Trace Codes.
Simulation of Hamming Coding and Decoding for Microcontroller Radiation Hardening
Authors: Rehab I. Abdul Rahman, Mazhar B. Tayel
Abstract:
This paper presents a method of hardening the 8051 microcontroller that assures reliable operation in the presence of bit flips caused by radiation. To avoid such faults in the 8051 microcontroller, Hamming code protection was applied to its SRAM memory and registers. VHDL code was used to implement this Hamming code protection.
Keywords: Radiation, hardening, bit flip, Hamming code.
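As a software illustration of the protection scheme (the actual design is in VHDL), here is a Hamming(7,4) encoder and single-bit-correcting decoder for one nibble; the bit ordering is one common convention and is an assumption of this sketch.

```python
def hamming74_encode(d):
    """Encode 4 data bits [d1, d2, d3, d4] into 7 bits, with parity bits at
    positions 1, 2 and 4 (1-indexed) so the syndrome directly gives the error position."""
    d1, d2, d3, d4 = d
    p1 = d1 ^ d2 ^ d4          # covers positions 3, 5, 7
    p2 = d1 ^ d3 ^ d4          # covers positions 3, 6, 7
    p4 = d2 ^ d3 ^ d4          # covers positions 5, 6, 7
    return [p1, p2, d1, p4, d2, d3, d4]

def hamming74_decode(c):
    """Recompute the parities; a nonzero syndrome is the 1-indexed position of the flipped bit."""
    c = list(c)
    s1 = c[0] ^ c[2] ^ c[4] ^ c[6]
    s2 = c[1] ^ c[2] ^ c[5] ^ c[6]
    s4 = c[3] ^ c[4] ^ c[5] ^ c[6]
    syndrome = s1 + 2 * s2 + 4 * s4
    if syndrome:
        c[syndrome - 1] ^= 1                  # correct the single bit flip
    return [c[2], c[4], c[5], c[6]], syndrome

data = [1, 0, 1, 1]
word = hamming74_encode(data)
word[5] ^= 1                                  # radiation-induced bit flip
decoded, syndrome = hamming74_decode(word)
assert decoded == data
print("corrected a flip at bit position", syndrome)
```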
A New Hardware Implementation of Manchester Line Decoder
Authors: Ibrahim A. Khorwat, Nabil Naas
Abstract:
In this paper, we present a simple circuit for Manchester decoding that does not use any complicated or programmable devices. This circuit can decode transmitted encoded data at 90 kbps; higher transmission rates can be decoded if high-speed devices are used. We also present a new method for extracting the embedded clock from Manchester data in order to use it for serial-to-parallel conversion. All of our experimental measurements have been obtained by simulation.
Keywords: High threshold level, level segregation, low threshold level, smoothing circuit, synchronization.
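A small software model of Manchester encoding and decoding helps fix the idea; the convention used here (a 1 sent as a high-to-low half-bit pair) is one of the two common ones and is an assumption of this sketch.

```python
def manchester_encode(bits):
    """Each bit becomes two half-bit levels: 1 -> (1, 0), 0 -> (0, 1)."""
    out = []
    for b in bits:
        out += [1, 0] if b else [0, 1]
    return out

def manchester_decode(levels):
    """Pair up half-bits; the direction of the mid-bit transition recovers the data.
    That guaranteed mid-bit transition is also what lets a receiver extract the clock."""
    if len(levels) % 2:
        raise ValueError("odd number of half-bit samples")
    bits = []
    for first, second in zip(levels[0::2], levels[1::2]):
        if (first, second) == (1, 0):
            bits.append(1)
        elif (first, second) == (0, 1):
            bits.append(0)
        else:
            raise ValueError("invalid Manchester pair (no mid-bit transition)")
    return bits

data = [1, 0, 1, 1, 0]
line = manchester_encode(data)
assert manchester_decode(line) == data
print("line levels:", line)
```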
A Multi-Signature Scheme based on Coding Theory
Authors: Mohammed Meziani, Pierre-Louis Cayrel
Abstract:
In this paper we propose the first two non-generic constructions of a multisignature scheme based on coding theory. The first system makes use of the CFS signature scheme and is secure in the random oracle model, while the second scheme is based on the KKS construction and is a few-time signature scheme. The security of our constructions relies on a difficult problem in coding theory: the Syndrome Decoding problem, which has been proved NP-complete [4].
Keywords: Post-quantum cryptography, coding-based cryptography, digital signature, multisignature scheme.
Encrypted Audio Communication Based On Synchronized Unified Chaotic Systems
Authors: C. Cruz-Hernández, E. Inzunza-González, R. M. López-Gutiérrez, H. Serrano-Guerrero, E. E. García-Guerrero
Abstract:
In this paper, encrypted audio communications based on the synchronization of coupled unified chaotic systems in a master-slave configuration are numerically studied. We transmit the encrypted audio messages over two insecure channels. The encoding, transmission, and decoding of audio messages in chaotic communication are presented.
Keywords: Audio encryption, chaos, synchronization.
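As a highly simplified, idealized stand-in for the master-slave scheme (no coupled-system synchronization dynamics are modeled), the sketch below masks audio samples with a logistic-map sequence and recovers them with an identically parameterized receiver; all parameters are illustrative.

```python
import numpy as np

def logistic_sequence(n, x0=0.3141, r=3.99):
    """Chaotic masking sequence from the logistic map x <- r*x*(1-x)."""
    x, seq = x0, np.empty(n)
    for i in range(n):
        x = r * x * (1 - x)
        seq[i] = x - 0.5                 # roughly zero-mean mask
    return seq

# "Audio" message: a short sine burst, masked additively by the chaotic signal.
t = np.linspace(0, 0.02, 160)
message = 0.05 * np.sin(2 * np.pi * 440 * t)
transmitted = message + logistic_sequence(len(t))      # travels over the insecure channel

# Idealized receiver: a perfectly synchronized slave regenerates the same mask.
recovered = transmitted - logistic_sequence(len(t))
print("maximum recovery error:", np.max(np.abs(recovered - message)))
```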