Similar Literature
A total of 20 similar documents were retrieved.
1.
A probabilistic hard-decision decoding algorithm based on the weight distribution of binary block codes and the random error distribution in the channel is briefly described. It reduces the number of look-up iterations performed in the conventional exhaustive-search table look-up minimum-distance decoder.

2.
An iterative trellis search technique is described for the maximum-likelihood (ML) soft-decision decoding of block codes. The proposed technique derives its motivation from the fact that a given block code may be a subcode of a parent code whose associated trellis has substantially fewer edges. Through the use of list-Viterbi (1967) decoding and an iterative algorithm, the proposed technique allows for the use of a trellis for the parent code in the ML decoding of the desired subcode. Complexity and performance analyses, as well as details of potential implementations, indicate a substantial reduction in decoding complexity for linear block codes of practical length while achieving ML or near-ML soft-decision performance.

3.
The A* algorithm is applied to maximum-likelihood soft-decision decoding of binary linear block codes. This paper gives a tutorial on the A* algorithm, compares the decoding complexity with that of exhaustive-search and Viterbi decoding algorithms, and presents performance curves obtained for several codes.
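To make the search principle concrete, the following Python fragment is an editorial sketch, not the authors' algorithm (which uses a much sharper admissible heuristic). It performs best-first ML decoding over the information bits of a systematic code; with the trivial heuristic h = 0 it reduces to uniform-cost branch-and-bound but still returns the ML codeword. The function name and interface are hypothetical.

    import heapq
    import numpy as np

    def astar_ml_decode(y, G):
        """Best-first search (A* with the trivial heuristic h = 0) for the ML codeword.
        y: received real vector, BPSK mapping 0 -> +1, 1 -> -1.
        G: k x n systematic generator matrix [I_k | P], so code bit i equals
        information bit i for i < k.
        ML decoding minimizes the correlation discrepancy: the sum of |y_i| over
        positions where the codeword disagrees with the hard decision on y_i."""
        k, n = G.shape
        hard = (y < 0).astype(int)              # hard decisions
        rel = np.abs(y)                         # reliabilities
        frontier = [(0.0, 0, ())]               # (cost so far, depth, info-bit prefix)
        best_cost, best_cw = np.inf, None
        while frontier:
            cost, depth, prefix = heapq.heappop(frontier)
            if cost >= best_cost:               # cheapest open node cannot improve: done
                break
            if depth == k:                      # complete information word
                u = np.array(prefix, dtype=int)
                c = u.dot(G) % 2
                full = cost + float(rel[k:].dot(c[k:] ^ hard[k:]))  # add parity discrepancy
                if full < best_cost:
                    best_cost, best_cw = full, c
                continue
            for b in (0, 1):                    # extend the prefix by one information bit
                step = rel[depth] if b != hard[depth] else 0.0
                heapq.heappush(frontier, (cost + step, depth + 1, prefix + (b,)))
        return best_cw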

4.
To decode a long block code with a large minimum distance by maximum-likelihood decoding is practically impossible because the decoding complexity is simply enormous. However, if a code can be decomposed into constituent codes with smaller dimensions and simpler structure, it is possible to devise a practical and yet efficient scheme to decode the code. This paper investigates a class of decomposable codes and their distance and structural properties. It is shown that this class includes several classes of well-known and efficient codes as subclasses. Several methods for constructing decomposable codes or decomposing codes are presented. A two-stage (soft-decision or hard-decision) decoding scheme for decomposable codes, their translates, or unions of translates is devised, and its error performance is analyzed for an AWGN channel. The two-stage soft-decision decoding is suboptimum. Error performances of some specific decomposable codes based on the proposed two-stage soft-decision decoding are evaluated. It is shown that the proposed two-stage suboptimum decoding scheme provides an excellent trade-off between error performance and decoding complexity for codes of moderate and long block length.

5.
BEAST is a bidirectional efficient algorithm for searching trees. In this correspondence, BEAST is extended to maximum-likelihood (ML) decoding of block codes obtained via convolutional codes. First it is shown by simulations that the decoding complexity of BEAST is significantly less than that of the Viterbi algorithm. Then asymptotic upper bounds on the BEAST decoding complexity for three important ensembles of codes are derived. They verify BEAST's high efficiency compared to other algorithms. For high rates, the new asymptotic bound for the best ensemble is in fact better than previously known bounds.

6.
An efficient soft-input soft-output iterative decoding algorithm for block turbo codes (BTCs) is proposed. The proposed algorithm utilizes Kaneko's (1994) decoding algorithm for soft-input hard-output decoding. These hard outputs are converted to soft decisions using reliability calculations. Three different schemes for reliability calculations incorporating different levels of approximation are suggested. The algorithm proposed here presents a major advantage over existing decoding algorithms for BTCs by providing ample flexibility in terms of the performance-complexity tradeoff. This makes the algorithm well suited for wireless multimedia applications. The algorithm can be used for optimal as well as suboptimal decoding. The suboptimal versions of the algorithm can be developed by changing a single parameter (the number of error patterns to be generated). For any performance, the computational complexity of the proposed algorithm is less than the computational complexity of similar existing algorithms. Simulation results for the decoding algorithm for different two-dimensional BTCs over an additive white Gaussian noise channel are shown. A performance comparison of the proposed algorithm with similar existing algorithms is also presented.

7.
Near-optimum decoding of product codes: block turbo codes
This paper describes an iterative decoding algorithm for any product code built using linear block codes. It is based on soft-input/soft-output decoders for decoding the component codes so that near-optimum performance is obtained at each iteration. This soft-input/soft-output decoder is a Chase decoder which delivers soft outputs instead of binary decisions. The soft output of the decoder is an estimate of the log-likelihood ratio (LLR) of the binary decisions given by the Chase decoder. The theoretical justifications of this algorithm are developed and the method used for computing the soft output is fully described. The iterative decoding of product codes is also known as block turbo code (BTC) decoding because the concept is quite similar to that of turbo codes based on iterative decoding of concatenated recursive convolutional codes. The performance of different Bose-Chaudhuri-Hocquenghem (BCH) BTCs is given for the Gaussian and the Rayleigh channel. Performance on the Gaussian channel indicates that data transmission within 0.8 dB of Shannon's limit, or at more than 98% (R/C > 0.98) of channel capacity, can be achieved with a high-code-rate BTC using only four iterations. For the Rayleigh channel, the slope of the bit-error rate (BER) curve is as steep as for the Gaussian channel without using channel state information.
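As a minimal sketch of the kind of soft-output computation described above (it follows the commonly cited Pyndiah-style formulation, with the usual normalization by 4 and a beta fallback when no competing codeword exists; the function name and interface are hypothetical, not the paper's exact procedure):

    import numpy as np

    def chase_soft_output(r, candidates, beta=0.5):
        """Per-bit soft output (approximate LLR) from the candidate codewords
        produced by a Chase search.
        r: received vector, BPSK mapping 0 -> +1, 1 -> -1.
        candidates: list of candidate codewords as +/-1 arrays.
        beta: reliability used when no competing codeword is found; the real
        algorithm adapts this factor over the iterations."""
        cands = np.array(candidates, dtype=float)
        metrics = np.sum((r - cands) ** 2, axis=1)      # squared Euclidean distances
        d = cands[np.argmin(metrics)]                   # decision: closest candidate
        soft = np.empty(len(r), dtype=float)
        for j in range(len(r)):
            competing = metrics[cands[:, j] != d[j]]    # candidates disagreeing at bit j
            if competing.size:
                # metric difference between the best competitor and the decision,
                # signed by the decided bit
                soft[j] = d[j] * (competing.min() - metrics.min()) / 4.0
            else:
                soft[j] = d[j] * beta                   # no competitor: fall back to beta
        return d, soft

In the iterative (block turbo) setting, the extrinsic part of this soft output (soft output minus the decoder's own soft input) is what gets passed to the decoder of the other component code.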

8.
《Electronics letters》2003,39(20):1453-1455
Iterative decoding of space-time block codes (STBCs) and channel codes has been shown to achieve very good performance. Using tools such as EXIT charts, the performance can be predicted and also used for design. However, almost all work so far has concentrated on linear STBCs. Therefore, nonlinear STBCs that are designed to work well with iterative decoding are investigated. It is shown that they can outperform linear STBCs since their characteristics are quite different when supplied with a priori information from a channel decoder. Analysis, design rules and simulation results for the nonlinear codes are presented.

9.
A sphere decoder searches for the closest lattice point within a certain search radius. The search radius provides a tradeoff between performance and complexity. We focus on analyzing the performance of sphere decoding of linear block codes. We analyze the performance of soft-decision sphere decoding on AWGN channels with a variety of modulation schemes. A hard-decision sphere decoder is a bounded-distance decoder with the corresponding decoding radius. We analyze the performance of hard-decision sphere decoding on binary and q-ary symmetric channels. An upper bound on the performance of maximum-likelihood decoding of linear codes defined over Fq (e.g. Reed-Solomon codes) and transmitted over q-ary symmetric channels is derived and used in the analysis. We then discuss sphere decoding of general block codes or lattices with arbitrary modulation schemes. The tradeoff between the performance and complexity of a sphere decoder is then discussed.
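For the hard-decision case, the bounded-distance behaviour admits a standard closed form (textbook material, stated here for orientation rather than quoted from the paper): a decoder with decoding radius $t \le \lfloor (d-1)/2 \rfloor$ recovers the transmitted codeword exactly when at most $t$ symbol errors occur, so for a length-$n$ code on a binary or $q$-ary symmetric channel with symbol error probability $p$ the probability of decoding error or failure is

$$P_{\mathrm{fail}} \;=\; \sum_{i=t+1}^{n} \binom{n}{i}\, p^{\,i} (1-p)^{\,n-i}.$$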

10.
Soft-decision decoding of a block code is treated as the problem of estimating the "soft" values of the information symbols by the maximum-likelihood method. The information-symbol values are estimated under conditions in which the continuous output levels of the discriminator (demodulator) are given an appropriate interpretation that takes the coding rule into account.

11.
Space-time codes can be decoded by the sphere decoding (SD) algorithm to reduce complexity while retaining maximum-likelihood (ML) performance. In this letter, the ML metric of quasi-orthogonal space-time block codes is decomposed into two independent Euclidean norms, so SD can be applied to each norm independently. The new scheme reduces the complexity by at least 85% for systems with four or more transmit antennas, compared with the conventional SD algorithm.

12.
By the use of abstract Fourier analysis on groups, the optimum mean-square-error decoding rule is developed for a fixed block code. The optimum one-to-one coding rule to be used with this optimum decoder is derived, and a procedure for simultaneous optimization over both encoding and decoding rules is given. It is shown that there is a linear encoding rule which is optimum. A system which implements the optimum decoding rule is outlined. The major difference between this work and others involving coding for mean-square error is that the decoding rule developed here is a mapping from binary n-tuples directly into the real numbers, with the optimization being over all possible mappings into the real numbers. As such, the system developed here replaces both the error-correction and digital-to-analog conversion components used in most numerical data transmission systems.
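For orientation (a standard fact about mean-square-error estimation, not a formula quoted from the paper): among all maps $g$ from received binary $n$-tuples $\mathbf{r}$ to the real numbers, the mean-square error $\mathbb{E}\big[(x - g(\mathbf{r}))^2\big]$ is minimized by the conditional mean,

$$g^{*}(\mathbf{r}) \;=\; \mathbb{E}[\,x \mid \mathbf{r}\,] \;=\; \sum_{x} x\, P(x \mid \mathbf{r}),$$

which is the benchmark that an optimization over all mappings into the reals implicitly targets.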

13.
One-step majority-logic decoding is one of the simplest algorithms for decoding cyclic block codes. However, it is an effective decoding scheme for very few codes. This paper presents a generalization based on the “common-symbol decoding problem.” Suppose one is given M (possibly corrupted) codewords from M (possibly different) codes over the same field; suppose further that the codewords share a single symbol in common. The common-symbol decoding problem is that of estimating the symbol in the common position. This is equivalent to one-step majority-logic decoding when each of the “constituent” codes is a simple parity check. This paper formulates conditions under which this decoding is possible and presents a simple algorithm that accomplishes it. When applied to decoding cyclic block codes, this technique yields a decoder structure ideal for parallel implementation. Furthermore, this approach frequently results in a decoder capable of correcting more errors than one-step majority-logic decoding. To demonstrate the simplicity of the resulting decoders, an example is presented.
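For reference, classical one-step majority-logic decoding of a single bit can be sketched in a few lines of Python (an editorial illustration of the baseline scheme, not the generalized common-symbol algorithm of the paper; names are hypothetical). Given J parity checks orthogonal on position i, it corrects up to floor(J/2) errors:

    import numpy as np

    def majority_logic_bit(r, checks, i):
        """One-step majority-logic estimate of bit i of a hard-decision word r.
        r: numpy array of 0/1 values.
        checks: list of parity checks orthogonal on position i, i.e. each check
        is a list of positions that must sum to 0 mod 2, every check contains i,
        and no other position appears in more than one check."""
        votes = [np.sum(r[check]) % 2 for check in checks]   # failed checks vote "error at i"
        flip = sum(votes) > len(votes) / 2                   # strict majority triggers a flip
        return int(r[i]) ^ int(flip)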

14.
In this letter, the SNR value at which the error performance curve of a soft-decision maximum-likelihood decoder reaches the slope corresponding to the code minimum distance is determined for a random code. Based on this value, referred to as the critical point, new insight into soft bounded-distance decoding of random-like codes (and particularly Reed-Solomon codes) is provided.

15.
Lee L.H.C., Lee L.W. 《Electronics letters》1994, 30(14): 1120-1121
A novel decoding technique for linear block codes with coherent BPSK signals is proposed. The new system has the same error performance as, and similar complexity to, the conventional trellis decoding of block codes. Like the scarce-state-transition Viterbi decoding of convolutional codes, the proposed system is well suited for CMOS VLSI implementation and has lower power consumption.

16.
Iterative decoding of binary block and convolutional codes
Iterative decoding of two-dimensional systematic convolutional codes has been termed “turbo” (de)coding. Using log-likelihood algebra, we show that any decoder can be used which accepts soft inputs (including a priori values) and delivers soft outputs that can be split into three terms: the soft channel and a priori inputs, and the extrinsic value. The extrinsic value is used as an a priori value for the next iteration. Decoding algorithms in the log-likelihood domain are given not only for convolutional codes but also for any linear binary systematic block code. The iteration is controlled by a stop criterion derived from cross entropy, which results in a minimal number of iterations. Optimal and suboptimal decoders with reduced complexity are presented. Simulation results show that very simple component codes are sufficient: block codes are appropriate for high rates and convolutional codes for lower rates, less than 2/3. Any combination of block and convolutional component codes is possible. Several interleaving techniques are described. At a bit error rate (BER) of 10^-4 the performance is slightly above or around the bounds given by the cutoff rate for reasonably simple block/convolutional component codes, interleaver sizes less than 1000, and three to six iterations.
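The three-term split of the soft output is commonly written as follows (standard turbo-decoding notation; $L_c$ is the channel reliability factor for BPSK on an AWGN channel):

$$L(\hat{u}_k) \;=\; \underbrace{L_c\, y_k}_{\text{soft channel input}} \;+\; \underbrace{L_A(u_k)}_{\text{a priori input}} \;+\; \underbrace{L_E(\hat{u}_k)}_{\text{extrinsic value}}, \qquad L_c = 4\,\frac{E_s}{N_0},$$

and the extrinsic term $L_E(\hat{u}_k)$ of one component decoder becomes the a priori input $L_A(u_k)$ of the other in the next iteration.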

17.
Trellis structures of block codes are discussed. L-section trellis structures of some BCH codes are presented, and a fast maximum-likelihood decoding algorithm for BCH codes is proposed accordingly. The decoding problem of q-ary images of q^m-ary block codes is also discussed. The direct-sum partition and the associated decoding algorithms are given for the images.

18.
A new proof is presented for the existence of block codes whose error probability under maximum-likelihood decoding is bounded asymptotically by the random coding bound universally over all discrete memoryless channels. On the basis of this result, the existence of convolutional codes with universally optimum performance is shown. Furthermore, the existence of block codes which attain the expurgated bound universally over all discrete memoryless channels is proved under the use of maximum-likelihood decoding.

19.
This work introduces a novel approach to increasing the performance of block turbo codes (BTCs). The idea is to use a Hamming threshold to limit the search for the maximum-likelihood (ML) codeword to only those codewords that lie within this threshold. The proposed iterative decoding approach is shown to offer both significant coding gain and complexity reduction over standard iterative decoding methods.
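A minimal sketch of the thresholding idea (an editorial illustration using a brute-force candidate loop purely to show the threshold test; the names and interface are hypothetical, not the authors' implementation):

    import numpy as np

    def ml_within_hamming_threshold(y, codewords, threshold):
        """Restrict the ML search to candidates whose Hamming distance to the
        hard-decision word is at most `threshold`, then return the candidate
        with the smallest Euclidean distance to the soft received vector y
        (BPSK mapping 0 -> +1, 1 -> -1)."""
        hard = (y < 0).astype(int)
        best, best_d2 = None, np.inf
        for c in codewords:
            c = np.asarray(c, dtype=int)
            if np.count_nonzero(c ^ hard) > threshold:   # outside the Hamming sphere
                continue
            x = 1.0 - 2.0 * c                            # BPSK-modulate the candidate
            d2 = np.sum((y - x) ** 2)
            if d2 < best_d2:
                best, best_d2 = c, d2
        return best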

20.
《Microelectronics Reliability》2014,54(11):2645-2648
The interest in using advanced error correction codes (ECCs) to protect memories and caches is growing. This is because, as process technology downscales, errors are more frequent and also tend to affect multiple bits. For SRAM memories and caches, latency is a limiting factor, and ECCs have to provide low decoding times that can in most cases only be achieved with a parallel decoder. One important issue with parallel decoders is that they typically require a large circuit area. One type of ECC that has been explored for memory protection is Difference Set (DS) codes. In this research note, an optimized parallel decoding scheme for DS codes is presented and evaluated. The results show that the circuit area and the decoding delay are reduced compared to a traditional implementation. In addition, the new scheme enables a reduction in the number of parity check bits, thus reducing the memory size.
