Similar Documents
20 similar documents retrieved (search time: 31 ms)
1.
Near-optimum decoding of product codes: block turbo codes (cited 2 times: 0 self-citations, 2 by others)
This paper describes an iterative decoding algorithm for any product code built using linear block codes. It is based on soft-input/soft-output decoders for decoding the component codes so that near-optimum performance is obtained at each iteration. This soft-input/soft-output decoder is a Chase decoder which delivers soft outputs instead of binary decisions. The soft output of the decoder is an estimation of the log-likelihood ratio (LLR) of the binary decisions given by the Chase decoder. The theoretical justifications of this algorithm are developed and the method used for computing the soft output is fully described. The iterative decoding of product codes is also known as block turbo coding (BTC) because the concept is quite similar to turbo codes based on iterative decoding of concatenated recursive convolutional codes. The performance of different Bose-Chaudhuri-Hocquenghem (BCH) BTCs is given for the Gaussian and the Rayleigh channel. Performance on the Gaussian channel indicates that data transmission within 0.8 dB of Shannon's limit, or at more than 98% (R/C > 0.98) of channel capacity, can be achieved with high-code-rate BTCs using only four iterations. For the Rayleigh channel, the slope of the bit-error rate (BER) curve is as steep as for the Gaussian channel without using channel state information.
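The soft-output step described in this abstract can be illustrated with a short sketch. This is a minimal illustration, not the paper's implementation: it assumes BPSK mapping (bit 0 → +1, bit 1 → −1), takes the candidate codewords produced by a Chase search as given, and uses a placeholder fallback reliability `beta` for bits that have no competing candidate in the list.

```python
import numpy as np

def soft_output(r, candidates, beta=0.5):
    """r: received real vector; candidates: candidate codewords in {+1, -1} from a Chase search."""
    r = np.asarray(r, dtype=float)
    cands = np.asarray(candidates, dtype=float)
    metrics = np.sum((r - cands) ** 2, axis=1)       # squared Euclidean distance of each candidate
    best = int(np.argmin(metrics))
    d, m_d = cands[best], metrics[best]              # decision codeword and its metric
    soft = np.empty_like(r)
    for j in range(len(r)):
        rivals = metrics[cands[:, j] != d[j]]        # candidates that disagree with the decision on bit j
        if rivals.size:                              # competitor found: metric difference approximates the LLR
            soft[j] = d[j] * (rivals.min() - m_d) / 4.0
        else:                                        # no competitor in the list: saturate with beta
            soft[j] = d[j] * beta
    return d, soft
```

The extrinsic information passed to the next half-iteration would then be the soft output minus the soft input, as in the usual block turbo decoding loop.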

2.
Soft-decision decoding of binary linear block codes transmitted over the additive white Gaussian noise (AWGN) channel using antipodal signaling is considered. A set of decoding algorithms called generalized Chase algorithms is proposed. In contrast to the Chase algorithms, which require a ⌊(d−1)/2⌋-error-correcting binary decoder for decoding a binary linear block code of minimum distance d, the generalized Chase algorithms can use a binary decoder that corrects fewer than ⌊(d−1)/2⌋ hard errors. The Chase algorithms are particular cases of the generalized Chase algorithms. The performance of all proposed algorithms is asymptotically optimum at high signal-to-noise ratio (SNR). Simulation results for the (47, 23) quadratic residue code indicate that even at low SNR the performance of a maximum-likelihood decoder can be approached by a relatively simple decoding procedure.
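As a rough illustration of the trial-decoding structure (not the exact generalized Chase construction), the sketch below flips subsets of the t least reliable positions — the classic Chase-2 pattern set, chosen here only for simplicity — and hands each trial word to a pluggable bounded-distance decoder `hd_decode`, which may correct fewer than ⌊(d−1)/2⌋ errors. Both `hd_decode` and the pattern set are assumptions for illustration.

```python
from itertools import combinations
import numpy as np

def chase_trials(r, hd_decode, t):
    """r: received real vector (BPSK); hd_decode: maps a binary word to a codeword or None."""
    r = np.asarray(r, dtype=float)
    hard = (r < 0).astype(int)                     # hard decisions: negative sample -> bit 1
    least_reliable = np.argsort(np.abs(r))[:t]     # the t positions with the smallest |r|
    best, best_metric = None, np.inf
    for k in range(t + 1):
        for pos in combinations(least_reliable, k):
            trial = hard.copy()
            trial[list(pos)] ^= 1                  # flip this test pattern
            cw = hd_decode(trial)                  # bounded-distance decoding attempt
            if cw is None:
                continue
            metric = np.sum((r - (1 - 2 * cw)) ** 2)   # Euclidean distance to the candidate
            if metric < best_metric:
                best, best_metric = cw, metric
    return best
```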

3.
In this paper we investigate a generalization of Gallager's (1963) low-density (LD) parity-check codes in which single-error-correcting Hamming codes are used as component codes instead of single-error-detecting parity checks. It is proved that there exist such generalized low-density (GLD) codes whose minimum distance grows linearly with the block length, and a lower bound on the minimum distance is given. We also study iterative decoding of GLD codes for communication over an additive white Gaussian noise channel. The performance in terms of bit error rate, obtained by computer simulations, is presented for GLD codes of different lengths.
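As a concrete illustration of the GLD idea, the sketch below assembles a parity-check matrix from (7,4) Hamming component checks placed block-diagonally in two levels, with the second level applied through a random column permutation. The two-level layout and the random interleaver are illustrative assumptions, not the paper's exact construction.

```python
import numpy as np

H_HAMMING = np.array([[1, 0, 1, 0, 1, 0, 1],
                      [0, 1, 1, 0, 0, 1, 1],
                      [0, 0, 0, 1, 1, 1, 1]], dtype=int)   # (7,4) Hamming component checks

def gld_parity_check(num_blocks, seed=0):
    rng = np.random.default_rng(seed)
    n = 7 * num_blocks
    level = np.kron(np.eye(num_blocks, dtype=int), H_HAMMING)  # block-diagonal Hamming checks
    perm = rng.permutation(n)                                  # random interleaver between levels
    return np.vstack([level, level[:, perm]])                  # stack the two levels

H = gld_parity_check(num_blocks=10)   # a 60 x 70 GLD-style parity-check matrix
print(H.shape)
```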

4.
We consider the decoding problem for low-density parity-check (LDPC) codes and apply nonlinear programming methods. This extends previous work using linear programming (LP) to decode linear block codes. First, a multistage LP decoder based on the branch-and-bound method is proposed. This decoder makes use of the maximum-likelihood-certificate property of the LP decoder to refine the results when an error is reported. Second, we transform the original LP decoding formulation into a box-constrained quadratic programming form. Efficient linear-time parallel and serial decoding algorithms are proposed and their convergence properties are investigated. Extensive simulation studies are performed to assess the performance of the proposed decoders. The proposed multistage LP decoder outperforms the conventional sum-product (SP) decoder considerably for LDPC codes with short to medium block length. The proposed box-constrained quadratic programming decoder has lower complexity than the SP decoder and yields much better performance for LDPC codes with regular structure.
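To make the "box-constrained quadratic programming form" concrete, here is a generic projected-gradient sketch for min ½xᵀQx + cᵀx subject to 0 ≤ x ≤ 1. The LDPC-specific choice of Q and c and the paper's parallel/serial update schedules are not reproduced; the fixed step size is an arbitrary placeholder.

```python
import numpy as np

def box_qp(Q, c, step=0.1, max_iter=200, tol=1e-6):
    """Projected gradient descent for a quadratic objective over the unit box."""
    x = np.full(len(c), 0.5)                        # start at the centre of the box
    for _ in range(max_iter):
        grad = Q @ x + c                            # gradient of 0.5*x'Qx + c'x
        x_new = np.clip(x - step * grad, 0.0, 1.0)  # gradient step, then projection onto the box
        if np.linalg.norm(x_new - x) < tol:
            return x_new
        x = x_new
    return x
```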

5.
Turbo product codes (TPCs) provide an attractive alternative to recursive systematic convolutional (RSC)-based turbo systems. Rather than employing trellis-based decoders, an algebraic decoder may be repeatedly invoked within a low-complexity, soft-input/soft-output errors-and-erasures decoder such as the Chase algorithm. Taking motivation from efficient forced-erasure decoders, this implementation reorders the Chase algorithm's repeated decodings so that the inherent computational redundancy is greatly reduced without degrading performance. The result is a highly efficient fast Chase implementation. The algorithm presented here is principally applicable to single-error-correcting codes, although consideration is also given to the more general case. The new decoder's value in practical turbo schemes is demonstrated via application to decoding of the (64,57,4) extended Hamming TPC.

6.
We consider a class of message-passing decoders for low-density parity-check (LDPC) codes whose messages are binary valued. We prove that if the channel is symmetric and all codewords are equally likely to be transmitted, an optimum decoding rule (in the sense of minimizing message error rate) should satisfy certain symmetry and isotropy conditions. Using this result, we prove that Gallager's Algorithm B achieves the optimum decoding threshold among all binary message-passing decoding algorithms for regular codes. For irregular codes, we argue that when the nodes of the message-passing decoder do not exploit knowledge of their decoding neighborhood, optimality of Gallager's Algorithm B is preserved. We also consider the problem of designing irregular LDPC codes and find a bound on the achievable rates with Gallager's Algorithm B. Using this bound, we study the case of low error-rate channels and analytically find good degree distributions for them.
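For readers unfamiliar with the binary message-passing rule discussed here, below is a compact sketch of Gallager's Algorithm B. It uses a dense numpy parity-check matrix for readability (a practical decoder would use sparse adjacency lists) and keeps the flipping threshold `b` fixed, whereas the algorithm in general may vary it with the iteration number; the majority-vote tentative decision is one common choice, not the only one.

```python
import numpy as np

def gallager_b(H, y, b, max_iter=50):
    """H: (m, n) binary parity-check matrix; y: hard-decision received word in {0, 1}."""
    H = np.asarray(H, dtype=int)
    y = np.asarray(y, dtype=int)
    v2c = H * y                                      # variable-to-check messages, initially the channel bits
    x_hat = y.copy()
    for _ in range(max_iter):
        # Check update: XOR of the row's incoming messages, excluding the target edge.
        row_xor = v2c.sum(axis=1) % 2
        c2v = H * ((row_xor[:, None] + v2c) % 2)
        # Count, for every edge (i, j), how many *other* checks disagree with y_j.
        disagree = H * (c2v != y)                    # 1 where a connected check votes against y_j
        total = disagree.sum(axis=0)                 # disagreements per variable
        others = total - disagree                    # exclude the target check from the count
        # Variable update: send the complement of y_j if at least b other checks disagree.
        v2c = H * np.where(others >= b, 1 - y, y)
        # Tentative decision: flip y_j when a majority of its checks disagree.
        x_hat = np.where(total > H.sum(axis=0) / 2, 1 - y, y)
        if not ((H @ x_hat) % 2).any():              # stop once the syndrome is zero
            break
    return x_hat
```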

7.
The complexity of algorithms for soft-decision decoding of block codes has impeded their inclusion in practical systems. A well-known class of algorithms for decoding block codes using channel measurement information along with the algebraic properties of the code is the Chase algorithms. In this paper a decoding method similar to Chase's third algorithm is presented. However, in this method, a single test pattern or alternate codeword makes up one stage of the decoder. The method uses information from the previous decoding(s) to assist in generating a test pattern. This single-stage 'Second Chance Algorithm' can then be extended to a 'Third Chance Algorithm' (and beyond) to enhance performance. The method does not invoke the hard-decision decoder as often as the Chase algorithms.

8.
In this paper, the construction of new families of error-correcting codes adapted from Boutros and Lentmaier's generalized low-density (GLD) codes, which we call generalized irregular low-density (GILD) codes, is investigated. By introducing irregular base matrices, significant improvements over Boutros and Lentmaier's results are achieved. Two cases are considered: in the first case, the same constituent code is used, and in the second case, different constituent codes are used. It is proved by an ensemble performance argument that these codes exist and are asymptotically good in the sense of the minimum-distance criterion. It is also proved that the performance of GILD codes approaches the binary symmetric channel capacity limit. Iterative decoding of GILD codes for communication over an additive white Gaussian noise channel is also studied. The high flexibility in selecting the parameters of GILD codes, together with their better performance and higher rate, makes them more attractive than GLD codes and hence suitable for forward-error-correction schemes with small and medium block lengths. Comparisons of simulation results between some GILD codes and some good low-density parity-check codes show very close performance.

9.
Efficient hardware implementation of low-density parity-check (LDPC) codes is of great interest since LDPC codes are being considered for a wide range of applications. Recently, overlapped message passing (OMP) decoding has been proposed to improve the throughput and hardware utilization efficiency (HUE) of decoder architectures for LDPC codes. In this paper, we first study the scheduling for the OMP decoding of LDPC codes, and show that maximizing the throughput gain amounts to minimizing the intra- and inter-iteration waiting times. We then focus on the OMP decoding of quasi-cyclic (QC) LDPC codes. We propose a partly parallel OMP decoder architecture and implement it using FPGA. For any QC LDPC code, our OMP decoder achieves the maximum throughput gain and HUE due to overlapping, hence has higher throughput and HUE than previously proposed OMP decoders while maintaining the same hardware requirements. We also show that the maximum throughput gain and HUE achieved by our OMP decoder are ultimately determined by the given code. Thus, we propose a coset-based construction method, which results in QC LDPC codes that allow our optimal OMP decoder to achieve higher throughput and HUE.

10.
Turbo product codes (TPCs) are forward error-correction codes with excellent performance: their decoding complexity is low, and near-optimal performance can be obtained at low signal-to-noise ratios. This paper introduces a Chase-algorithm-based soft-input/soft-output (SISO) iterative decoding algorithm for turbo product codes, presents a TPC decoder design based on the VHDL hardware description language, and simulates and verifies it on an FPGA chip. Simulation results show that the decoder is highly practical and flexible.

11.
Layered approximately regular (LAR) low-density parity-check (LDPC) codes are proposed, with which one single pair of encoder and decoder support various code lengths and code rates. The parity check matrices of LAR-LDPC codes have a "layer-block-cell" structure with some additional constraints. An encoder architecture is then designed for LAR-LDPC codes, by making two improvements to the Richardson-Urbanke approach: the forward substitution operation is entirely removed and the dense-matrix-vector multiplication is handled using feedback shift-registers. A partially parallel decoder architecture is also designed for LAR-LDPC codes, where a layered modified min-sum decoding algorithm is used to trade off among complexity, speed, and performance. More importantly, the interconnection network, which is inevitable for partially parallel decoders, has much lower hardware complexity compared with that for general LDPC codes. Both the encoder and decoder architectures are highly flexible in code length and code rate.

12.
This paper discusses the Slepian-Wolf problem of distributed near-lossless compression of correlated sources. We introduce practical new tools for communicating at all rates in the achievable region. The technique employs a simple "source-splitting" strategy that does not require common sources of randomness at the encoders and decoders. This approach allows for pipelined encoding and decoding so that the system operates with the complexity of a single-user encoder and decoder. Moreover, when this splitting approach is used in conjunction with iterative decoding methods, it produces a significant simplification of the decoding process. We demonstrate this approach for synthetically generated data. Finally, we consider the Slepian-Wolf problem when linear codes are used as syndrome-formers and consider a linear programming relaxation of maximum-likelihood (ML) sequence decoding. We note that the fractional vertices of the relaxed polytope compete with the optimal solution in a manner analogous to that observed when the "min-sum" iterative decoding algorithm is applied. This relaxation exhibits the ML-certificate property: if an integral solution is found, it is the ML solution. For symmetric binary joint distributions, we show that selecting easily constructible "expander"-style low-density parity-check codes (LDPCs) as syndrome-formers admits a positive error exponent and therefore provably good performance.
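The syndrome-former view mentioned at the end of this abstract can be illustrated with a toy example: the encoder transmits only s = Hx, and the decoder, holding correlated side information y, returns the member of the coset {x : Hx = s} closest to y. The (7,4) Hamming code and the brute-force coset search below are illustrative stand-ins for the paper's LDPC syndrome-formers and iterative/LP decoding.

```python
import itertools
import numpy as np

H = np.array([[1, 0, 1, 0, 1, 0, 1],
              [0, 1, 1, 0, 0, 1, 1],
              [0, 0, 0, 1, 1, 1, 1]])              # (7,4) Hamming code used as a toy syndrome-former

def sw_encode(x):
    return (H @ x) % 2                             # a 3-bit syndrome replaces the 7-bit source block

def sw_decode(s, y):
    """Pick the coset member closest (in Hamming distance) to the side information y."""
    best, best_dist = None, np.inf
    for bits in itertools.product([0, 1], repeat=H.shape[1]):   # brute force over all 2^7 words
        x = np.array(bits)
        if np.array_equal((H @ x) % 2, s):
            dist = int(np.sum(x != y))
            if dist < best_dist:
                best, best_dist = x, dist
    return best
```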

13.
We show that for low-density parity-check (LDPC) codes whose Tanner graphs have sufficient expansion, the linear programming (LP) decoder of Feldman, Karger, and Wainwright can correct a constant fraction of errors. A random graph will have sufficient expansion with high probability, and recent work shows that such graphs can be constructed efficiently. A key element of our method is the use of a dual witness: a zero-valued dual solution to the decoding linear program whose existence proves decoding success. We show that as long as no more than a certain constant fraction of the bits are flipped by the channel, we can find a dual witness. This new method can be used for proving bounds on the performance of any LP decoder, even in a probabilistic setting. Our result implies that the word error rate of the LP decoder decreases exponentially in the code length under the binary-symmetric channel (BSC). This is the first such error bound for LDPC codes using an analysis based on "pseudocodewords." Recent work by Koetter and Vontobel shows that LP decoding and min-sum decoding of LDPC codes are closely related by the "graph cover" structure of their pseudocodewords; in their terminology, our result implies that there exist families of LDPC codes for which the minimum BSC pseudoweight grows linearly in the block length.

14.
In this letter, we propose an efficient decoding algorithm for turbo product codes as introduced by Pyndiah. The proposed decoder has no performance degradation and reduces the complexity of the original decoder by an order of magnitude. We concentrate on extended Bose-Chaudhuri-Hocquenghem codes as the constituent row and column codes because of their already low implementation complexity. For these component codes, we observe that the weight and reliability factors can be fixed, and that there is no need for normalization. Furthermore, as opposed to previous efficient decoders, the newly proposed decoder naturally scales with a test-pattern parameter p that can change as a function of iteration number, i.e., the efficient Chase algorithm presented here uses conventionally ordered test patterns, and the syndromes, even parities, and extrinsic metrics are obtained with a minimum number of operations.
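The overall iteration loop of a Pyndiah-style product decoder with a fixed weighting factor, as advocated here, can be sketched as follows. `chase_siso` is a placeholder for a soft-in/soft-out Chase decoder applied along rows or columns (it is not defined here), and the value alpha = 0.5 and the iteration count are illustrative placeholders, not the letter's parameters.

```python
import numpy as np

def tpc_decode(R, chase_siso, alpha=0.5, iterations=4):
    """R: (n, n) received real matrix of a square product code (BPSK: bit 0 -> +1, bit 1 -> -1)."""
    W = np.zeros_like(R)                       # extrinsic information, initially zero
    for _ in range(iterations):
        A = R + alpha * W                      # soft input for the column half-iteration
        W = chase_siso(A, axis=0) - A          # keep only the extrinsic part of the soft output
        A = R + alpha * W
        W = chase_siso(A, axis=1) - A          # row half-iteration
    return ((R + W) < 0).astype(int)           # final hard decisions (bit 1 where negative)
```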

15.
Distance based adaptive scaling in suboptimal iterative decoding (cited 1 time: 0 self-citations, 1 by others)
This article develops an alternative adaptive iterative decoding algorithm, based on the Chase (1972) algorithm, for block turbo/product codes. The decoder considers only a small subset of codewords, so that estimates of the extrinsic information are required in some cases. This article develops such an estimate based on code distance properties.

16.
In this letter, a stopping criterion using the error-detecting capability of linear block codes is proposed for the decoding of turbo product codes. The iterative decoding is stopped when the outputs from the Chase decoder are valid codewords for all rows and columns simultaneously. Simulation shows that the proposed method can save about one and a half iterations compared with an existing stopping method, without noticeable BER performance loss. A modification that may further reduce the decoding complexity is also discussed.
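A minimal sketch of the stopping test described here: iteration halts as soon as the current hard decisions form valid codewords along every row and every column simultaneously. `H_row` and `H_col` are placeholder names for the parity-check matrices of the row and column component codes.

```python
import numpy as np

def all_codewords(hard_bits, H_row, H_col):
    """hard_bits: (n_col, n_row) binary array of current decisions;
    H_row / H_col: parity-check matrices of the row and column component codes."""
    rows_ok = not ((hard_bits @ H_row.T) % 2).any()   # every row satisfies the row-code checks
    cols_ok = not ((H_col @ hard_bits) % 2).any()     # every column satisfies the column-code checks
    return rows_ok and cols_ok
```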

17.
Decoding Algorithms for Nonbinary LDPC Codes Over GF(q) (cited 1 time: 0 self-citations, 1 by others)

18.
A free Z4 code C may be decoded by decoding its canonical image C̄ over Z2 twice in succession. Hence, a Chase decoder for C could employ as its hard-decision (HD) decoder a two-stage decoder which performs HD decoding on C̄ in each stage. Alternatively, one could have a two-stage soft-decision decoder by employing a Chase decoder for C̄ in each stage. We demonstrate that the latter approach can offer a significant reduction in complexity over the former, with little or no price to pay in terms of word-error-rate performance, particularly at low to moderate code rates.

19.
This paper considers a class of iterative message-passing decoders for low-density parity-check codes in which the decoder can choose its decoding rule from a set of decoding algorithms at each iteration. Each available decoding algorithm may have different per-iteration computation time and performance. With an appropriate choice of algorithm at each iteration, overall decoding latency can be reduced significantly, compared with standard decoding methods. Such a decoder is called a gear-shift decoder because it changes its decoding rule (shifts gears) in order to guarantee both convergence and maximum decoding speed (minimum decoding latency). Using extrinsic information transfer charts, the problem of finding the optimum (minimum decoding latency) gear-shift decoder is formulated as a computationally tractable dynamic program. The optimum gear-shift decoder is proved to have a decoding threshold equal to or better than the best decoding threshold among those of the available algorithms. In addition to speeding up software decoder implementations, gear-shift decoding can be applied to optimize a pipelined hardware decoder, minimizing hardware cost for a given decoder throughput.
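The dynamic-programming idea can be illustrated with a toy sketch over a discretized mutual-information axis: each available algorithm is modeled by a per-iteration transfer map and a per-iteration cost, and the program computes, for every state, the cheapest way to reach the convergence target. The transfer maps, costs, grid, and target below are all illustrative placeholders; the paper's formulation via extrinsic information transfer charts is more refined than this sketch.

```python
import numpy as np

def gear_shift_schedule(transfers, costs, target=0.999, grid=1001):
    """transfers: list of per-iteration maps I -> I'; costs: per-iteration latency of each algorithm."""
    I = np.linspace(0.0, 1.0, grid)
    best = np.full(grid, np.inf)                  # minimum remaining latency from state I[k]
    choice = np.full(grid, -1)                    # which algorithm to run in state I[k]
    best[I >= target] = 0.0                       # converged states need no further iterations
    for k in range(grid - 1, -1, -1):             # sweep from high to low mutual information
        if best[k] == 0.0:
            continue
        for a, (f, c) in enumerate(zip(transfers, costs)):
            # Conservative rounding down to the grid; only strictly improving transitions count.
            nxt = min(int(np.searchsorted(I, f(I[k]), side='right')) - 1, grid - 1)
            if nxt > k and c + best[nxt] < best[k]:
                best[k], choice[k] = c + best[nxt], a
    return best[0], choice                        # total latency from I = 0 and the per-state policy
```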

20.
Limited-trial Chase decoding (cited 1 time: 0 self-citations, 1 by others)
Chase decoders permit flexible use of reliability information in algebraic decoding algorithms for error-correcting block codes of Hamming distance d. The least complex version of the original Chase algorithms uses roughly d/2 trials of a conventional binary decoder, after which the best decoding result is selected as the final output. On certain channels, this approach achieves asymptotically the same performance as maximum-likelihood (ML) decoding. In this correspondence, the performance of Chase-like decoders with even fewer trials is studied. Most strikingly, it turns out that asymptotically optimal performance can be achieved by a version that uses only about d/4 trials.
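For reference, the roughly-d/2-trial pattern family of the least complex original Chase algorithm mentioned above can be sketched as below (complement the i least reliable positions for i = 0, 2, ..., d−2, assuming even d). The correspondence's ~d/4-trial variants use even fewer patterns; their exact selection is not reproduced here.

```python
import numpy as np

def chase3_patterns(r, d):
    """Yield the hard-decision trial words of the ~d/2-trial Chase variant for received vector r."""
    r = np.asarray(r, dtype=float)
    hard = (r < 0).astype(int)                   # hard decisions under BPSK (negative sample -> bit 1)
    order = np.argsort(np.abs(r))                # positions sorted from least to most reliable
    for i in range(0, d - 1, 2):                 # complement the i least reliable bits, i = 0, 2, ..., d-2
        trial = hard.copy()
        trial[order[:i]] ^= 1
        yield trial
```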

