Similar Literature
20 similar references found.
1.
In this paper, we introduce stopping sets for iterative row-column decoding of product codes using optimal constituent decoders. When transmitting over the binary erasure channel (BEC), iterative row-column decoding of product codes using optimal constituent decoders either succeeds or stops in the unique maximum-size stopping set that is contained in the (initial) set of erased positions. Let Cp denote the product code of two binary linear codes Cc and Cr with minimum distances dc and dr and second generalized Hamming weights d2(Cc) and d2(Cr), respectively. A stopping set S is called a noncodeword stopping set if there is no codeword in Cp whose support set is S. We show that the size smin of the smallest noncodeword stopping set is at least min(dr·d2(Cc), dc·d2(Cr)) > dr·dc, where the inequality follows from the Griesmer bound. An immediate consequence is that the erasure probability after iterative row-column decoding of (finite-length) product codes on the BEC using optimal constituent decoders approaches the erasure probability after maximum-likelihood decoding as the channel erasure probability decreases. We also give an explicit formula for the number of noncodeword stopping sets of size smin, which depends only on the first nonzero coefficient of the constituent (row and column) first and second support weight enumerators, for the case when d2(Cr) < 2dr and d2(Cc) < 2dc. Finally, as an example, we apply the derived results to the product of two (extended) Hamming codes and two Golay codes.
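To make the decoding procedure above concrete, here is a minimal Python sketch (not the paper's implementation) of iterative row-column decoding of a product code over the BEC, where each constituent decoder is an optimal (ML) erasure decoder realized by Gaussian elimination over GF(2). The parity-check matrices Hr and Hc, the code array, and the boolean erasure mask are assumed to be numpy 0/1 integer arrays; erased entries may hold any placeholder value. When the loop stalls with erasures left, the remaining positions form the maximum-size stopping set contained in the initial erasure pattern.

import numpy as np

def ml_erasure_decode(H, word, erased):
    """Optimal (ML) erasure decoding of one linear code: solve H c = 0 over GF(2)
    for the erased coordinates.  Returns the completed word, or None when the
    erased columns of H are linearly dependent (no unique solution)."""
    E = sorted(erased)
    known = [i for i in range(H.shape[1]) if i not in erased]
    A = H[:, E] % 2                                 # coefficients of the unknowns
    b = (H[:, known] @ word[known]) % 2             # syndrome of the known part
    row = 0
    for col in range(len(E)):                       # Gaussian elimination over GF(2)
        piv = next((r for r in range(row, A.shape[0]) if A[r, col]), None)
        if piv is None:
            return None                             # rank deficiency: not uniquely decodable
        A[[row, piv]], b[[row, piv]] = A[[piv, row]], b[[piv, row]]
        for r in range(A.shape[0]):
            if r != row and A[r, col]:
                A[r] ^= A[row]
                b[r] ^= b[row]
        row += 1
    out = word.copy()
    out[E] = b[:len(E)]                             # unique solution for the erased bits
    return out

def product_code_erasure_decode(Hr, Hc, arr, erased):
    """Iterative row-column decoding of the product of Cr (rows, checks Hr) and
    Cc (columns, checks Hc) over the BEC.  'arr' is the code array and 'erased'
    a boolean mask of the same shape.  The leftover erasures on exit are the
    maximum-size stopping set contained in the initial erasure pattern."""
    arr, erased = arr.copy(), erased.copy()
    progress = True
    while progress and erased.any():
        progress = False
        for i in range(arr.shape[0]):               # row decoders (code Cr)
            idx = set(map(int, np.flatnonzero(erased[i])))
            if idx and (dec := ml_erasure_decode(Hr, arr[i], idx)) is not None:
                arr[i], erased[i], progress = dec, False, True
        for j in range(arr.shape[1]):               # column decoders (code Cc)
            idx = set(map(int, np.flatnonzero(erased[:, j])))
            if idx and (dec := ml_erasure_decode(Hc, arr[:, j], idx)) is not None:
                arr[:, j], erased[:, j], progress = dec, False, True
    return arr, erased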

2.
Weight Distribution of Low-Density Parity-Check Codes (Cited by: 1; self-citations: 0; citations by others: 1)
We derive the average weight distribution function and its asymptotic growth rate for low-density parity-check (LDPC) code ensembles. We show that the growth rate of the minimum distance of LDPC codes depends only on the degree distribution pair. It turns out that the capacity-achieving sequences of standard (unstructured) LDPC codes under iterative decoding over the binary erasure channel (BEC) known to date have minimum distance growing only sublinearly in the block length.

3.
We show how asymptotic estimates of powers of polynomials with nonnegative coefficients can be used in the analysis of low-density parity-check (LDPC) codes. In particular, we show how these estimates can be used to derive the asymptotic distance spectrum of both regular and irregular LDPC code ensembles. We then consider the binary erasure channel (BEC). Using these estimates we derive lower bounds on the error exponent, under iterative decoding, of LDPC codes used over the BEC. Both regular and irregular code structures are considered. These bounds are compared to the corresponding bounds when optimal (maximum-likelihood (ML)) decoding is applied.

4.
This correspondence studies the performance of the iterative decoding of low-density parity-check (LDPC) code ensembles that have linear typical minimum distance and stopping set size. We first obtain a lower bound on the achievable rates of these ensembles over memoryless binary-input output-symmetric channels. We improve this bound for the binary erasure channel. We also introduce a method to construct the codes meeting the lower bound for the binary erasure channel. Then, we give upper bounds on the rate of LDPC codes with linear minimum distance when their right degree distribution is fixed. We compare these bounds to the previously derived upper bounds on the rate when there is no restriction on the code ensemble.

5.
This paper is devoted to the finite-length analysis of turbo decoding over the binary erasure channel (BEC). The performance of iterative belief-propagation decoding of low-density parity-check (LDPC) codes over the BEC can be characterized in terms of stopping sets. We describe turbo decoding on the BEC, which is simpler than turbo decoding on other channels. We then adapt the concept of stopping sets to turbo decoding and state an exact condition for decoding failure: when turbo decoding is applied until either the transmitted codeword has been recovered or the decoder fails to progress further, the set of erased positions remaining when the decoder stops is equal to the unique maximum-size turbo stopping set that is contained in the set of initially erased positions. Furthermore, we present some improvements of the basic turbo decoding algorithm on the BEC. The proposed improved turbo decoding algorithm has substantially better error performance, as illustrated by the given simulation results. Finally, we give an expression for the turbo stopping set size enumerating function under the uniform interleaver assumption, and an efficient algorithm for enumerating small-size turbo stopping sets for a particular interleaver. The solution is based on the algorithm proposed by Garello et al. in 2001 to compute an exhaustive list of all low-weight codewords in a turbo code.

6.
In this letter, we show how to compute the asymptotic growth rate of the input-output weight enumerator (AGR-IOWE) for some accumulate-based codes by using sharp tools developed previously. Numerical results on the AGR-IOWE for irregular repeat-accumulate (IRA) codes, systematic regular RA (SRA) codes, and concatenated zigzag codes are reported. It is observed that an SRA code has the same AGR-IOWE as a comparable concatenated zigzag code. For both SRA and concatenated zigzag codes, when the code rate is kept fixed, increasing the grouping factor of the component punctured accumulate code may result in better asymptotic performance under maximum-likelihood decoding, but often in worse performance under iterative sum-product decoding.

7.
Thanks to the probabilistic message passing performed between its component decoders, a turbo decoder is able to provide strong error correction close to the theoretical limit. However, the minimum Hamming distance (dmin) of a turbo code may not be sufficiently large to ensure large asymptotic gains at very low error rates (the so-called flattening effect). Increasing the dmin of a turbo code may involve using component encoders with a larger number of states, devising more sophisticated internal permutations, or increasing the number of component encoders. This paper addresses the latter option and proposes a modified turbo code in which a fraction of the parity bits are encoded by a third, rate-1 encoder. The result is a noticeably increased dmin, which improves turbo decoder performance at low error rates. Performance comparisons with turbo codes and serially concatenated convolutional codes are given.

8.
We derive upper and lower bounds on the encoding and decoding complexity of two capacity-achieving ensembles of irregular repeat-accumulate (IRA1 and IRA2) codes on the binary erasure channel (BEC). These bounds are expressed in terms of the gap between the channel capacity and the rate of a typical code from the ensemble for which reliable communication is achievable under message-passing iterative (MPI) decoding. The complexity of the ensemble of IRA1 codes grows like the negative logarithm of the gap to capacity. On the other hand, the complexity of the ensemble of IRA2 codes, for any choice of the degree distribution, grows at least like the inverse square root of the gap to capacity, and at most like the inverse of the gap to capacity.

9.
Stopping set distribution of LDPC code ensembles (Cited by: 1; self-citations: 0; citations by others: 1)
Stopping sets determine the performance of low-density parity-check (LDPC) codes under iterative decoding over erasure channels. We derive several results on the asymptotic behavior of stopping sets in Tanner-graph ensembles, including the following. An expression for the normalized average stopping set distribution, yielding, in particular, a critical fraction of the block length above which codes have exponentially many stopping sets of that size. A relation between the degree distribution and the likely size of the smallest nonempty stopping set, showing that for a √(1 − λ'(0)ρ'(1)) fraction of codes with λ'(0)ρ'(1) < 1, and in particular for almost all codes with smallest variable degree > 2, the smallest nonempty stopping set is linear in the block length. Bounds on the average block error probability as a function of the erasure probability ε, showing in particular that for codes with lowest variable degree 2, if ε is below a certain threshold, the asymptotic average block error probability is 1 − √(1 − λ'(0)ρ'(1)ε).
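As a quick illustration of the two expressions quoted above, the following Python snippet evaluates the fraction √(1 − λ'(0)ρ'(1)) and the asymptotic average block error probability 1 − √(1 − λ'(0)ρ'(1)ε) for a hypothetical degree-distribution pair; the values of λ'(0), ρ(x), and ε are illustrative assumptions, not taken from the paper.

import math

lambda_prime_0 = 0.1                     # λ'(0) = λ2, fraction of edges on degree-2 variable nodes (assumed)
rho = {6: 0.6, 7: 0.4}                   # ρ(x) = 0.6 x^5 + 0.4 x^6, i.e. check degrees 6 and 7 (assumed)
rho_prime_1 = sum(f * (d - 1) for d, f in rho.items())   # ρ'(1) = Σ ρ_d (d - 1)

prod = lambda_prime_0 * rho_prime_1
if prod < 1:                             # condition λ'(0)ρ'(1) < 1 from the abstract
    frac_sublinear = math.sqrt(1 - prod)          # fraction of codes in the second result
    eps = 0.3                                     # channel erasure probability (assumed)
    p_block = 1 - math.sqrt(1 - prod * eps)       # asymptotic average block error probability
    print(frac_sublinear, p_block)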

10.
The design of low-density parity-check (LDPC) codes under hybrid iterative/maximum-likelihood decoding is addressed for the binary erasure channel (BEC). Specifically, we focus on generalized irregular repeat-accumulate (GeIRA) codes, which offer both efficient encoding and design flexibility. We show that properly designed GeIRA codes tightly approach the performance of an ideal maximum distance separable (MDS) code, even for short block sizes. For example, our (2048,1024) code reaches a codeword error rate of 10^-5 at channel erasure probability ε = 0.450, where an ideal (2048,1024) MDS code would reach the same error rate at ε = 0.453.
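The MDS benchmark quoted above can be checked directly: an ideal (n, k) MDS code on the BEC fails exactly when more than n − k positions are erased, so its codeword error rate is a binomial tail. The short Python sketch below (my own check, not code from the paper; computed in the log domain to avoid overflow in the binomial coefficients) gives a value on the order of 10^-5 for n = 2048, k = 1024, ε = 0.453, consistent with the figure cited.

from math import lgamma, log, exp

def mds_block_error_rate(n, k, eps):
    """Probability that more than n - k of n positions are erased (BEC with erasure prob. eps)."""
    def log_pmf(e):                       # log of C(n, e) eps^e (1 - eps)^(n - e)
        return (lgamma(n + 1) - lgamma(e + 1) - lgamma(n - e + 1)
                + e * log(eps) + (n - e) * log(1 - eps))
    return sum(exp(log_pmf(e)) for e in range(n - k + 1, n + 1))

print(mds_block_error_rate(2048, 1024, 0.453))    # on the order of 1e-5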

11.
This paper introduces ensembles of systematic accumulate-repeat-accumulate (ARA) codes which asymptotically achieve capacity on the binary erasure channel (BEC) with bounded complexity, per information bit, of encoding and decoding. It also introduces symmetry properties which play a central role in the construction of new capacity-achieving ensembles for the BEC. The results here improve on the tradeoff between performance and complexity provided by previous constructions of capacity-achieving code ensembles defined on graphs. The superiority of ARA codes with moderate to large block length is exemplified by computer simulations which compare their performance with that of previously reported capacity-achieving ensembles of low-density parity-check (LDPC) and irregular repeat-accumulate (IRA) codes. ARA codes also have the advantage of being systematic.

12.
Application of a Concatenated LT and q-LDPC Coding Scheme in Deep-Space Communications (Cited by: 2; self-citations: 0; citations by others: 2)
Motivated by the need for long erasure-correcting codes in deep-space communications, this paper proposes a concatenation of LT (Luby Transform) codes and q-ary LDPC codes. Balancing performance against complexity, the combination of an 8-ary LDPC code with 8PSK modulation is chosen to form an equivalent erasure channel, while an LT code, whose length can be chosen flexibly and whose encoding and decoding are simple, performs the erasure correction. Two short 8-ary LDPC codes are designed, and the error-correction performance of the complete concatenated system is simulated. The simulation results show that the 8-ary LDPC codes outperform binary LDPC codes with the same source information rate and code rate, and that when the equivalent packet erasure probability does not exceed 0.1, the system bit error rate tends to 0 with probability 1.
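For readers unfamiliar with the outer code in this scheme, the following is a minimal Python sketch of LT encoding: each output packet is the XOR of a randomly chosen set of source packets whose size is drawn from the robust soliton degree distribution. The parameters c and delta and the 100-packet source block are illustrative assumptions and do not correspond to the codes designed in the paper.

import math, random

def robust_soliton(k, c=0.03, delta=0.5):
    """Robust soliton degree distribution; returns P(degree = d) for d = 1..k."""
    R = c * math.log(k / delta) * math.sqrt(k)
    tau = [R / (d * k) if d < k / R else 0.0 for d in range(1, k + 1)]
    tau[int(round(k / R)) - 1] = R * math.log(R / delta) / k   # spike near d = k/R
    rho = [1.0 / k] + [1.0 / (d * (d - 1)) for d in range(2, k + 1)]
    mu = [t + r for t, r in zip(tau, rho)]
    Z = sum(mu)
    return [m / Z for m in mu]

def lt_encode_packet(source, dist):
    """One LT output packet: the XOR of a randomly chosen set of source packets."""
    d = random.choices(range(1, len(source) + 1), weights=dist, k=1)[0]
    neighbours = random.sample(range(len(source)), d)
    pkt = 0
    for i in neighbours:
        pkt ^= source[i]
    return neighbours, pkt                 # the decoder needs the neighbour list (or its seed)

source = [random.getrandbits(8) for _ in range(100)]   # 100 one-byte source packets (assumed)
dist = robust_soliton(len(source))
print(lt_encode_packet(source, dist))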

13.
This paper investigates decoding of low-density parity-check (LDPC) codes over the binary erasure channel (BEC). We study both iterative and maximum-likelihood (ML) decoding of LDPC codes on this channel and derive bounds on the performance of ML decoding on the BEC. We then present an improved decoding algorithm. The proposed algorithm has almost the same complexity as standard iterative decoding, but better performance: simulations show that it can decrease the error rate by several orders of magnitude. We also provide some graph-theoretic properties of different decoding algorithms for LDPC codes over the BEC, which we believe are useful for better understanding LDPC decoding methods, in particular for finite-length codes.
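For reference, here is a minimal Python sketch of the standard iterative ("peeling") decoder for LDPC codes on the BEC that serves as the baseline above: repeatedly find a parity check with exactly one erased neighbour and recover that bit from the check equation; decoding stalls exactly when the remaining erased positions form a stopping set. The matrix layout and function interface are assumptions for illustration.

import numpy as np

def peel(H, word, erased):
    """H: m x n parity-check matrix (0/1 numpy array); word: received bits with
    arbitrary values in the erased positions; erased: set of erased indices.
    Returns the (partially) completed word and the set of unresolved erasures."""
    word, erased = word.copy(), set(erased)
    progress = True
    while progress and erased:
        progress = False
        for row in H:
            unknown = [j for j in np.flatnonzero(row) if j in erased]
            if len(unknown) == 1:              # exactly one erased bit in this check
                j = unknown[0]
                known = [i for i in np.flatnonzero(row) if i != j]
                word[j] = word[known].sum() % 2   # the parity check forces the erased bit
                erased.remove(j)
                progress = True
    return word, erased                         # leftover erasures form a stopping set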

14.
Mutual information transfer characteristics of soft-in/soft-out decoders are proposed as a tool to better understand the convergence behavior of iterative decoding schemes. The exchange of extrinsic information is visualized as a decoding trajectory in the extrinsic information transfer chart (EXIT chart). This allows the prediction of the turbo cliff position and of the bit error rate after an arbitrary number of iterations. The influence of code memory, code polynomials, and different constituent codes on the convergence behavior is studied for parallel concatenated codes. A code search based on the EXIT chart technique has been performed, yielding new recursive systematic convolutional constituent codes that exhibit turbo cliffs at lower signal-to-noise ratios than attainable with previously known constituent codes.
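The decoding-trajectory idea can be sketched in a few lines of Python: each constituent decoder is represented only by its extrinsic-information transfer function, and the extrinsic output of one decoder is fed back as the a-priori input of the other until the trajectory stops moving. The transfer curves T1 and T2 below are toy illustrative functions, not measured characteristics of any real decoder.

def exit_trajectory(T1, T2, iterations=20):
    """Trace the zig-zag decoding trajectory between two transfer functions."""
    points, ia1 = [], 0.0
    for _ in range(iterations):
        ie1 = T1(ia1)                        # decoder 1: a-priori information -> extrinsic information
        ie2 = T2(ie1)                        # decoder 2 uses IE1 as its a-priori input
        points.append((ie1, ie2))
        if abs(ie2 - ia1) < 1e-6:            # trajectory no longer moves: tunnel closed (or decoding done)
            break
        ia1 = ie2
    return points

# Assumed toy transfer curves for illustration only.
T1 = lambda ia: min(1.0, 0.30 + 0.75 * ia)
T2 = lambda ia: min(1.0, 0.25 + 0.80 * ia)
for point in exit_trajectory(T1, T2):
    print(point)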

15.
A generalized low-density parity-check (GLDPC) code is a low-density parity-check code in which the constraint nodes of the code graph are block codes, rather than single parity checks. In this paper, we study GLDPC codes which have BCH or Reed-Solomon codes as subcodes under bounded-distance decoding (BDD). The performance of the proposed scheme is investigated in the limiting case of an infinite-length (cycle-free) code used over a binary erasure channel (BEC), and the corresponding thresholds for iterative decoding are derived. The performance of the proposed scheme for finite code lengths over a BEC is investigated as well. Structures responsible for decoding failures are defined, and a theoretical analysis over the ensemble of GLDPC codes is derived that yields the exact ensemble-average bit and block error rates. Unfortunately, this study shows that GLDPC codes do not compare favorably with their LDPC counterparts over the BEC. Fortunately, it is also shown that, under certain conditions, the objects identified in the analysis of GLDPC codes over a BEC and the corresponding theoretical results remain useful for deriving tight lower bounds on the performance of GLDPC codes over a binary symmetric channel (BSC). Simulation results show that the proposed method yields competitive performance with a good decoding-complexity trade-off for the BSC.

16.
In this correspondence, we first investigate some analytical aspects of the recently proposed improved decoding algorithm for low-density parity-check (LDPC) codes over the binary erasure channel (BEC). We derive a necessary and sufficient condition for the improved decoding algorithm to successfully complete decoding when the decoder is allowed a predetermined number of guesses after standard message passing terminates at a stopping set. Furthermore, we present improved bounds on the number of bits to be guessed for successful completion of the decoding process when a stopping set is encountered. Under suitable conditions, we derive a lower bound on the number of iterations to be performed for complete decoding of the stopping set. We then present a novel, superior improved decoding algorithm for LDPC codes over the BEC. The proposed algorithm combines the observation that a considerable fraction of the unsatisfied check nodes in the neighborhood of a stopping set have degree two with the concept of guessing bits, performing simple and intuitive graph-theoretic manipulations on the Tanner graph. The proposed decoding algorithm has a complexity similar to previous improved decoding algorithms. Finally, we present simulation results for short-length codes over the BEC that demonstrate the superiority of our algorithm over previous improved decoding algorithms for a wide range of bit error rates.
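A much simplified way to see the effect of guessing (this is an illustration only, not the algorithm proposed in the correspondence) is to brute-force the guessed bits: when peeling stalls, pick a few still-erased positions, try every assignment of their values, re-run the peeling decoder from the earlier sketch, and keep only the assignments under which all parity checks end up satisfied. The sketch below assumes the peel() function defined above, numpy 0/1 arrays, and a small, externally chosen list of guess positions.

from itertools import product
import numpy as np

def peel_with_guesses(H, word, erased, guess_positions):
    """Try all 2^g assignments of the guessed bits and peel each; return the
    fully recovered, parity-consistent candidates (a unique one means success)."""
    solutions = []
    for values in product((0, 1), repeat=len(guess_positions)):
        w = word.copy()
        w[list(guess_positions)] = values            # commit this set of guesses
        remaining = set(erased) - set(guess_positions)
        w, remaining = peel(H, w, remaining)         # peel() from the sketch above
        if not remaining and not ((H @ w) % 2).any():  # fully recovered and all checks satisfied
            solutions.append(w)
    return solutions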

17.
A parallel concatenated convolutional coding scheme consists of two constituent systematic convolutional encoders linked by an interleaver. The information bits at the input of the first encoder are scrambled by the interleaver before entering the second encoder. The codewords of the parallel concatenated code consist of the information bits followed by the parity-check bits of both encoders. Parallel concatenated codes (turbo codes), decoded through an iterative decoding algorithm of relatively low complexity, have been shown to yield remarkable coding gains close to theoretical limits. We characterize the separate contributions that the interleaver length and the constituent codes make to the overall performance of the parallel concatenated code, and present some guidelines for the optimal design of the constituent convolutional codes.

18.
This paper investigates the joint iterative decoding of low-density parity-check (LDPC) codes and channels with memory. Sequences of irregular LDPC codes are presented that achieve, under joint iterative decoding, the symmetric information rate of a class of channels with memory and erasure noise. This proves, for the first time, that joint iterative decoding can be information-rate lossless with respect to maximum-likelihood decoding. These results build on previous capacity-achieving code constructions for the binary erasure channel. A two-state intersymbol-interference channel with erasure noise, known as the dicode erasure channel, is used as a concrete example throughout the paper.

19.
The construction of finite-length irregular LDPC codes with low error floors is currently an attractive research problem. In particular, for the binary erasure channel (BEC), the problem is to find, within selected irregular LDPC code ensembles, codes whose minimum stopping set size is maximized. Due to the lack of analytical solutions to this problem, a simple but powerful heuristic design algorithm, the approximate cycle extrinsic message degree (ACE) constrained design algorithm, has recently been proposed. Building upon the ACE metric associated with a cycle in a code graph, we introduce the ACE spectrum of LDPC codes as a useful tool for evaluating codes from selected irregular LDPC code ensembles. Using the ACE spectrum, we generalize the ACE constrained design algorithm, making it more flexible and efficient. We justify the ACE spectrum approach through examples and simulation results.
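For concreteness, the ACE metric referred to above can be computed as follows: the ACE of a cycle in a Tanner graph is the sum of (d_v − 2) over the variable nodes on the cycle, where d_v is the variable-node degree, i.e. the column weight of the parity-check matrix. The toy matrix and cycle in the Python sketch below are illustrative assumptions.

import numpy as np

def ace_of_cycle(H, cycle_variable_nodes):
    """ACE of a cycle given the variable nodes it passes through."""
    degrees = H.sum(axis=0)                  # column weights = variable-node degrees
    return int(sum(degrees[v] - 2 for v in cycle_variable_nodes))

# Example: a toy 4 x 6 parity-check matrix and a hypothetical cycle through
# variable nodes 0, 2 and 4 (all of degree 2), so the ACE is 0.
H = np.array([[1, 1, 0, 1, 0, 0],
              [0, 1, 1, 0, 1, 0],
              [1, 0, 1, 0, 0, 1],
              [0, 0, 0, 1, 1, 1]])
print(ace_of_cycle(H, [0, 2, 4]))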

20.
We introduce the notion of the stopping redundancy hierarchy of a linear block code as a measure of the tradeoff between performance and complexity of iterative decoding for the binary erasure channel. We derive lower and upper bounds for the stopping redundancy hierarchy via Lovász's Local Lemma (LLL) and Bonferroni-type inequalities, and specialize them for codes with cyclic parity-check matrices. Based on the observed properties of parity-check matrices with good stopping redundancy characteristics, we develop a novel decoding technique, termed automorphism group decoding, that combines iterative message passing and permutation decoding. We also present bounds on the smallest number of permutations of an automorphism group decoder needed to correct any set of erasures up to a prescribed size. Simulation results demonstrate that for a large number of algebraic codes, the performance of the new decoding method is close to that of maximum-likelihood (ML) decoding.
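The idea behind automorphism group decoding can be sketched briefly (this illustrates the principle only, not the authors' decoder): if iterative decoding stalls on a stopping set, permute the received word by a code automorphism, which maps codewords to codewords, decode again, and map the result back. The Python sketch below reuses the peel() function from the earlier sketch; the permutation list is a placeholder, since the actual automorphism group depends on the code.

import numpy as np

def automorphism_decode(H, word, erased, permutations):
    """Try each coordinate permutation (assumed to be a code automorphism,
    given as a numpy index array with perm[j] = position read into slot j)."""
    for perm in permutations:
        permuted_erased = {j for j in range(len(perm)) if perm[j] in erased}
        w, remaining = peel(H, word[perm], permuted_erased)   # peel() from the earlier sketch
        if not remaining:                    # this permutation dislodged the stopping set
            return w[np.argsort(perm)]       # undo the permutation
    return None                              # every attempted permutation failed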

