Similar Documents
20 similar documents found.
1.
The construction of finite-length irregular LDPC codes with low error floors is currently an attractive research problem. In particular, for the binary erasure channel (BEC), the problem is to find the elements of selected irregular LDPC code ensembles with the size of their minimum stopping set being maximized. Due to the lack of analytical solutions to this problem, a simple but powerful heuristic design algorithm, the approximate cycle extrinsic message degree (ACE) constrained design algorithm, has recently been proposed. Building upon the ACE metric associated with a cycle in a code graph, we introduce the ACE spectrum of LDPC codes as a useful tool for evaluation of codes from selected irregular LDPC code ensembles. Using the ACE spectrum, we generalize the ACE constrained design algorithm, making it more flexible and efficient. We justify the ACE spectrum approach through examples and simulation results.
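As a rough illustration of the ACE metric this abstract builds on: the ACE of a cycle is the sum, over the variable nodes on the cycle, of (d_v - 2), where d_v is the degree of variable node v. The Python sketch below (the toy parity-check matrix and the function name are illustrative assumptions, not taken from the paper) computes this value for one short cycle.

```python
# A minimal sketch of the ACE value of a cycle in a Tanner graph, assuming
# the usual definition ACE = sum of (d_v - 2) over the cycle's variable nodes.
import numpy as np

def ace_of_cycle(H, cycle_var_nodes):
    """ACE of a cycle = sum over its variable nodes of (d_v - 2),
    where d_v is the column weight (degree) of variable node v in H."""
    col_degrees = H.sum(axis=0)
    return int(sum(col_degrees[v] - 2 for v in cycle_var_nodes))

# Toy example: variable nodes 0 and 1 are both checked by rows 0 and 1,
# forming a length-4 cycle.
H = np.array([[1, 1, 0, 1, 0, 0],
              [1, 1, 1, 0, 0, 0],
              [0, 0, 1, 1, 1, 0],
              [0, 0, 0, 0, 1, 1]])

print(ace_of_cycle(H, [0, 1]))   # both variables have degree 2, so ACE = 0
```

A larger ACE means the cycle's variable nodes have more edges leading out of the cycle, which is why the ACE-constrained design (and its spectrum generalization) rejects graphs whose short cycles have small ACE.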

2.
Structured IRA Codes: Performance Analysis and Construction
In this letter, we present design techniques for structured irregular repeat-accumulate (S-IRA) codes with low error-rate floors. These S-IRA codes need not be quasi-cyclic, permitting flexibility in code dimension, length, and rate. We present a simple ensemble estimate of the level of the error-rate floor of finite-length IRA codes on the additive white Gaussian noise channel. This performance estimate provides guidance on the choice of IRA code column weights which yield low floors. We also present two design algorithms for S-IRA codes accompanied by software- and hardware-based performance results which demonstrate their low floors. Lastly, we present two design algorithms for multirate S-IRA code families implementable by a single encoder/decoder.

3.
An efficient method for analyzing the performance of finite-length low-density parity-check (LDPC) codes in the waterfall region, when transmission takes place on a memoryless binary-input output-symmetric channel, is proposed. This method is based on studying the variations of the channel quality around its expected value when observed during the transmission of a finite-length codeword. We model these variations with a single parameter. This parameter is then viewed as a random variable and its probability distribution function is obtained. Assuming that a decoding failure is the result of an observed channel worse than the code's decoding threshold, the block error probability of finite-length LDPC codes under different decoding algorithms is estimated. Using an extrinsic information transfer chart analysis, the bit error probability is obtained from the block error probability. Different parameters can be used for modeling the channel variations. In this work, two such parameters are studied. Through examples, it is shown that this method can closely predict the performance of LDPC codes of a few thousand bits or longer in the waterfall region.

4.
We study the throughput of hybrid automatic retransmission request (H-ARQ) schemes based on incremental redundancy (IR) over a block-fading channel. We provide an information-theoretic analysis assuming binary random coding and typical-set decoding. Then, we study the performance of low-density parity-check (LDPC) code ensembles with iterative belief-propagation decoding, and show that, under the hypothesis of infinite-length codes, LDPCs yield almost optimal performance. Unfortunately, standard finite-length LDPC ensembles incur a considerable performance loss with respect to their infinite-length counterpart, because of their poor frame-error rate (FER) performance. In order to recover part of this loss, we propose two simple yet effective methods: using a modified LDPC ensemble designed to improve the FER; and using an outer selective-repeat protocol acting on smaller packets of information bits. Surprisingly, these apparently very different methods yield almost the same performance gain and recover a considerable fraction of the optimal throughput, thus making practical finite-length LDPC codes very attractive for wireless data communications based on IR H-ARQ schemes.

5.
For LDPC-like codes such as LDPC, GLDPC, and DGLDPC codes, it is well known that the error floor can be caused by codewords of small weight or stopping sets of small size. In this paper, we investigate the computation of asymptotic weight enumerators as a convenient tool for determining good distributions for code ensembles. In addition, by analyzing the first-order approximation, we derive a condition for obtaining a negative asymptotic growth rate of the codewords of small linear-sized weight, which is an important constraint for distribution optimization. The weight enumerators of turbo and repeat-accumulate codes are also investigated. Furthermore, we extend our results to nonbinary DGLDPC codes. Generalizations to N-layer and convolutional-code-based LDPC-like codes are also developed.

6.
The complexity of brute-force encoding of low-density parity-check (LDPC) codes is proportional to the square of the block length. Richardson and Urbanke have proposed efficient encoding algorithms for LDPC codes. These algorithms permute the parity-check matrix of the code iteratively, such that it becomes approximately lower triangular. We propose a new approach for efficient encoding of LDPC codes in which we modify the code ensemble to force an approximate lower triangular structure, thus eliminating the need to apply the algorithms of Richardson and Urbanke to this ensemble. We prove that the new ensemble has the same asymptotic threshold as the corresponding standard ensemble. The new ensemble can be used for linear-time encoding of an arbitrary code profile. Computer simulations confirm that the performances of the standard and new ensembles are also very similar when using finite-length codes.
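To make the link between a lower triangular structure and efficient encoding concrete, here is a minimal sketch (toy matrices and names are illustrative assumptions, not the paper's construction) of the extreme case in which the parity part of H is exactly lower triangular: the parity bits then follow from the systematic bits by substitution in linear time.

```python
# A minimal sketch of linear-time LDPC encoding when H = [A | T] with T lower
# triangular and unit diagonal over GF(2).  The toy matrices are assumptions.
import numpy as np

def encode_lower_triangular(A, T, s):
    """Solve T p = A s over GF(2) by forward substitution and return [s | p]."""
    m = T.shape[0]
    b = A.dot(s) % 2
    p = np.zeros(m, dtype=int)
    for i in range(m):
        # Row i fixes p[i] once p[0..i-1] are known.
        p[i] = (b[i] + T[i, :i].dot(p[:i])) % 2
    return np.concatenate([s, p])

A = np.array([[1, 0, 1],
              [0, 1, 1],
              [1, 1, 0]])
T = np.array([[1, 0, 0],
              [1, 1, 0],
              [0, 1, 1]])
s = np.array([1, 0, 1])
c = encode_lower_triangular(A, T, s)
assert not (np.hstack([A, T]).dot(c) % 2).any()   # H c = 0, so c is a codeword
print(c)
```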

7.
In this work, we give good concatenated code ensembles for the binary erasure channel (BEC). In particular, we consider repeat multiple-accumulate (RMA) code ensembles formed by the serial concatenation of a repetition code with multiple accumulators, and the hybrid concatenated code (HCC) ensembles recently introduced by Koller et al. (5th Int. Symp. on Turbo Codes & Rel. Topics, Lausanne, Switzerland) consisting of an outer multiple parallel concatenated code serially concatenated with an inner accumulator. We introduce stopping sets for iterative constituent-code-oriented decoding using maximum a posteriori erasure correction in the constituent codes. We then analyze the asymptotic stopping set distribution for RMA and HCC ensembles and show that their stopping distance hmin, defined as the size of the smallest nonempty stopping set, asymptotically grows linearly with the block length. Thus, these code ensembles are good for the BEC. It is shown that for RMA code ensembles, contrary to the asymptotic minimum distance dmin, whose growth rate coefficient increases with the number of accumulate codes, the hmin growth rate coefficient diminishes with the number of accumulators. We also consider random puncturing of RMA code ensembles and show that for sufficiently high code rates, the asymptotic hmin does not grow linearly with the block length, contrary to the asymptotic dmin, whose growth rate coefficient approaches the Gilbert-Varshamov bound as the rate increases. Finally, we give iterative decoding thresholds for the different code ensembles to compare their convergence properties.
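For readers less familiar with the stopping sets that this abstract generalizes to constituent-code-oriented decoding, the standard Tanner-graph definition is easy to state and to check: a set S of variable nodes is a stopping set if every check node connected to S is connected to it at least twice, which is exactly the condition under which iterative BEC decoding stalls. The sketch below (toy matrix assumed) tests this condition directly.

```python
# A minimal sketch of the classical stopping-set test for an LDPC parity-check
# matrix; the generalized stopping sets in the abstract reduce to this notion
# for plain LDPC codes.  The toy matrix is an illustrative assumption.
import numpy as np

def is_stopping_set(H, S):
    """True if every check (row) whose support intersects S intersects it at least twice."""
    S = sorted(S)
    if not S:
        return True                    # the empty set is trivially a stopping set
    hits = H[:, S].sum(axis=1)         # how many members of S each check sees
    return bool(np.all((hits == 0) | (hits >= 2)))

H = np.array([[1, 1, 0, 1, 0],
              [1, 0, 1, 0, 1],
              [0, 1, 1, 0, 1]])

print(is_stopping_set(H, [0, 1, 2]))   # True: every check sees two of them
print(is_stopping_set(H, [0, 1]))      # False: check 1 sees only variable 0
```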

8.
Efficient encoding of low-density parity-check codes
Low-density parity-check (LDPC) codes can be considered serious competitors to turbo codes in terms of performance and complexity, and they are based on a similar philosophy: constrained random code ensembles and iterative decoding algorithms. We consider the encoding problem for LDPC codes. More generally, we consider the encoding problem for codes specified by sparse parity-check matrices. We show how to exploit the sparseness of the parity-check matrix to obtain efficient encoders. For the (3,6)-regular LDPC code, for example, the complexity of encoding is essentially quadratic in the block length. However, we show that the associated coefficient can be made quite small, so that encoding codes even of length n≃100000 is still quite practical. More importantly, we show that “optimized” codes actually admit linear-time encoding.

9.
In this paper, we propose a scheme to construct low-density parity-check (LDPC) codes that are suitable for unequal error protection (UEP). We derive density evolution (DE) formulas for the proposed unequal-error-protecting LDPC ensembles over the binary erasure channel (BEC). Using the DE formulas, we optimize the codes. For the finite-length cases, we compare our codes with some other LDPC codes, the time-sharing method, and a previous work on UEP using LDPC codes. Simulation results indicate the superiority of the proposed design methodology for UEP.
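As context for the DE formulas mentioned above, the sketch below shows what density evolution looks like in the simplest (non-UEP) case on the BEC, where it reduces to a one-dimensional recursion; the (3,6)-regular degree pair and the numerical values are illustrative assumptions, not the ensembles of the paper.

```python
# A minimal sketch of density evolution for a (3,6)-regular LDPC ensemble on
# the BEC: x_{l+1} = eps * lambda(1 - rho(1 - x_l)).  Degrees are assumptions.
def lam(x): return x ** 2     # edge-perspective variable degrees (all 3)
def rho(x): return x ** 5     # edge-perspective check degrees (all 6)

def de_converges(eps, iters=1000, tol=1e-10):
    """True if the erased-message fraction driven by DE decays to ~zero."""
    x = eps
    for _ in range(iters):
        x = eps * lam(1.0 - rho(1.0 - x))
        if x < tol:
            return True
    return False

# The (3,6)-regular BEC threshold is roughly 0.43, so these two straddle it:
print(de_converges(0.40), de_converges(0.45))   # True False
```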

10.
This paper presents a reduced-complexity approximate density evolution (DE) scheme for low-density parity-check (LDPC) codes in channels with memory in the form of a hidden Markov chain. This approximation is used to design degree sequences representing some of the best known LDPC code ensembles for the Gilbert-Elliott channel, and example optimizations are also given for other Markov channels. The problem of approximating the channel estimation is addressed by obtaining a specially constructed message-passing schedule in which the channel messages all approach their stable densities. It is shown that this new schedule is much easier to approximate than the standard schedule, but has the same ultimate performance in the limits of long block length and many decoding iterations. This result is extended to show that all message-passing schedules that satisfy mild conditions will have the same threshold under density evolution.
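For reference, the Gilbert-Elliott channel named above is a two-state hidden Markov channel in which each state acts as a BSC with its own crossover probability. The short simulation sketch below uses illustrative, assumed parameter values and is not tied to the paper's optimizations.

```python
# A minimal sketch of the Gilbert-Elliott channel: a hidden two-state Markov
# chain (good/bad), each state a BSC.  All numeric parameters are assumptions.
import random

def gilbert_elliott(bits, p_good=0.01, p_bad=0.3, p_g2b=0.05, p_b2g=0.2, seed=0):
    """Pass `bits` (0/1 ints) through the channel and return the received bits."""
    rng = random.Random(seed)
    bad = False
    out = []
    for b in bits:
        # Hidden-state transition, then a state-dependent bit flip.
        bad = (rng.random() >= p_b2g) if bad else (rng.random() < p_g2b)
        flip = rng.random() < (p_bad if bad else p_good)
        out.append(b ^ int(flip))
    return out

print(gilbert_elliott([0] * 30))   # error bursts tend to appear in the bad state
```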

11.
Constraining LDPC degree distributions for improved error floor performance
The error floor performance of finite-length irregular low-density parity-check (LDPC) codes can be very poor if code degree distributions are chosen to optimize the threshold performance. In this paper we show that by constraining the optimization process, a balance between threshold and error floor performance can be obtained. The resulting degree distributions give the best threshold performance subject to some minimum requirement on the error floor.

12.
A new module structure for convolutional codes is introduced and used to establish further links with quasi-cyclic and cyclic codes. The set of finite-weight codewords of an (n,k) convolutional code over F_q is shown to be isomorphic to an F_q[x]-submodule of F_{q^n}[x], where F_{q^n}[x] is the ring of polynomials in the indeterminate x over F_{q^n}, an extension field of F_q. Such a module can then be associated with a quasi-cyclic code of index n and block length nL, viewed as an F_q[x]-submodule of F_{q^n}[x]/⟨x^L - 1⟩, for any positive integer L. Using this new module approach, algebraic lower bounds on the free distance of a convolutional code are derived which can be read directly from the choice of polynomial generators. Links between convolutional codes and cyclic codes over the field extension F_{q^n} are also developed, and Bose-Chaudhuri-Hocquenghem (BCH)-type results are easily established in this setting. Techniques to find the optimal choice of the parameter L are outlined.

13.
This paper introduces ensembles of systematic accumulate-repeat-accumulate (ARA) codes which asymptotically achieve capacity on the binary erasure channel (BEC) with bounded complexity, per information bit, of encoding and decoding. It also introduces symmetry properties which play a central role in the construction of new capacity-achieving ensembles for the BEC. The results here improve on the tradeoff between performance and complexity provided by previous constructions of capacity-achieving code ensembles defined on graphs. The superiority of ARA codes with moderate to large block length is exemplified by computer simulations which compare their performance with that of previously reported capacity-achieving ensembles of low-density parity-check (LDPC) and irregular repeat-accumulate (IRA) codes. ARA codes also have the advantage of being systematic.

14.
Codes on sparse graphs have been shown to achieve remarkable performance in point-to-point channels with low decoding complexity. Most of the results in this area are based on experimental evidence and/or approximate analysis. The question of whether codes on sparse graphs can achieve the capacity of noisy channels with iterative decoding is still open, and has only been conclusively and positively answered for the binary erasure channel. On the other hand, codes on sparse graphs have been proven to achieve the capacity of memoryless, binary-input, output-symmetric channels with finite graphical complexity per information bit when maximum likelihood (ML) decoding is performed. In this paper, we consider transmission over finite-state channels (FSCs). We derive upper bounds on the average error probability of code ensembles with ML decoding. Based on these bounds we show that codes on sparse graphs can achieve the symmetric information rate (SIR) of FSCs, which is the maximum achievable rate with independently and uniformly distributed input sequences. In order to achieve rates beyond the SIR, we consider a simple quantization scheme that when applied to ensembles of codes on sparse graphs induces a Markov distribution on the transmitted sequence. By deriving average error probability bounds for these quantized code ensembles, we prove that they can achieve the information rates corresponding to the induced Markov distribution, and thus approach the FSC capacity.

15.
We determine the actual parameters for a class of one-point codes of length 64 and 65 over F_8 defined by Hansen and Stichtenoth (1990). Several codes have a minimum distance that exceeds the Feng-Rao bound. The codes with parameters [64,5,51], [64,10,42], [64,11,42], [64,12,40], [65,5,52], [65,10,43], [65,11,42], [65,12,41], and [65,13,40] are better than any known code.

16.
In this paper, we are concerned with the finite-length analysis of low-density parity-check (LDPC) codes when used over the binary erasure channel (BEC). The main result is an expression for the exact average bit and block erasure probability for a given regular ensemble of LDPC codes when decoded iteratively. We also give expressions for upper bounds on the average bit and block erasure probability for regular LDPC ensembles and the standard random ensemble under maximum-likelihood (ML) decoding. Finally, we present what we consider to be the most important open problems in this area.

17.
A [55,16,19] binary Goppa code is used to construct [57,17,17], [58,17,18], [59,17,19], and [60,17,20] codes. The first two codes have smaller redundancy than previously known codes (linear or nonlinear) of the same length and minimum distance. The last two codes have parameters previously attained only by nonlinear codes.

18.
Design methods for irregular repeat-accumulate codes
We optimize the random-like ensemble of irregular repeat-accumulate (IRA) codes for binary-input symmetric channels in the large block-length limit. Our optimization technique is based on approximating the density evolution (DE) of the messages exchanged by the belief-propagation (BP) message-passing decoder by a one-dimensional dynamical system. In this way, the code ensemble optimization can be solved by linear programming. We propose four such DE approximation methods, and compare the performance of the obtained code ensembles over the binary-symmetric channel (BSC) and the binary-antipodal input additive white Gaussian noise channel (BIAWGNC). Our results clearly identify the best among the proposed methods and show that the IRA codes obtained by these methods are competitive with respect to the best known irregular low-density parity-check (LDPC) codes. In view of this and the very simple encoding structure of IRA codes, they emerge as attractive design choices.
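To illustrate how ensemble optimization reduces to linear programming once DE is one-dimensional, the sketch below sets up the analogous LP for plain LDPC degree optimization on the BEC, where density evolution is exactly one-dimensional; the check degree, target erasure rate, and degree range are assumptions, and this is not the paper's IRA/BIAWGNC formulation.

```python
# A minimal sketch: maximize the design rate over the variable-degree
# distribution lambda for a fixed check degree, subject to the (linear-in-
# lambda) DE success condition sampled on a grid.  Parameters are assumptions.
import numpy as np
from scipy.optimize import linprog

eps = 0.45                          # target channel erasure probability
rho = lambda x: x ** 7              # all check nodes of degree 8 (assumed)
degrees = np.arange(2, 11)          # allowed variable-node degrees 2..10

# Maximizing sum_i lambda_i / i maximizes the design rate for fixed rho;
# linprog minimizes, so negate the objective.
c = -1.0 / degrees

# DE succeeds if eps * sum_i lambda_i (1 - rho(1-x))^(i-1) <= x on (0, eps].
grid = np.linspace(1e-4, eps, 200)
A_ub = np.array([[eps * (1 - rho(1 - x)) ** (d - 1) for d in degrees] for x in grid])
b_ub = grid
A_eq = np.ones((1, degrees.size))   # lambda must be a probability distribution
b_eq = [1.0]

res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=(0, 1))
lam = res.x
rate = 1 - (1 / 8) / lam.dot(1.0 / degrees)
print(np.round(lam, 3), "design rate ~", round(rate, 3))
```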

19.
We introduce the fixed-rate bit stuff (FRB) algorithm for efficiently encoding and decoding maximum-runlength-limited (MRL) sequences. Our approach is based on a simple, variable-rate technique called bit stuffing. Bit stuffing produces near-capacity achieving codes for a wide range of constraints, but encoding is variable-rate, which is unacceptable in most applications. In this work, we design near-capacity fixed-rate codes using a three-step procedure. The fixed-length input data block first undergoes iterative preprocessing, followed by variable-rate bit stuffing, and finally dummy-bit padding to a fixed output length. The iterative preprocessing is key to achieving high encoding rates. We discuss rate computation for the proposed FRB algorithm and show that the asymptotic (in input block length) encoding rate is close to the average rate of the variable-rate bit stuff code. Then, we proceed to explore the effect of decreasing/increasing the number of preprocessing iterations. Finally, we derive a lower bound on the encoding rate with finite-length input blocks and tabulate the parameters required to design FRB codes with rate close to 100/101 and 200/201.
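The variable-rate bit-stuffing primitive at the core of the FRB construction is easy to state. The sketch below (assuming the constraint forbids more than k consecutive ones; the FRB preprocessing and padding stages are not modelled) shows the basic stuff/unstuff pair.

```python
# A minimal sketch of plain variable-rate bit stuffing for a "no more than k
# consecutive ones" constraint; the constraint choice is an assumption.
def stuff(bits, k):
    """Insert a 0 after every run of k consecutive ones."""
    out, run = [], 0
    for b in bits:
        out.append(b)
        run = run + 1 if b == 1 else 0
        if run == k:
            out.append(0)       # stuffed bit terminates the run
            run = 0
    return out

def unstuff(bits, k):
    """Remove the 0 that follows every run of k consecutive ones."""
    out, run, skip = [], 0, False
    for b in bits:
        if skip:                # this bit is the stuffed 0; drop it
            skip, run = False, 0
            continue
        out.append(b)
        run = run + 1 if b == 1 else 0
        if run == k:
            skip, run = True, 0
    return out

data = [1, 1, 1, 1, 0, 1, 1, 1]
coded = stuff(data, k=3)            # [1, 1, 1, 0, 1, 0, 1, 1, 1, 0]
assert unstuff(coded, k=3) == data  # stuffing is losslessly invertible
```

Because the number of stuffed bits depends on the data, the rate is variable, which is exactly the issue the iterative preprocessing and dummy-bit padding of the FRB algorithm are designed to address.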

20.
We study the average error probability performance of binary linear code ensembles when each codeword is divided into J subcodewords, each transmitted over one of J parallel channels. This model is widely accepted for a number of important practical channels and signaling schemes including block-fading channels, incremental redundancy retransmission schemes, and multicarrier communication techniques for frequency-selective channels. Our focus is on ensembles of good codes whose performance in a single-channel model is characterized by a threshold behavior, e.g., turbo and low-density parity-check (LDPC) codes. For a given good code ensemble, we investigate reliable channel regions which ensure reliable communications over parallel channels under maximum-likelihood (ML) decoding. To construct reliable regions, we study a modified 1961 Gallager bound for parallel channels. By allowing codeword bits to be randomly assigned to each component channel, the average parallel-channel Gallager bound is simplified to be a function of code weight enumerators and channel assignment rates. Special cases of this bound, the average union-Bhattacharyya (UB), Shulman-Feder (SF), simplified-sphere (SS), and modified Shulman-Feder (MSF) parallel-channel bounds, allow for describing reliable channel regions using simple functions of channel and code spectrum parameters. Parameters describing the channel are the average parallel-channel Bhattacharyya noise parameter, the average channel mutual information, and parallel Gaussian channel signal-to-noise ratios (SNRs). Code parameters include the union-Bhattacharyya noise threshold and the weight spectrum distance to the random binary code ensemble. Reliable channel regions of repeat-accumulate (RA) codes for parallel binary erasure channels (BECs) and of turbo codes for parallel additive white Gaussian noise (AWGN) channels are numerically computed and compared with simulation results based on iterative decoding. In addition, an examp…
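As a small worked example of the union-Bhattacharyya bound that appears above as a special case (here for a single channel, not the parallel-channel setting), the sketch below evaluates P_block <= sum_{w>0} A_w * gamma^w for the (7,4) Hamming code on a BSC; the code and crossover probability are illustrative choices, not the ensembles of the paper.

```python
# A minimal sketch of the single-channel union-Bhattacharyya bound
# P_block <= sum_{w>0} A_w * gamma^w, using the (7,4) Hamming weight
# enumerator and an assumed BSC crossover probability.
import math

def union_bhattacharyya(weight_enum, gamma):
    """Union-Bhattacharyya bound from a weight enumerator {w: A_w}."""
    return sum(a * gamma ** w for w, a in weight_enum.items() if w > 0)

hamming_7_4 = {0: 1, 3: 7, 4: 7, 7: 1}     # weight distribution of the (7,4) code
p = 0.01                                   # BSC crossover probability (assumed)
gamma = 2 * math.sqrt(p * (1 - p))         # Bhattacharyya parameter of the BSC
print(union_bhattacharyya(hamming_7_4, gamma))
```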
