Similar Articles
20 similar articles found (search time: 31 ms)
1.
The decomposition theory of matroids initiated by Paul Seymour in the 1980s has had an enormous impact on research in matroid theory. This theory, when applied to matrices over the binary field, yields a powerful decomposition theory for binary linear codes. In this paper, we give an overview of this code decomposition theory, and discuss some of its implications in the context of the recently discovered formulation of maximum-likelihood (ML) decoding of a binary linear code over a binary-input discrete memoryless channel as a linear programming problem. We translate matroid-theoretic results of Grötschel and Truemper from the combinatorial optimization literature to give examples of nontrivial families of codes for which the ML decoding problem can be solved in time polynomial in the length of the code. One such family consists of codes for which the codeword polytope is identical to the Koetter-Vontobel fundamental polytope derived from the entire dual code C⊥. However, we also show that such families of codes are not good in a coding-theoretic sense: either their dimension or their minimum distance must grow sublinearly with code length. As a consequence, decoding by linear programming, when applied to good codes, cannot avoid failing occasionally due to the presence of pseudocodewords.
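As a point of reference for the LP formulation discussed above, the sketch below performs exact ML decoding of a toy code by exhaustive search over its codewords; the generator matrix and LLR values are illustrative assumptions, and the LP approach relaxes precisely this minimization from the set of codewords to a polytope.

```python
# A minimal sketch of exact ML decoding by exhaustive search, assuming a small
# code given by a hypothetical generator matrix G and channel LLRs.
import itertools
import numpy as np

def ml_decode(G, llr):
    """Return the codeword minimizing the correlation cost sum_i llr[i]*x[i]."""
    k, n = G.shape
    best, best_cost = None, np.inf
    for msg in itertools.product([0, 1], repeat=k):   # all 2^k messages
        cw = np.mod(np.array(msg) @ G, 2)             # encode over GF(2)
        cost = float(llr @ cw)
        if cost < best_cost:
            best, best_cost = cw, cost
    return best

# toy [5,2] code; the LLRs favor the all-zero codeword except in one position
G = np.array([[1, 0, 1, 1, 0],
              [0, 1, 0, 1, 1]])
print(ml_decode(G, np.array([0.8, -0.3, 0.5, 0.9, 0.4])))
```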

2.
We consider the decoding problem for low-density parity-check (LDPC) codes and apply nonlinear programming methods. This extends previous work using linear programming (LP) to decode linear block codes. First, a multistage LP decoder based on the branch-and-bound method is proposed. This decoder makes use of the maximum-likelihood-certificate property of the LP decoder to refine the results when an error is reported. Second, we transform the original LP decoding formulation into a box-constrained quadratic programming form. Efficient linear-time parallel and serial decoding algorithms are proposed and their convergence properties are investigated. Extensive simulation studies are performed to assess the performance of the proposed decoders. The proposed multistage LP decoder outperforms the conventional sum-product (SP) decoder considerably for LDPC codes with short to medium block length. The proposed box-constrained quadratic programming decoder has lower complexity than the SP decoder and yields much better performance for LDPC codes with regular structure.
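A box-constrained quadratic program of this general shape can be attacked with a simple projected-gradient iteration. The sketch below solves a generic convex instance and does not reproduce the paper's specific decoding objective; Q, b, and the step-size rule are illustrative assumptions.

```python
# A hedged sketch of min 0.5*x'Qx + b'x subject to 0 <= x <= 1 by projected
# gradient descent; Q and b are toy data, not the decoder's actual objective.
import numpy as np

def box_qp(Q, b, iters=500):
    n = len(b)
    x = np.full(n, 0.5)                        # start at the center of the box
    step = 1.0 / np.linalg.eigvalsh(Q)[-1]     # 1/L step for a convex (PSD) Q
    for _ in range(iters):
        grad = Q @ x + b
        x = np.clip(x - step * grad, 0.0, 1.0)  # gradient step, then project
    return x

Q = np.array([[2.0, 0.5], [0.5, 1.0]])
b = np.array([-1.0, 0.3])
print(np.round(box_qp(Q, b), 3))               # converges to roughly (0.5, 0)
```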

3.
The problem of low-complexity linear programming (LP) decoding of low-density parity-check (LDPC) codes is considered. An iterative algorithm, similar to min-sum and belief propagation, for efficient approximate solution of this problem was proposed by Vontobel and Koetter. In this paper, the convergence rate and computational complexity of this algorithm are studied using a scheduling scheme that we propose. In particular, we are interested in obtaining a feasible vector in the LP decoding problem that is close to optimal in the following sense: the distance, normalized by the block length, between the minimum and the objective function value of this approximate solution can be made arbitrarily small. It is shown that such a feasible vector can be obtained with a computational complexity which scales linearly with the block length. Combined with previous results showing that the LP decoder can correct some fixed fraction of errors, we conclude that this error correction can be achieved with linear computational complexity. This is achieved by first applying the iterative LP decoder, which decodes the transmitted codeword up to an arbitrarily small fraction of erroneous bits, and then correcting the remaining errors using some standard method. These conclusions are also extended to generalized LDPC codes.

4.
Two asymptotic upper bounds on the information rate of a tree code as a function of its feedback decoding minimum distance are presented. These bounds are generalizations of the linear programming bounds for binary block codes proved by McEliece et al., and they are derived from linear programming problems based on Delsarte's theory of association schemes.

5.
Linear Network Error Correction Codes in Packet Networks
In this paper, we study basic properties of linear network error correction codes, their construction, and their error correction capability for various kinds of errors. Our discussion is restricted to the single-source multicast case. We define the minimum distance of a network error correction code, which plays the same role as it does in classical coding theory. We construct codes that can correct errors up to the full error correction capability specified by the Singleton bound for network error correction codes recently established by Cai and Yeung. We propose a decoding principle for network error correction codes, based on which we introduce two decoding algorithms and analyze their performance. We formulate the global kernel error correction problem and characterize the error correction capability of codes for this kind of error.

6.
The moderate complexity of low-density parity-check (LDPC) codes under iterative decoding is attributed to the sparseness of their parity-check matrices. It is therefore of interest to consider how sparse the parity-check matrices of binary linear block codes can be, as a function of the gap between their achievable rates and the channel capacity. This issue was addressed by Sason and Urbanke, and it is revisited in this paper. The remarkable performance of LDPC codes under practical and suboptimal decoding algorithms motivates the assessment of the inherent loss in performance attributed to the structure of the code or ensemble under maximum-likelihood (ML) decoding, and of the additional loss imposed by the suboptimality of the decoder. These issues are addressed by obtaining upper bounds on the achievable rates of binary linear block codes, and lower bounds on the asymptotic density of their parity-check matrices, as a function of the gap between their achievable rates and the channel capacity; these bounds hold under ML decoding and hence for any suboptimal decoding algorithm. The new bounds improve on previously reported results by Burshtein and by Sason and Urbanke, and they hold for transmission over an arbitrary memoryless binary-input output-symmetric (MBIOS) channel. The significance of these information-theoretic bounds lies in assessing the tradeoff between the asymptotic performance of LDPC codes and their decoding complexity (per iteration) under message-passing decoding. They are also helpful in studying the potential achievable rates of ensembles of LDPC codes under optimal decoding; by comparing these thresholds with those calculated by the density evolution technique, one obtains a measure of the asymptotic suboptimality of iterative decoding algorithms.

7.
Minimum-distance bounds by graph analysis
The parity-check matrix of a linear code is used to define a bipartite code constraint (Tanner) graph in which bit nodes are connected to parity-check nodes. The connectivity properties of this graph are analyzed using both local connectivity and the eigenvalues of the associated adjacency matrix. A simple lower bound on the minimum distance of the code is expressed in terms of the two largest eigenvalues. For a more powerful bound, local properties of the subgraph corresponding to a minimum-weight word in the code are used to create an optimization problem whose solution is a lower bound on the code's minimum distance. Linear programming gives one such bound. The technique is illustrated by applying it to sparse block codes with parameters [7,3,4] and [42,23,6].
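The spectral side of this analysis amounts to forming the bipartite adjacency matrix of the Tanner graph and extracting its two largest eigenvalues. A minimal sketch is given below, assuming NumPy and using the (7,4) Hamming parity-check matrix as a toy stand-in; the paper's bound itself, a function of these two eigenvalues, is not reproduced here.

```python
# A small sketch: build the Tanner-graph adjacency matrix of a parity-check
# matrix H and return its two largest eigenvalues.
import numpy as np

def tanner_spectrum(H):
    m, n = H.shape
    # adjacency of the bipartite graph with n bit nodes and m check nodes
    A = np.block([[np.zeros((n, n)), H.T],
                  [H, np.zeros((m, m))]])
    eig = np.sort(np.linalg.eigvalsh(A))[::-1]
    return eig[0], eig[1]            # the two largest eigenvalues

# toy example: parity-check matrix of the (7,4) Hamming code
H = np.array([[1, 0, 1, 0, 1, 0, 1],
              [0, 1, 1, 0, 0, 1, 1],
              [0, 0, 0, 1, 1, 1, 1]])
print(tanner_spectrum(H))
```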

8.
We show that the minimum distance d of a linear code is not approximable to within any constant factor in random polynomial time (RP), unless nondeterministic polynomial time (NP) equals RP. We also show that the minimum distance is not approximable to within an additive error that is linear in the block length n of the code. Under the stronger assumption that NP is not contained in random quasi-polynomial time (RQP), we show that the minimum distance is not approximable to within the factor 2^(log^(1-ε) n), for any ε > 0. Our results hold for codes over any finite field, including binary codes. In the process, we show that it is hard to find approximately nearest codewords even if the number of errors exceeds the unique decoding radius d/2 by only an arbitrarily small fraction εd. We also prove the hardness of the nearest codeword problem for asymptotically good codes, provided the number of errors exceeds (2/3)d. Our results for the minimum distance problem strengthen (though using stronger assumptions) a previous result of Vardy (1997), who showed that the minimum distance cannot be computed exactly in deterministic polynomial time (P), unless P = NP. Our results are obtained by adapting proofs of analogous results for integer lattices due to Ajtai (1998) and Micciancio (see SIAM J. Computing, vol. 30, no. 6, pp. 2008-2035, 2001). A critical component in the adaptation is our use of linear codes that perform better than random (linear) codes.

9.
Capacity-approaching protograph codes
This paper discusses the construction of protograph-based low-density parity-check (LDPC) codes. Emphasis is placed on protograph ensembles whose typical minimum distance grows linearly with block size. Asymptotic performance analysis for both weight enumeration and iterative decoding threshold determination is provided and applied to a series of code constructions. Construction techniques that yield both low thresholds and linear minimum distance growth are introduced by way of example throughout. The paper also examines implementation strategies for high-throughput decoding derived from first principles of belief propagation on bipartite graphs.
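A minimal sketch of the copy-and-permute (lifting) step underlying protograph constructions is given below; the base matrix, lift size, and random permutations are illustrative assumptions rather than an optimized design, and the sketch assumes a binary base matrix (no parallel edges).

```python
# A hedged sketch of lifting a binary protograph base matrix into a full
# parity-check matrix by replacing each 1 with a random ZxZ permutation matrix.
import numpy as np

rng = np.random.default_rng(0)

def lift(B, Z):
    m, n = B.shape
    H = np.zeros((m * Z, n * Z), dtype=int)
    for i in range(m):
        for j in range(n):
            if B[i, j]:
                perm = rng.permutation(Z)
                H[i*Z:(i+1)*Z, j*Z:(j+1)*Z] = np.eye(Z, dtype=int)[perm]
    return H

B = np.array([[1, 1, 1, 0],      # toy base (proto)matrix
              [0, 1, 1, 1]])
print(lift(B, Z=4).shape)        # (8, 16) lifted parity-check matrix
```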

10.
Gérard Battail, Annales des Télécommunications, 1989, 44(7-8): 392-404
The conventional criterion of minimum distance is shown to be irrelevant to long block codes. As a criterion better fitted to a code of large length n, we propose the cross-entropy of its normalized weight distribution with respect to the binomial weight distribution of an n-tuple whose symbols are chosen independently and equiprobably at random. The iterated product of parity-check codes appears as a means for designing, by decimation among the whole set of n-tuples, long codes that are good for this criterion although poor in terms of minimum distance. Computed results and those from simulated decoding are presented and compared with theoretical limits; they are shown to be close to the channel cutoff rate R0.
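As a toy illustration of the criterion proposed above, the sketch below (assuming NumPy and SciPy, with a hypothetical [6,3] generator matrix and a base-2 logarithm) computes the cross-entropy of a small code's normalized weight distribution with respect to the binomial distribution of random n-tuples.

```python
# A small sketch of the cross-entropy criterion for a toy code.
import itertools
import numpy as np
from scipy.special import comb

def weight_cross_entropy(G):
    k, n = G.shape
    counts = np.zeros(n + 1)
    for msg in itertools.product([0, 1], repeat=k):          # enumerate codewords
        w = int(np.mod(np.array(msg) @ G, 2).sum())
        counts[w] += 1
    p = counts / counts.sum()                                 # normalized weight distribution
    q = np.array([comb(n, w) for w in range(n + 1)]) / 2**n   # binomial reference
    mask = p > 0
    return float(-(p[mask] * np.log2(q[mask])).sum())         # cross-entropy H(p, q)

G = np.array([[1, 0, 0, 1, 1, 0],
              [0, 1, 0, 1, 0, 1],
              [0, 0, 1, 0, 1, 1]])   # toy [6,3] code
print(round(weight_cross_entropy(G), 3))
```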

11.
The complexity of brute-force encoding of low-density parity-check (LDPC) codes is proportional to the square of the block length. Richardson and Urbanke have proposed efficient encoding algorithms for LDPC codes. These algorithms permute the parity-check matrix of the code iteratively, so that it becomes approximately lower triangular. We propose a new approach for efficient encoding of LDPC codes in which we modify the code ensemble to force an approximate lower triangular structure, thus eliminating the need to apply the algorithms of Richardson and Urbanke to this ensemble. We prove that the new ensemble has the same asymptotic threshold as the corresponding standard ensemble. The new ensemble can be used for linear-time encoding of an arbitrary code profile. Computer simulations confirm that the performances of the standard and new ensembles are also very similar for finite-length codes.

12.
We propose an algorithm for bounded minimum distance decoding of (partial) unit memory codes up to half the “designed” extended row distance. It makes use of a reduced trellis whose nodes are found by bounded minimum distance decoding of the block codes used in the unit memory code. The results can be extended to general multimemory codes. The complexity of this algorithm is upper bounded by 2(d̄_1^r - 2d_α) times the complexity of the bounded minimum distance decoder of the block codes in the unit memory code. Here d_α is the linear increase (slope) of the designed extended row distance d̄_i^r.

13.
New upper bounds on the rate of low-density parity-check (LDPC) codes as a function of the minimum distance of the code are derived. The bounds apply to regular LDPC codes, and sometimes also to right-regular LDPC codes. Their derivation is based on combinatorial arguments and linear programming. The new bounds improve upon the previous bounds due to Burshtein et al. It is proved that at least for high rates, regular LDPC codes with full-rank parity-check matrices have worse relative minimum distance than the one guaranteed by the Gilbert-Varshamov bound.

14.
It was suggested by Battail that a good long linear code should have a weight distribution close to that of random coding, rather than a large minimum distance, and that a turbo code should also be designed using a random-like criterion. In this paper, we first show that the weight distribution of a high-rate linear block code is approximately Gaussian if the code rate is close enough to one, and then proceed to construct a low-rate linear block code with approximately Gaussian weight distribution by using the turbo-coding technique. We give a sufficient condition under which the weight distribution of multicomponent turbo block (MCTB) codes (respectively, multicomponent product (MCP) codes) can asymptotically approach that of random codes, and further develop two classes of MCTB codes (MCP codes) satisfying this condition. Simulation results show that MCTB codes (MCP codes) having asymptotically Gaussian weight distribution can asymptotically approach Shannon's capacity limit. MCTB codes based on single parity-check (SPC) codes have a far poorer minimum distance than MCP codes based on SPC codes, but we show by simulation that when the bit-error rate is in the important range of 10^-1 to 10^-5, these codes can still offer similar performance on the additive white Gaussian noise channel, as long as the code length of the SPC codes is not very short. These facts confirm in a more precise way Battail's inference about the "nonimportance" of the minimum distance for a long code.
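For concreteness, the sketch below encodes a toy two-component product of SPC codes, the simplest relative of the MCP construction discussed above; the 4x4 information block is an illustrative assumption.

```python
# A toy sketch of a two-dimensional product of single parity-check (SPC) codes:
# append an even-parity bit to every row and every column of the information block.
import numpy as np

def spc_product_encode(info):
    k = info.shape[0]
    cw = np.zeros((k + 1, k + 1), dtype=int)
    cw[:k, :k] = info
    cw[:k, k] = info.sum(axis=1) % 2    # row parities
    cw[k, :k] = info.sum(axis=0) % 2    # column parities
    cw[k, k] = info.sum() % 2           # parity on the parities
    return cw

info = np.array([[1, 0, 1, 1],
                 [0, 1, 1, 0],
                 [1, 1, 0, 0],
                 [0, 0, 1, 1]])
print(spc_product_encode(info))
```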

15.
We study the decoding problem when a binary linear perfect or quasi-perfect code is transmitted over a binary channel with additive Markov noise. After examining the properties of the channel block transition distribution, we derive sufficient conditions under which strict maximum-likelihood decoding is equivalent to strict minimum Hamming distance decoding when the code is perfect. Additionally, we show a near-equivalence relationship between strict maximum-likelihood and strict minimum-distance decoding for quasi-perfect codes for a range of channel parameters and of the code's minimum distance. As a result, an improved (complete) minimum distance decoder is proposed, and simulations illustrating its benefits are provided.
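A minimal sketch of strict minimum Hamming distance decoding for a perfect code is given below, assuming the standard (7,4) Hamming parity-check matrix whose j-th column is the binary representation of j, so that the syndrome directly names the error position.

```python
# Syndrome-based minimum-distance decoding of the (7,4) Hamming code.
import numpy as np

H = np.array([[1, 0, 1, 0, 1, 0, 1],
              [0, 1, 1, 0, 0, 1, 1],
              [0, 0, 0, 1, 1, 1, 1]])

def hamming_decode(r):
    s = H @ r % 2                           # syndrome
    pos = int(s[0] + 2 * s[1] + 4 * s[2])   # 0 means "no error detected"
    if pos:
        r = r.copy()
        r[pos - 1] ^= 1                     # flip the bit of the nearest codeword
    return r

r = np.array([1, 1, 0, 1, 1, 0, 1])         # codeword 1101001 with bit 5 flipped
print(hamming_decode(r))                    # recovers 1101001
```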

16.
A novel analytical approach to performance evaluation of soft-decoding algorithms for binary linear block codes based on probabilistic iterative error correction is presented. A convergence condition establishing the critical noise rate below which the expected bit-error probability tends to zero is theoretically derived. It explains the capability of iterative probabilistic decoding of binary linear block codes with sparse parity-check matrices to correct, with probability close to one, error patterns with a number of errors (far) beyond half the code's minimum distance. Systematic experiments conducted on truncated simplex codes seem to agree well with the convergence condition. The method may also be interesting for the theoretical analysis of the so-called turbo codes.
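As the simplest relative of the iterative decoders analyzed above, the sketch below runs plain hard-decision bit flipping on a sparse parity-check matrix; the probabilistic variant studied in the paper is not reproduced, and H and the received word are illustrative.

```python
# A hedged sketch of hard-decision bit-flipping decoding on a sparse H.
import numpy as np

def bit_flip_decode(H, r, max_iter=20):
    r = r.copy()
    for _ in range(max_iter):
        syn = H @ r % 2                       # which checks are unsatisfied
        if not syn.any():
            break                             # a valid codeword was reached
        votes = H.T @ syn                     # unsatisfied checks touching each bit
        r[votes == votes.max()] ^= 1          # flip the most-suspect bits
    return r

H = np.array([[1, 1, 0, 1, 0, 0],
              [0, 1, 1, 0, 1, 0],
              [1, 0, 0, 0, 1, 1],
              [0, 0, 1, 1, 0, 1]])
r = np.array([0, 0, 0, 0, 0, 1])              # all-zero codeword with one bit flipped
print(bit_flip_decode(H, r))
```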

17.
Efficient encoding of low-density parity-check codes
Low-density parity-check (LDPC) codes can be considered serious competitors to turbo codes in terms of performance and complexity, and they are based on a similar philosophy: constrained random code ensembles and iterative decoding algorithms. We consider the encoding problem for LDPC codes, and more generally for codes specified by sparse parity-check matrices. We show how to exploit the sparseness of the parity-check matrix to obtain efficient encoders. For the (3,6)-regular LDPC code, for example, the complexity of encoding is essentially quadratic in the block length. However, we show that the associated coefficient can be made quite small, so that encoding codes even of length n ≃ 100000 is still quite practical. More importantly, we show that “optimized” codes actually admit linear-time encoding.
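The core step that such encoders build on is forward substitution once the parity part of H is (approximately) lower triangular. The sketch below shows that step in isolation for a hypothetical, exactly lower-triangular H; the approximate-lower-triangular case of the paper adds a small correction on top of this.

```python
# A small sketch of encoding with H = [A | T], where T is unit lower triangular:
# the parity bits p satisfy A*s + T*p = 0 (mod 2) and follow by forward substitution.
import numpy as np

def encode_lower_triangular(H, s):
    m, n = H.shape
    A, T = H[:, :n - m], H[:, n - m:]      # T is assumed unit lower triangular
    b = A @ s % 2
    p = np.zeros(m, dtype=int)
    for i in range(m):                     # forward substitution over GF(2)
        p[i] = (b[i] + T[i, :i] @ p[:i]) % 2
    return np.concatenate([s, p])

# toy H whose right 3x3 part is unit lower triangular (illustrative only)
H = np.array([[1, 1, 0, 1, 0, 0],
              [0, 1, 1, 1, 1, 0],
              [1, 0, 1, 0, 1, 1]])
s = np.array([1, 0, 1])
c = encode_lower_triangular(H, s)
print(c, H @ c % 2)                        # the syndrome check prints all zeros
```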

18.
LDPC block and convolutional codes based on circulant matrices
A class of algebraically structured quasi-cyclic (QC) low-density parity-check (LDPC) codes and their convolutional counterparts is presented. The QC codes are described by sparse parity-check matrices composed of blocks of circulant matrices. The sparse parity-check representation allows for practical graph-based iterative message-passing decoding. Based on the algebraic structure, bounds on the girth and minimum distance of the codes are found, and several possible encoding techniques are described. The performance of the QC LDPC block codes compares favorably with that of randomly constructed LDPC codes for short to moderate block lengths. The performance of the LDPC convolutional codes is superior to that of the QC codes on which they are based; this is the limiting performance obtained by increasing the circulant size of the base QC code. Finally, a continuous decoding procedure for the LDPC convolutional codes is described.
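A small sketch of how such a parity-check matrix is assembled from circulant permutation matrices is given below; the exponent matrix and circulant size are illustrative assumptions, not taken from the paper.

```python
# A hedged sketch of expanding an exponent matrix into a QC parity-check matrix:
# each entry s >= 0 becomes the identity cyclically shifted by s, and -1 marks
# an all-zero block.
import numpy as np

def qc_ldpc(shifts, Z):
    blocks = []
    for row in shifts:
        blocks.append([np.zeros((Z, Z), dtype=int) if s < 0
                       else np.roll(np.eye(Z, dtype=int), s, axis=1)
                       for s in row])
    return np.block(blocks)

shifts = [[0, 1, 2, -1],
          [2, -1, 0, 1],
          [-1, 2, 1, 0]]
H = qc_ldpc(shifts, Z=5)
print(H.shape, H.sum(axis=0))   # (15, 20); column weights follow the exponent matrix
```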

19.
We show that for low-density parity-check (LDPC) codes whose Tanner graphs have sufficient expansion, the linear programming (LP) decoder of Feldman, Karger, and Wainwright can correct a constant fraction of errors. A random graph will have sufficient expansion with high probability, and recent work shows that such graphs can be constructed efficiently. A key element of our method is the use of a dual witness: a zero-valued dual solution to the decoding linear program whose existence proves decoding success. We show that as long as no more than a certain constant fraction of the bits are flipped by the channel, we can find a dual witness. This new method can be used for proving bounds on the performance of any LP decoder, even in a probabilistic setting. Our result implies that the word error rate of the LP decoder decreases exponentially in the code length under the binary-symmetric channel (BSC). This is the first such error bound for LDPC codes using an analysis based on "pseudocodewords." Recent work by Koetter and Vontobel shows that LP decoding and min-sum decoding of LDPC codes are closely related by the "graph cover" structure of their pseudocodewords; in their terminology, our result implies that there exist families of LDPC codes for which the minimum BSC pseudoweight grows linearly in the block length.

20.
Using Linear Programming to Decode Binary Linear Codes
A new method is given for performing approximate maximum-likelihood (ML) decoding of an arbitrary binary linear code based on observations received from any discrete memoryless symmetric channel. The decoding algorithm is based on a linear programming (LP) relaxation that is defined by a factor graph or parity-check representation of the code. The resulting "LP decoder" generalizes our previous work on turbo-like codes. A precise combinatorial characterization of when the LP decoder succeeds is provided, based on pseudocodewords associated with the factor graph. Our definition of a pseudocodeword unifies other such notions known for iterative algorithms, including "stopping sets," "irreducible closed walks," "trellis cycles," "deviation sets," and "graph covers." The fractional distance d_frac of a code is introduced, which is a lower bound on the classical distance. It is shown that the efficient LP decoder will correct up to ⌈d_frac/2⌉ - 1 errors and that there are codes with d_frac = Ω(n^(1-ε)). An efficient algorithm to compute the fractional distance is presented. Experimental evidence shows a similar performance on low-density parity-check (LDPC) codes between LP decoding and the min-sum and sum-product algorithms. Methods for tightening the LP relaxation to improve performance are also provided.
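A minimal sketch of this LP relaxation is given below, assuming SciPy; the parity-check matrix and LLR values are illustrative, and the odd-subset parity inequalities are one standard way to describe the relaxed polytope rather than the paper's exact implementation.

```python
# A hedged sketch of LP decoding for a short code: minimize the LLR cost over a
# relaxation of the codeword polytope built from odd-subset parity inequalities.
import itertools
import numpy as np
from scipy.optimize import linprog

def lp_decode(H, llr):
    m, n = H.shape
    A_ub, b_ub = [], []
    for j in range(m):
        nbr = np.flatnonzero(H[j])                 # bits in check j
        for r in range(1, len(nbr) + 1, 2):        # odd-sized subsets S of the check
            for S in itertools.combinations(nbr, r):
                row = np.zeros(n)
                row[list(S)] = 1.0
                row[[i for i in nbr if i not in S]] = -1.0
                A_ub.append(row)
                b_ub.append(len(S) - 1)            # sum_S x - sum_rest x <= |S| - 1
    res = linprog(c=llr, A_ub=np.array(A_ub), b_ub=np.array(b_ub),
                  bounds=[(0.0, 1.0)] * n, method="highs")
    return res.x   # integral solution: ML certificate; fractional: pseudocodeword

# toy example: (7,4) Hamming code, all-zero codeword with one unreliable bit
H = np.array([[1, 0, 1, 0, 1, 0, 1],
              [0, 1, 1, 0, 0, 1, 1],
              [0, 0, 0, 1, 1, 1, 1]])
llr = np.array([1.2, 0.9, -0.4, 1.1, 0.8, 1.0, 0.7])
print(np.round(lp_decode(H, llr), 3))              # recovers the all-zero codeword
```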
