Similar Documents
20 similar documents retrieved (search time: 31 ms)
1.
Let a q-ary linear (n,k)-code be used over a memoryless channel. We design a soft-decision decoding algorithm that tries to locate a few most probable error patterns on a shorter length s ∈ [k,n]. First, we take s cyclically consecutive positions starting from any initial point. Then we cut the subinterval of length s into two parts and examine the T most plausible error patterns on each part. To obtain codewords of a punctured (s,k)-code, we try to match the syndromes of both parts. Finally, the codewords of the (s,k)-code obtained in this way are re-encoded to find the most probable codeword on the full length n. For any long linear code, the decoding error probability of this algorithm can be made arbitrarily close to the probability of its maximum-likelihood (ML) decoding given sufficiently large T. By optimizing s, we prove that this near-ML decoding can be achieved by using only T ≈ q^{(n-k)k/(n+k)} error patterns. For most long linear codes, this optimization also gives about T re-encoded codewords. As a result, we obtain the lowest complexity order of q^{(n-k)k/(n+k)} known to date for near-ML decoding. For codes of rate 1/2, the new bound grows as the cubic root of the general trellis complexity q^{min{n-k,k}}. For short blocks of length 63, the algorithm reduces the complexity of the trellis design by a few decimal orders.
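To make the cubic-root claim concrete, the following worked equation (the arithmetic implied by the abstract, not text taken from it) substitutes k = n/2 into the complexity exponent:

```latex
% For a rate-1/2 code, k = n/2 and n - k = n/2, so the exponent of the bound
% T \approx q^{(n-k)k/(n+k)} becomes
\[
\frac{(n-k)\,k}{n+k} = \frac{(n/2)(n/2)}{3n/2} = \frac{n}{6},
\qquad\text{hence}\qquad
T \approx q^{n/6} = \Bigl(q^{\min\{n-k,\,k\}}\Bigr)^{1/3} = \bigl(q^{n/2}\bigr)^{1/3}.
\]
```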

2.
An efficient algorithm for calculating the ith bit error probability of a binary linear code over the binary symmetric channel (BSC) is presented. It is proved that the exact ith bit error probability of maximum-likelihood (ML) decoding, bounded distance decoding, and symbol-wise maximum a posteriori probability (MAP) decoding can be obtained with time complexity O(n·2^{n-k}), where n and k denote the length and the dimension of the target code. The proposed methods are applicable to any binary linear code with redundancy up to nearly 25-30 bits on a typical personal computer.
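As a companion to this abstract, the sketch below computes the exact i-th bit error probability of ML decoding over a BSC by brute-force enumeration for a toy (5,2) code. The generator matrix, function names, and the exhaustive approach are illustrative assumptions; this is far costlier than the paper's O(n·2^{n-k}) method and only serves to make the computed quantity concrete.

```python
# Brute-force reference for the i-th bit error probability of ML decoding over
# a BSC with crossover probability p, for a tiny code. NOT the paper's
# efficient O(n * 2^(n-k)) algorithm -- only usable for very short codes.
from itertools import product

import numpy as np

G = np.array([[1, 0, 1, 1, 0],       # toy (5,2) generator matrix,
              [0, 1, 0, 1, 1]]) % 2  # chosen only for illustration

n, k = G.shape[1], G.shape[0]
codebook = [(tuple(np.dot(m, G) % 2), m) for m in product([0, 1], repeat=k)]

def ml_decode(r):
    """Return the message of the codeword closest to r in Hamming distance."""
    return min(codebook, key=lambda cm: sum(a != b for a, b in zip(cm[0], r)))[1]

def bit_error_prob(i, p):
    """Exact probability that information bit i is decoded incorrectly."""
    total = 0.0
    for msg in product([0, 1], repeat=k):            # uniform transmitted messages
        c = tuple(np.dot(msg, G) % 2)
        for e in product([0, 1], repeat=n):          # all 2^n BSC error patterns
            r = tuple((ci + ei) % 2 for ci, ei in zip(c, e))
            pe = p ** sum(e) * (1 - p) ** (n - sum(e))
            if ml_decode(r)[i] != msg[i]:
                total += pe / 2 ** k
    return total

print(bit_error_prob(0, 0.05))
```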

3.
We suggest a decoding algorithm for q-ary linear codes, which we call supercode decoding. It achieves an error probability that approaches the error probability of minimum-distance decoding as the length of the code grows; for n→∞ the algorithm has maximum-likelihood performance. The asymptotic complexity of supercode decoding is exponentially smaller than the complexity of all other known methods. The algorithm develops the ideas of covering-set decoding and split syndrome decoding.

4.
In this paper, we prove the following two results that expose some combinatorial limitations to list decoding Reed-Solomon codes. 1) Given n distinct elements α1,...,αn from a field F, and n subsets S1,...,Sn of F, each of size at most l, the list decoding algorithm of Guruswami and Sudan can in polynomial time output all polynomials p of degree at most k that satisfy p(αi) ∈ Si for every i, as long as l < n/k. 2) For Reed-Solomon codes whose dimension k grows as n^δ for small enough δ, we exhibit an explicit received word with a superpolynomial number of Reed-Solomon codewords that agree with it on (2-ε)k locations, for any desired ε > 0 (agreement of k is trivial to achieve). Such a bound was known earlier only for a nonexplicit center. Finding explicit bad list decoding configurations is of significant interest; for example, the best known rate versus distance tradeoff, due to Xing, is based on a bad list decoding configuration for algebraic-geometric codes, which is unfortunately not explicitly known.

5.
Minimum distance decoding (MDD) for a general error-correcting linear code is a hard computational problem that recently has been shown to be NP-hard. The complexity of known decoding algorithms is determined by min(2^k, 2^{n-k}), where n is the code length and k is the number of information digits. Two new algorithms are suggested that reduce substantially the complexity of MDD. The algorithms use a new concept of zero neighbors, a special set of codewords. Only these codewords (which can be computed in advance) should be stored and used in the decoding procedure. The number of zero neighbors is shown to be very small compared with min(2^k, 2^{n-k}) for n ≫ 1 and a wide range of code rates R = k/n. For example, for R ≈ 0.5 this number grows approximately as the square root of the number of codewords.

6.
Algorithms for the soft-decision decoding of linear block codes are presented. These algorithms perform a reduced-complexity search through a trellis derived from the parity-check matrix of an (n, k) linear block code. The computational complexity of the algorithms is considerably reduced from that of a full maximum-likelihood algorithm. We demonstrate the trade-off between complexity and efficiency of the algorithms through computer simulation.
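The abstract does not describe how the trellis is derived from the parity-check matrix; a standard way to do it is the syndrome (Wolf) trellis, in which the state after position i is the partial syndrome accumulated so far. The sketch below builds the state profile of such a trellis for a toy (6,3) code; the matrix and all names are illustrative assumptions, not material from the paper.

```python
# Syndrome (Wolf) trellis construction for a binary linear code given its
# parity-check matrix H: the state after symbol i is the partial syndrome
# sum of c_j * h_j over the first i columns. Illustrative sketch only.
import numpy as np

H = np.array([[1, 1, 0, 1, 0, 0],   # toy (6,3) parity-check matrix,
              [0, 1, 1, 0, 1, 0],   # chosen only for illustration
              [1, 0, 1, 0, 0, 1]], dtype=int)

def build_trellis(H):
    r, n = H.shape
    states = [{tuple(np.zeros(r, dtype=int))}]       # zero state at depth 0
    edges = []                                       # (depth, from_state, bit, to_state)
    for i in range(n):
        nxt, level_edges = set(), []
        for s in states[-1]:
            for bit in (0, 1):
                t = tuple((np.array(s) + bit * H[:, i]) % 2)
                nxt.add(t)
                level_edges.append((i, s, bit, t))
        states.append(nxt)
        edges.append(level_edges)
    # A full implementation would additionally prune states that cannot reach
    # the all-zero syndrome at depth n (only such paths are codewords).
    return states, edges

states, edges = build_trellis(H)
print([len(s) for s in states])   # state-space profile, at most 2^(n-k) = 8
```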

7.
The Z4-linear Goethals-like code of length 2^m has 2^{2^{m+1}-3m-2} codewords and minimum Lee distance 8 for any odd integer m ⩾ 3. We present an algebraic decoding algorithm for all Z4-linear Goethals-like codes C_k introduced by Helleseth et al. (1995, 1996). We use Dickson polynomials and their properties to solve the syndrome equations.

8.
The overall number of nearest neighbors in bounded-distance decoding (BDD) algorithms is given by N_{0,eff} = N_0 + N_{BDD}, where N_{BDD} denotes the number of additional, non-codeword neighbors that are generated during the (suboptimal) decoding process. We identify and enumerate the nearest neighbors associated with the original generalized minimum distance (GMD) and Chase (1972) decoding algorithms. After careful examination of the decision regions of these algorithms, we derive an approximate probability ratio between the error contribution of a noncodeword neighbor (one of the N_{BDD} points) and a codeword nearest neighbor. For Chase algorithm 1 it is shown that the contribution to the error probability of a noncodeword nearest neighbor is a factor of 2^{d-1} less than the contribution of a codeword, while for Chase algorithm 2 the factor is 2^{⌊d/2⌋-1}, d being the minimum Hamming distance of the code. For Chase algorithm 3 and GMD, a recursive procedure for calculating this ratio, which turns out to be nonexponential in d, is presented. This procedure can also be used to identify the specific error patterns associated with Chase algorithm 3 and GMD. Utilizing the probability ratio, we propose an improved approximate upper bound on the probability of error based on the union-bound approach. Simulation results are given to demonstrate and support the analytical derivations.
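For readers unfamiliar with the Chase algorithms analyzed here, the sketch below outlines the classic Chase algorithm 2 procedure: flip every combination of the ⌊d/2⌋ least reliable positions, hard-decode each test pattern, and keep the candidate with the best correlation metric. The hard-decision decoder is a hypothetical stub and all names are illustrative; this is generic background, not the error-pattern enumeration performed in the paper.

```python
# Chase algorithm 2 (sketch): generate 2^(d//2) test patterns by flipping the
# least reliable received positions, hard-decode each pattern, and return the
# candidate codeword with the best correlation to the soft received vector.
from itertools import product

import numpy as np

def chase2_decode(r_soft, hard_decoder, d):
    """r_soft: real received values (BPSK, bit b mapped to (-1)**b);
    hard_decoder: maps a 0/1 vector to a codeword or None (assumed stub);
    d: minimum Hamming distance of the code."""
    r_soft = np.asarray(r_soft, dtype=float)
    r_hard = (r_soft < 0).astype(int)                    # hard decisions
    weak = np.argsort(np.abs(r_soft))[: d // 2]          # least reliable positions
    best, best_metric = None, -np.inf
    for flips in product((0, 1), repeat=len(weak)):
        test = r_hard.copy()
        test[weak] ^= np.array(flips, dtype=int)         # apply test pattern
        cand = hard_decoder(test)                        # bounded-distance decode
        if cand is None:
            continue                                     # decoding failure
        metric = float(np.dot(r_soft, 1 - 2 * np.asarray(cand)))  # correlation
        if metric > best_metric:
            best, best_metric = cand, metric
    return best
```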

9.
We propose an algorithm for bounded minimum distance decoding of (partial) unit memory codes up to half the "designed" extended row distance. It makes use of a reduced trellis whose nodes are found by bounded minimum distance decoding of the block codes used in the unit memory code. The results can be extended to general multimemory codes. The complexity of this algorithm is upper bounded by 2(d̄_1^r − 2d_α) times the complexity of the bounded minimum distance decoder of the block codes in the unit memory code. Here d_α is the linear increase of the designed extended row distance d̄_i^r.

10.
We present a new soft-decision majority decoding algorithm for Reed-Muller codes RM(r,m). First, the reliabilities of the 2^m transmitted symbols are recalculated into the reliabilities of the 2^{m-r} parity checks that represent each information bit. In turn, information bits are obtained by a weighted majority vote that gives more weight to more reliable parity checks. It is proven that for long low-rate codes RM(r,m), our soft-decision algorithm outperforms its conventional hard-decision counterpart by 10 log_10(π/2) ≈ 2 dB at any given output error probability. For fixed code rate R and m→∞, our algorithm increases the correcting capability of soft-decision bounded distance decoding almost 2^{r/2} times.

11.
The error locations for an algebraic-geometric code C*(D,mP) are exactly the common zeros (that is, a projective variety V(I)) of a set (ideal) I of error-locator functions. The paper gives a one-dimensional Berlekamp-Massey version of the Feng-Rao (1993) algorithm for decoding algebraic-geometric codes C*(D,mP). This produces a generating set for I (as an ideal) of size at most ρ (the smallest positive pole order at P of any function in L(mP)) relative to any error of weight at most e < ½δ*_m, with δ*_m := m − 2g + 2 the designed minimum distance of the code. This algorithm requires at most c(ρm² + Nρm + ρ²m) field multiplications, with c a small constant and N a small constant function of the curve. The error positions are then given as exactly the common zeros of the generator functions of the error-locator ideal I.

12.
In this paper, we introduce stopping sets for iterative row-column decoding of product codes using optimal constituent decoders. When transmitting over the binary erasure channel (BEC), iterative row-column decoding of product codes using optimal constituent decoders will either be successful or stop in the unique maximum-size stopping set that is contained in the (initial) set of erased positions. Let C_p denote the product code of two binary linear codes C_c and C_r with minimum distances d_c and d_r and second generalized Hamming weights d_2(C_c) and d_2(C_r), respectively. We show that the size s_min of the smallest noncodeword stopping set is at least min(d_r·d_2(C_c), d_c·d_2(C_r)) > d_r·d_c, where the inequality follows from the Griesmer bound. If there are no codewords in C_p with support set S, where S is a stopping set, then S is said to be a noncodeword stopping set. An immediate consequence is that the erasure probability after iterative row-column decoding of (finite-length) product codes on the BEC using optimal constituent decoders approaches the erasure probability after maximum-likelihood decoding as the channel erasure probability decreases. We also give an explicit formula for the number of noncodeword stopping sets of size s_min, which depends only on the first nonzero coefficient of the constituent (row and column) first and second support weight enumerators, for the case when d_2(C_r) < 2d_r and d_2(C_c) < 2d_c. Finally, as an example, we apply the derived results to the product of two (extended) Hamming codes and two Golay codes.
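The row-column iteration on the BEC is easiest to visualize with the simplest possible constituents. The toy sketch below assumes single-parity-check row and column codes (the paper treats general linear constituents with optimal erasure decoders) and fills an erasure whenever a row or column contains exactly one erased position; whatever remains erased when the iteration stalls is a stopping set.

```python
# Iterative row-column erasure decoding of a toy product code whose row and
# column constituent codes are single parity checks (even parity). An erased
# symbol (None) is recovered whenever it is the only erasure in its row or
# column; if the iteration stalls, the remaining erasures form a stopping set.
def decode_product_spc(grid):
    rows, cols = len(grid), len(grid[0])
    changed = True
    while changed:
        changed = False
        for i in range(rows):                                   # row pass
            erased = [j for j in range(cols) if grid[i][j] is None]
            if len(erased) == 1:
                grid[i][erased[0]] = sum(x for x in grid[i] if x is not None) % 2
                changed = True
        for j in range(cols):                                   # column pass
            erased = [i for i in range(rows) if grid[i][j] is None]
            if len(erased) == 1:
                grid[erased[0]][j] = sum(grid[r][j] for r in range(rows)
                                         if grid[r][j] is not None) % 2
                changed = True
    return grid                                 # leftover None entries = stopping set

# A 2x2 block of erasures is a stopping set: neither pass can resolve it.
received = [[None, None, 1],
            [None, None, 1],
            [1,    1,    0]]
print(decode_product_spc(received))
```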

13.
Generalized minimum-distance (GMD) decoding is a standard soft-decoding method for block codes. We derive an efficient general GMD decoding scheme for linear block codes in the framework of error-correcting pairs. Special attention is paid to Reed-Solomon (RS) codes and one-point algebraic-geometry (AG) codes. For RS codes of length n and minimum Hamming distance d, the GMD decoding complexity turns out to be of order O(nd), where the complexity is counted as the number of multiplications in the field of concern. For AG codes the GMD decoding complexity is highly dependent on the curve in consideration. It is shown that we can find all relevant error-erasure-locating functions with complexity O(o_1·n·d), where o_1 is the size of the first nongap in the function space associated with the code. A full GMD decoding procedure for a one-point AG code can be performed with complexity O(dn²).
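As operational background on GMD decoding, the sketch below shows the classical trial-erasure loop: erase an increasing number of the least reliable positions, hand each trial to an errors-and-erasures decoder, and keep the best surviving candidate. The ee_decoder callback is a hypothetical stub and the correlation-based selection is a simplification; none of this reproduces the error-correcting-pair machinery of the paper.

```python
# Generalized minimum-distance (GMD) decoding loop (sketch): erase an
# increasing number of the least reliable positions, run an errors-and-erasures
# decoder on each trial, and keep the best surviving candidate.
import numpy as np

def gmd_decode(r_soft, ee_decoder, d):
    """r_soft: real received values (BPSK, bit b mapped to (-1)**b);
    ee_decoder(hard, erasures) -> candidate 0/1 codeword or None (assumed stub);
    d: minimum Hamming distance of the code."""
    r_soft = np.asarray(r_soft, dtype=float)
    hard = (r_soft < 0).astype(int)
    order = np.argsort(np.abs(r_soft))            # least reliable positions first
    best, best_metric = None, -np.inf
    for n_erase in range(0, d, 2):                # 0, 2, 4, ... trial erasures
        erasures = set(order[:n_erase].tolist())
        cand = ee_decoder(hard, erasures)
        if cand is None:
            continue                              # decoding failure for this trial
        metric = float(np.dot(r_soft, 1 - 2 * np.asarray(cand)))  # correlation
        if metric > best_metric:
            best, best_metric = cand, metric
    return best
```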

14.
This paper is devoted to a Shannon-theoretic study of turbo codes. We prove that ensembles of parallel and serial turbo codes are "good" in the following sense. For a turbo code ensemble defined by a fixed set of component codes (subject only to mild necessary restrictions), there exists a positive number γ0 such that for any binary-input memoryless channel whose Bhattacharyya noise parameter is less than γ0, the average maximum-likelihood (ML) decoder block error probability approaches zero, at least as fast as n^{-β}, where β is the "interleaver gain" exponent defined by Benedetto et al. in 1996.
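The Bhattacharyya noise parameter referenced here is a standard channel quantity; for completeness, the lines below state its definition and its value for two common channels (textbook facts, not taken from the paper):

```latex
% Bhattacharyya noise parameter of a binary-input memoryless channel W(y|x),
% and its value for the binary symmetric and binary erasure channels:
\[
\gamma(W) \;=\; \sum_{y}\sqrt{W(y\mid 0)\,W(y\mid 1)},
\qquad
\gamma\bigl(\mathrm{BSC}(p)\bigr) = 2\sqrt{p(1-p)},
\qquad
\gamma\bigl(\mathrm{BEC}(\epsilon)\bigr) = \epsilon .
\]
```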

15.
Because each golden code codeword conveys four information symbols from an M-ary QAM alphabet, the complexity of an exhaustive-search decoder is proportional to M^4. In this paper we prove that the golden code is fast-decodable, meaning that maximum-likelihood decoding is possible with a worst-case complexity proportional to only M^{2.5}. The golden code retains its fast-decodable property regardless of whether the channel varies with time. We also present an efficient implementation of a fast maximum-likelihood decoder that exhibits a low average complexity.

16.
A new analysis of the computational effort and the error probability of sequential decoding is presented, which is based entirely on the distance properties of a particular convolutional code and employs no random-coding arguments. An upper bound on the computational distribution P(C_t > N_t) for a specific time-invariant code is derived, which decreases exponentially with the column distance of the code. It is proved that rapid column-distance growth minimizes the decoding effort and therefore also the probability of decoding failure or erasure. In an analogous way, the undetected error probability of sequential decoding with a particular fixed code is proved to decrease exponentially with the free distance and to increase linearly with the number of minimum free-weight codewords. This analysis proves that code construction for sequential decoding should maximize column-distance growth and free distance in order to guarantee fast decoding, a minimum erasure probability, and a low undetected error probability.

17.
Certain notorious nonlinear binary codes contain more codewords than any known linear code. These include the codes constructed by Nordstrom-Robinson (1967), Kerdock (1972), Preparata (1968), Goethals (1974), and Delsarte-Goethals (1975). It is shown here that all these codes can be very simply constructed as binary images under the Gray map of linear codes over Z4, the integers mod 4 (although this requires a slight modification of the Preparata and Goethals codes). The construction implies that all these binary codes are distance invariant. Duality in the Z4 domain implies that the binary images have dual weight distributions. The Kerdock and "Preparata" codes are duals over Z4, and the Nordstrom-Robinson code is self-dual, which explains why their weight distributions are dual to each other. The Kerdock and "Preparata" codes are Z4-analogues of first-order Reed-Muller and extended Hamming codes, respectively. All these codes are extended cyclic codes over Z4, which greatly simplifies encoding and decoding. An algebraic hard-decision decoding algorithm is given for the "Preparata" code and a Hadamard-transform soft-decision decoding algorithm for the Kerdock code. Binary first- and second-order Reed-Muller codes are also linear over Z4, but extended Hamming codes of length n ⩾ 32 and the Golay code are not. Using Z4-linearity, a new family of distance-regular graphs is constructed on the cosets of the "Preparata" code.
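The Gray map mentioned in this abstract sends each Z4 symbol to a pair of bits in such a way that Lee distance over Z4 becomes Hamming distance over GF(2). The small check below (an illustration, not code from the paper) applies the map to an arbitrary Z4 vector and verifies that its Lee weight equals the Hamming weight of its binary image.

```python
# Gray map from Z4 to binary pairs: 0 -> 00, 1 -> 01, 2 -> 11, 3 -> 10.
# Under this map the Lee weight of a Z4 vector equals the Hamming weight of
# its binary image, which is why the Z4 construction yields distance-invariant
# binary codes.
GRAY = {0: (0, 0), 1: (0, 1), 2: (1, 1), 3: (1, 0)}
LEE_WEIGHT = {0: 0, 1: 1, 2: 2, 3: 1}

def gray_image(v):
    """Binary image of a Z4 vector under the Gray map."""
    return [bit for symbol in v for bit in GRAY[symbol % 4]]

def lee_weight(v):
    return sum(LEE_WEIGHT[symbol % 4] for symbol in v)

v = [0, 1, 2, 3, 2, 1]                       # an arbitrary Z4 vector
image = gray_image(v)
assert lee_weight(v) == sum(image)           # Lee weight == Hamming weight of image
print(v, "->", image)
```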

18.
A serially concatenated code with interleaver consists of the cascade of an outer encoder, an interleaver permuting the bits of the outer codewords, and an inner encoder whose input words are the permuted outer codewords. The construction can be generalized to h cascaded encoders separated by h−1 interleavers. We obtain upper bounds on the average maximum-likelihood bit error probability of serially concatenated block and convolutional coding schemes. We then derive design guidelines for the outer and inner encoders that maximize the interleaver gain and the asymptotic slope of the error probability curves. Finally, we propose a new, low-complexity iterative decoding algorithm. Throughout the paper, extensive comparisons with the parallel concatenated convolutional codes known as "turbo codes" are performed, showing that the new scheme can offer superior performance.
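The cascade described in the first sentence is easy to sketch with toy components. The snippet below uses a (3,1) repetition outer code, a fixed pseudorandom bit interleaver, and a rate-1 accumulator as the inner encoder; these stand-ins are illustrative assumptions and not the convolutional constituents studied in the paper.

```python
# Serial concatenation sketch: outer encoder -> bit interleaver -> inner
# encoder. The components here are illustrative stand-ins only.
import random

def outer_encode(bits):                 # (3,1) repetition code
    return [b for b in bits for _ in range(3)]

def interleave(bits, perm):
    return [bits[p] for p in perm]

def inner_encode(bits):                 # rate-1 accumulator (running XOR)
    out, acc = [], 0
    for b in bits:
        acc ^= b
        out.append(acc)
    return out

def serial_concatenated_encode(info_bits, seed=0):
    outer = outer_encode(info_bits)
    perm = list(range(len(outer)))
    random.Random(seed).shuffle(perm)   # fixed pseudorandom interleaver
    return inner_encode(interleave(outer, perm))

print(serial_concatenated_encode([1, 0, 1, 1]))
```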

19.
A code C detects an error e with probability 1 − Q(e), where Q(e) is the fraction of codewords y such that y, y+e ∈ C. We present a class of optimal nonlinear q-ary systematic (n, q^k)-codes (robust codes) minimizing, over all (n, q^k)-codes, the maximum of Q(e) over nonzero e. We also show that any linear (n, q^k)-code V with n ≤ 2k can be modified into a nonlinear (n, q^k)-code C_v with simple encoding and decoding procedures, such that the set E = {e | Q(e) = 1} of undetected errors for C_v is a (k−r)-dimensional subspace of V (|E| = q^{k−r} instead of q^k for V). For the remaining q^n − q^{k−r} nonzero errors, Q(e) ≤ q^{−r} for q ≥ 3 and Q(e) ≤ 2^{−r+1} for q = 2.
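The quantity Q(e) defined at the start of this abstract can be computed exhaustively for a toy code. The snippet below (with a made-up 4-bit code, not a construction from the paper) evaluates Q(e) for every nonzero error pattern and reports the worst case.

```python
# Exhaustive computation of the error-masking probability Q(e) for a toy code:
# Q(e) is the fraction of codewords y such that y + e is again a codeword.
# The 4-bit code below is an arbitrary illustration, not the paper's robust
# construction.
from itertools import product

code = {(0, 0, 0, 0), (0, 1, 1, 1), (1, 0, 1, 1), (1, 1, 0, 1)}   # toy (4, 2^2)-code

def Q(e):
    return sum(tuple((y + ei) % 2 for y, ei in zip(c, e)) in code
               for c in code) / len(code)

worst = max((Q(e), e) for e in product((0, 1), repeat=4) if any(e))
print("worst-case Q(e) over nonzero e:", worst)
```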

20.
Optimal sectionalization of a trellis
While the complexity of trellis decoding for a given block code is essentially a function of the number of states and branches in its trellis, the decoding complexity can often be reduced by means of an appropriate sectionalization of the trellis. Notwithstanding the many examples of such sectionalizations for particular codes that have appeared in the literature, no systematic method for finding the best sectionalization of a given trellis has previously been known. We present a polynomial-time algorithm which produces the optimal sectionalization of a given trellis T in time O(n²), where n is the length of the code generated by T. The algorithm is developed in a general setting of certain operations and functions defined on the set of trellises; it therefore applies to both linear and nonlinear codes, and easily accommodates a broad range of optimality criteria. The particular optimality criterion based on minimizing the total number of additions and comparisons required for maximum-likelihood trellis decoding is investigated in detail: several different methods for decoding a given trellis are discussed and compared in a number of examples. Finally, an analysis of the dynamical properties of certain optimal sectionalizations is presented.
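The optimal sectionalization problem has a natural dynamic-programming structure over section boundaries: if cost(i, j) denotes the decoding cost of a single section covering positions i..j, the cheapest segmentation of positions 0..n can be found with O(n²) table updates. The sketch below shows this generic DP with a placeholder cost function; the paper's actual per-section cost (additions and comparisons of ML trellis decoding) is not reproduced here.

```python
# Dynamic program over section boundaries: best[j] is the minimum total cost
# of sectionalizing positions 0..j, and cut[j] remembers where the last
# section starts. section_cost is a placeholder; a real implementation would
# count the additions/comparisons of decoding one trellis section.
def optimal_sections(n, section_cost):
    best = [0.0] + [float("inf")] * n
    cut = [0] * (n + 1)
    for j in range(1, n + 1):
        for i in range(j):                       # candidate section covers i..j-1
            c = best[i] + section_cost(i, j)
            if c < best[j]:
                best[j], cut[j] = c, i
    boundaries, j = [], n                        # recover the optimal boundaries
    while j > 0:
        boundaries.append((cut[j], j))
        j = cut[j]
    return best[n], boundaries[::-1]

# Example with a made-up cost that mildly penalizes long sections.
cost, sections = optimal_sections(8, lambda i, j: 1 + (j - i) ** 1.5)
print(cost, sections)
```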
