20 similar documents found (search time: 15 ms)
1.
《IEEE transactions on information theory / Professional Technical Group on Information Theory》2009,55(9):4155-4166
2.
Rosnes E. 《IEEE transactions on information theory / Professional Technical Group on Information Theory》2008,54(4):1551-1560
In this paper, we introduce stopping sets for iterative row-column decoding of product codes using optimal constituent decoders. When transmitting over the binary erasure channel (BEC), iterative row-column decoding of product codes using optimal constituent decoders will either be successful, or stop in the unique maximum-size stopping set contained in the (initial) set of erased positions. Let C_p denote the product code of two binary linear codes C_c and C_r with minimum distances d_c and d_r and second generalized Hamming weights d_2(C_c) and d_2(C_r), respectively. If there is no codeword in C_p whose support is the stopping set S, then S is said to be a noncodeword stopping set. We show that the size s_min of the smallest noncodeword stopping set is at least min(d_r·d_2(C_c), d_c·d_2(C_r)) > d_r·d_c, where the inequality follows from the Griesmer bound. An immediate consequence is that the erasure probability after iterative row-column decoding of (finite-length) product codes on the BEC using optimal constituent decoders approaches the erasure probability after maximum-likelihood decoding as the channel erasure probability decreases. We also give an explicit formula for the number of noncodeword stopping sets of size s_min, which depends only on the first nonzero coefficient of the constituent (row and column) first and second support weight enumerators, for the case when d_2(C_r) < 2d_r and d_2(C_c) < 2d_c. Finally, as an example, we apply the derived results to the product of two (extended) Hamming codes and two Golay codes.
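The row-column erasure decoding analyzed above can be sketched on a toy product code. This is our illustration only, using single-parity-check (SPC) constituent codes rather than the paper's general setting; for an SPC code the optimal (ML) erasure decoder simply fills a row or column containing exactly one erasure from the parity of the rest.

```python
def iterative_erase_decode(word):
    """Iterative row-column erasure decoding of the product of two
    single-parity-check codes; -1 marks an erasure. Runs until
    decoding completes or stalls in a stopping set."""
    word = [row[:] for row in word]
    n = len(word)
    changed = True
    while changed:
        changed = False
        for i in range(n):                      # row decoders
            row = word[i]
            erased = [j for j in range(n) if row[j] == -1]
            if len(erased) == 1:                # unique fill by even parity
                row[erased[0]] = sum(x for x in row if x != -1) % 2
                changed = True
        for j in range(n):                      # column decoders
            col = [word[i][j] for i in range(n)]
            erased = [i for i in range(n) if col[i] == -1]
            if len(erased) == 1:
                word[erased[0]][j] = sum(x for x in col if x != -1) % 2
                changed = True
    return word

# A 3x3 SPC x SPC product codeword (even parity on every row and column).
c = [[1, 0, 1],
     [0, 1, 1],
     [1, 1, 0]]

# Diagonal erasures: every row has a single erasure, so decoding succeeds.
r1 = [row[:] for row in c]
for k in range(3):
    r1[k][k] = -1
out1 = iterative_erase_decode(r1)

# A 2x2 erasure rectangle: every touched row and column holds two
# erasures, so decoding stalls -- the rectangle is a stopping set.
r2 = [row[:] for row in c]
for i in (0, 1):
    for j in (0, 1):
        r2[i][j] = -1
out2 = iterative_erase_decode(r2)
```

The stalled second case is exactly the phenomenon the abstract quantifies: decoding halts in the maximum-size stopping set inside the erased positions.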
3.
Etzion T. 《IEEE transactions on information theory / Professional Technical Group on Information Theory》2006,52(11):4867-4879
The stopping redundancy of a code is an important parameter arising from the analysis of the performance of a linear code under iterative decoding on a binary erasure channel. In this paper, we consider the stopping redundancy of Reed-Muller codes and related codes. Let R(ℓ,m) be the Reed-Muller code of length 2^m and order ℓ. Schwartz and Vardy gave a recursive construction of parity-check matrices for the Reed-Muller codes, and asked whether the number of rows in those parity-check matrices is the stopping redundancy of the codes. We prove that the stopping redundancy of R(m−2,m), which is also the extended Hamming code of length 2^m, is 2^m − 1, and thus show that the recursive bound is tight in this case. We prove that the stopping redundancy of the simplex code equals its redundancy. Several constructions of codes for which the stopping redundancy equals the redundancy are discussed. We prove an upper bound on the stopping redundancy of R(1,m). This bound is better than the known recursive bound and thus gives a negative answer to the question of Schwartz and Vardy.
4.
To address the high decoding latency of polar codes, this paper proposes an early-stopping criterion for the belief-propagation algorithm that terminates decoding by monitoring the convergence of the codeword estimate x̂. Using Gaussian-approximation analysis, the criterion selects the Q bits of the codeword with the smallest error probability to form a comparison space; since only a few bits are compared, and only XOR and OR operations are used, its computational complexity is low. Unlike schemes based on the information-sequence estimate û, the proposed criterion completes its check before û is computed and therefore introduces no extra decoding delay. Simulation and FPGA synthesis results show that the criterion saves hardware resources compared with the G-Matrix, worst information bit (WIB), and frozen-bit error rate (FBER) criteria. With the maximum number of iterations set to 40, the price of the reduced complexity relative to the G-Matrix criterion is a 29.98% increase in the average number of iterations at 3.5 dB; relative to the WIB and FBER schemes, the average number of iterations drops by 39.44% and 27.67%, respectively.
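The comparison step of such a criterion can be sketched as follows. Selecting the monitored bits by instantaneous |LLR| is our stand-in for the paper's offline Gaussian-approximation analysis, which fixes the Q positions in advance:

```python
def select_reliable(llrs, Q):
    # Stand-in selection: take the Q positions with the largest |LLR|.
    # (The paper fixes these positions offline via Gaussian approximation.)
    return sorted(range(len(llrs)), key=lambda i: -abs(llrs[i]))[:Q]

def converged(prev_bits, curr_bits, monitored):
    # XOR each monitored hard decision against the previous iteration
    # and OR the results together: a zero result means no monitored bit
    # changed, so BP decoding can stop early.
    diff = 0
    for i in monitored:
        diff |= prev_bits[i] ^ curr_bits[i]
    return diff == 0
```

Because the check uses only the codeword estimate x̂, it can run before û is derived, which is the source of the zero-added-delay property claimed above.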
5.
Guruswami V. Rudra A. 《IEEE transactions on information theory / Professional Technical Group on Information Theory》2008,54(1):135-150
In this paper, we present error-correcting codes that achieve the information-theoretically best possible tradeoff between the rate and error-correction radius. Specifically, for every 0 < R < 1 and ε > 0, we present an explicit construction of error-correcting codes of rate R that can be list decoded in polynomial time up to a fraction (1 − R − ε) of worst-case errors. At least theoretically, this meets one of the central challenges in algorithmic coding theory. Our codes are simple to describe: they are folded Reed-Solomon codes, which are in fact exactly Reed-Solomon (RS) codes, but viewed as a code over a larger alphabet by careful bundling of codeword symbols. Given the ubiquity of RS codes, this is an appealing feature of our result, and in fact our methods directly yield better decoding algorithms for RS codes when errors occur in phased bursts. The alphabet size of these folded RS codes is polynomial in the block length. We are able to reduce this to a constant (depending on ε) using existing ideas concerning "list recovery" and expander-based codes. Concatenating the folded RS codes with suitable inner codes, we get binary codes that can be efficiently decoded up to twice the radius achieved by standard GMD decoding.
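The "careful bundling of codeword symbols" is easy to picture. A minimal sketch of the folding operation (symbol values below are arbitrary placeholders):

```python
def fold(codeword, m):
    # View m consecutive symbols as one symbol over the larger
    # alphabet F_q^m; the set of codewords is unchanged, only the
    # alphabet view changes.
    assert len(codeword) % m == 0
    return [tuple(codeword[i:i + m]) for i in range(0, len(codeword), m)]
```

A length-n RS codeword becomes a length-n/m word whose symbols are m-tuples, which is why a phased burst of errors corrupts few folded symbols.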
6.
Changlong Xu Ying-Chang Liang Wing Seng Leon 《Wireless Communications, IEEE Transactions on》2008,7(1):43-47
In this letter, we propose a low complexity algorithm for extended turbo product codes by considering both the encoding and decoding aspects. For the encoding part, a new encoding scheme is presented for which the operations of looking up and fetching error patterns are no longer necessary, and thus the lookup table can be omitted. For the decoder, a new algorithm is proposed to extract the extrinsic information and reduce the redundancy. This new algorithm can reduce decoding complexity greatly and enhance the performance of the decoder. Simulation results are presented to show the effectiveness of the proposed scheme.
7.
A Neural-Network Soft-Decision Decoding Algorithm for a Class of Cyclic Codes
This paper analyzes the structural properties of a class of cyclic codes and proposes a neural-network soft-decision decoding algorithm for them. The complexity of the new algorithm is much lower than that of existing general neural-network decoding algorithms, while its decoding performance approaches that of maximum-likelihood decoding.
8.
The classical Viterbi decoder recursively finds the trellis path (code word) closest to the received data. Given the received data, the syndrome decoder first forms a syndrome, instead. A recursive algorithm like Viterbi's is used to determine the noise sequence of minimum Hamming weight that can be a possible cause of this syndrome. Given the estimate of the noise sequence, one derives an estimate of the original data sequence. While the bit error probability of the syndrome decoder is no different from that of the classical Viterbi decoder, the syndrome decoder can be implemented using a read only memory (ROM), thus obtaining a considerable saving in hardware.
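The syndrome-to-ROM idea carries over to any syndrome decoder. A minimal block-code illustration (a (7,4) Hamming code of our choosing, not the paper's convolutional setting):

```python
# Columns of a (7,4) Hamming parity-check matrix, one 3-bit column
# per code position.
H_COLS = [0b110, 0b101, 0b011, 0b111, 0b100, 0b010, 0b001]

def syndrome(word):
    # XOR together the H columns at the positions holding a 1.
    s = 0
    for bit, col in zip(word, H_COLS):
        if bit:
            s ^= col
    return s

# The "ROM": each nonzero syndrome indexes the minimum-weight error
# pattern (here a single flipped bit) that produces it.
ROM = {col: i for i, col in enumerate(H_COLS)}

def decode(word):
    # Estimate the noise from the syndrome, then remove it.
    s = syndrome(word)
    if s:
        word = word[:]
        word[ROM[s]] ^= 1
    return word
```

The decoder never searches: the syndrome alone addresses the stored minimum-weight noise estimate, which is the hardware saving the abstract describes.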
9.
《Communications, IEEE Transactions on》2005,53(7):1232-1232
Reduced-Complexity Decoding of LDPC Codes Various log-likelihood-ratio-based belief-propagation (LLR-BP) decoding algorithms and their reduced-complexity derivatives for low-density parity-check (LDPC) codes are presented. Numerically accurate representations of the check-node update computation used in LLR-BP decoding are described. Furthermore, approximate representations of the decoding computations are shown to achieve a reduction in complexity by simplifying the check-node update, the symbol-node update, or both. In particular, two main approaches to simplified check-node updates are presented, based on the so-called min-sum approximation coupled with either a normalization term or an additive offset term. Density evolution is used to analyze the performance of these decoding algorithms, to determine the optimum values of the key parameters, and to evaluate finite quantization effects. Simulation results show that these reduced-complexity decoding algorithms for LDPC codes achieve a performance very close to that of the BP algorithm. The unified treatment of decoding techniques for LDPC codes presented here provides flexibility in selecting the appropriate scheme from a performance, latency, computational-complexity, and memory-requirement perspective.
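Both simplified check-node updates can be sketched in one routine. The alpha and beta values below are illustrative defaults; the paper tunes them with density evolution:

```python
def check_node_update(in_llrs, alpha=1.0, beta=0.0):
    # Min-sum family check-node update: for each edge, the output
    # magnitude is the smallest |LLR| among the *other* incoming
    # edges, scaled by a normalization factor alpha and reduced by an
    # additive offset beta (clamped at zero). The sign is the product
    # of the other edges' signs. alpha=1, beta=0 is plain min-sum.
    out = []
    for j in range(len(in_llrs)):
        others = [x for k, x in enumerate(in_llrs) if k != j]
        sign = 1
        for x in others:
            if x < 0:
                sign = -sign
        mag = max(alpha * min(abs(x) for x in others) - beta, 0.0)
        out.append(sign * mag)
    return out
```

Replacing the exact tanh-based update with this min/sign computation is the complexity saving; alpha (normalized min-sum) or beta (offset min-sum) compensates for the overestimated magnitudes.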
10.
Fast Decoding of Golay Codes
Using the algebraic structure of Golay codes, this paper presents new decoding algorithms for the binary (23,12,7) Golay code and the ternary (11,6,5) Golay code. For the binary Golay code, the worst-case time complexity of the proposed algorithm is 534 mod-2 additions, smaller than that of all known decoding algorithms of the same type; its average time complexity is 224 mod-2 additions, below the 279 mod-2 additions of the fastest previously known algorithm. For the ternary Golay code, the proposed algorithm has a worst-case time complexity of 123 mod-3 additions and an average time complexity of 85 mod-3 additions, faster than comparable algorithms. Moreover, the algorithms given here are structurally simple and easy to implement.
11.
This paper presents an efficient decoding algorithm for geometric generalized RS codes. The algorithm can decode any received word containing at most ⌊(d−1)/2⌋ errors, and its complexity is only O(n^3).
12.
In this paper, we propose a new reduced-complexity decoding algorithm for Low-Density Parity-Check (LDPC) codes, called the Belief-Propagation-Approximated (BPA) algorithm, which uses the idea of normalization and approximately translates the intricate nonlinear operation at the check nodes of the original BP algorithm into a single table-lookup operation. The normalization factors can be obtained by simulation or theoretically. Simulation results demonstrate that the BPA algorithm exhibits fairly satisfactory bit-error performance on the Additive White Gaussian Noise (AWGN) channel.
13.
It is shown that a convolutional code can be decoded with a sliding-block decoder, a time-invariant nonlinear digital filter with finite memory and delay. If the memory and delay are small, then the sliding-block decoder can be implemented as a table lookup procedure in ROM, resulting in a low cost, high speed, and high reliability decoder.
14.
This paper proposes a soft-decision pseudo-sequential decoding algorithm for general linear block codes, formally similar to sequential decoding of convolutional codes. By exploiting the generalized bounded-distance decoding principle, the properties of binary directed trees, and branch-and-bound search techniques, the algorithm reduces decoding complexity, and its hardware complexity is lower than that of a Chase decoder. Simulation results show that its output error performance approaches that of maximum-likelihood decoding and is better than the Chase-II algorithm, while its decoding speed is close to that of Chase-II.
15.
16.
Adopting a serial message-passing schedule, this paper proposes an improved pipelined decoder for LDPC convolutional codes. Both analysis and simulation show that, without increasing the computational complexity of any single processor, the improved decoder greatly accelerates decoding convergence merely by changing how messages are passed, and can save roughly half of the processors compared with the original decoder.
17.
A new practical method for decoding low-density parity-check (LDPC) codes is presented. The approach involves reformulating the parity-check equations using nonlinear functions of a specific form, defined over R^ρ, where ρ denotes the check node degree. By constraining the inputs to these functions to the closed convex subset [0,1]^ρ (the "box" set) of R^ρ, and also by exploiting their form, a multimodal objective function that entails the code constraints is formulated. The gradient projection algorithm is then used to search for a valid codeword that lies in the vicinity of the channel observation. The computational complexity of the new decoding technique is practically sub-linearly dependent on the code's length, while processing at each variable node can be performed in parallel, allowing very low decoding latencies. Simulation results show that convergence is achieved within 10 iterations, although some performance degradation relative to the belief propagation (BP) algorithm is observed.
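The box constraint is the structural heart of the method. One gradient-projection step can be sketched as follows; the objective and its gradient are the caller's, and the step size is an illustrative choice:

```python
def project_box(v):
    # Euclidean projection onto the "box" set [0, 1]^rho: clip each
    # coordinate into [0, 1].
    return [min(1.0, max(0.0, x)) for x in v]

def gradient_projection_step(x, grad, step=0.1):
    # Descend along the supplied gradient of the multimodal objective,
    # then project back into the box so the constraints stay satisfied.
    return project_box([xi - step * gi for xi, gi in zip(x, grad)])
```

The per-coordinate clip is what makes variable-node processing embarrassingly parallel, which is where the low-latency claim comes from.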
18.
Straightforward implementation of a maximum likelihood decoder implies a complexity that grows algebraically with the inverse of error probability. Forney has suggested an approach, concatenation, for which error probability decreases exponentially with increasing complexity. This paper presents the results of an evaluation of a particular concatenation system, structurally similar to the hybrid system of Falconer, employing a Reed-Solomon outer code and an inner convolutional code. The inner decoder is a Viterbi decoder of constraint length less than the corresponding encoding constraint length (nonmaximum likelihood). The outer decoder assumes one of three possible forms, all employing the likelihood information developed by the inner decoder to assist in outer decoding. Error corrections and erasure fill-ins achieved by the outer decoder are fed back to the inner decoder. Performance is evaluated through computer simulation. The three outer decoders are found to provide approximately the same performance, all yielding low error probabilities at rates somewhat above R_comp of sequential decoding and at signal energy to noise density ratios per information bit around 1.7 dB.
19.
Analysis of Cubic Permutation Polynomials for Turbo Codes
Quadratic permutation polynomials (QPPs) have been widely studied and used as interleavers in turbo codes. However, less attention has been given to cubic permutation polynomials (CPPs). This paper proves a theorem stating necessary and sufficient conditions for a cubic permutation polynomial to be a null permutation polynomial. The result is used to reduce the search complexity of CPP interleavers for short lengths (multiples of 8, between 40 and 352), by improving the distance spectrum over the set of polynomials with the largest spreading factor. The comparison with QPP interleavers is made in terms of search complexity and upper bounds on the bit error rate (BER) and frame error rate (FER) for AWGN and independent Rayleigh fading channels. Cubic permutation polynomials leading to better performance than quadratic permutation polynomials are found for some lengths.
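Both notions in the abstract are easy to test by brute force over Z_N; the paper's theorem replaces this enumeration with algebraic conditions. A sketch with illustrative coefficients:

```python
def is_permutation_poly(f1, f2, f3, N):
    # f(x) = f1*x + f2*x^2 + f3*x^3 (mod N) is a permutation
    # polynomial iff it hits every residue of Z_N exactly once.
    return len({(f1 * x + f2 * x * x + f3 * x ** 3) % N
                for x in range(N)}) == N

def is_null_poly(f1, f2, f3, N):
    # A null polynomial evaluates to 0 (mod N) for every x, so adding
    # it to an interleaver polynomial leaves the permutation intact --
    # which is why characterizing null polynomials prunes the search.
    return all((f1 * x + f2 * x * x + f3 * x ** 3) % N == 0
               for x in range(N))
```

For example, x + 2x^3 permutes Z_8, while 4x + 4x^2 is a null polynomial mod 8.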
20.
To address the high decoding complexity and large system delay of turbo-code systems, we analyze the sources of the delay and the existing iteration-stopping criteria, and adopt the sign-change ratio as a criterion for terminating iterations early, thereby reducing both decoding complexity and decoding delay. Simulations compare its advantages and disadvantages against the other criteria.
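The sign-change-ratio test can be sketched directly; the threshold value below is an illustrative tuning choice, not one given in the abstract:

```python
def sign_change_ratio(prev_llrs, curr_llrs):
    # Fraction of extrinsic LLRs whose sign flipped between two
    # consecutive turbo iterations.
    flips = sum((a < 0) != (b < 0) for a, b in zip(prev_llrs, curr_llrs))
    return flips / len(curr_llrs)

def should_stop(prev_llrs, curr_llrs, threshold=0.005):
    # Cut the iterations short once almost no signs are still changing,
    # i.e., the hard decisions have effectively converged.
    return sign_change_ratio(prev_llrs, curr_llrs) <= threshold
```

Since only sign comparisons are needed, the check adds negligible complexity per iteration while saving whole iterations on average.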