Similar Documents
20 similar documents found
1.
For a linear block code $\mathcal{C}$, its stopping redundancy is defined as the smallest number of check nodes in a Tanner graph for $\mathcal{C}$, such that there exist no stopping sets of size smaller than the minimum distance of $\mathcal{C}$. Schwartz and Vardy conjectured that the stopping redundancy of a maximum-distance separable (MDS) code should depend only on its length and minimum distance.
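For context, the objects involved can be stated with standard definitions (the notation $H|_S$ for the restriction of a parity-check matrix $H$ to the columns indexed by $S$ is ours): a set $S \subseteq \{1,\dots,n\}$ is a stopping set of $H$ if no row of $H|_S$ has Hamming weight exactly one. The stopping redundancy is then
\[
\rho(\mathcal{C}) \;=\; \min\bigl\{\, \text{number of rows of } H \;:\; H \text{ is a parity-check matrix of } \mathcal{C} \text{ whose smallest nonempty stopping set has size} \ge d(\mathcal{C}) \,\bigr\},
\]
and the conjecture above says that for an MDS code, $\rho(\mathcal{C})$ is a function of the length $n$ and the minimum distance $d(\mathcal{C})$ alone.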

2.
Iteration stopping criteria for iterative decoding of Turbo codes
This paper presents two new stopping criteria for iterative decoding of Turbo codes and studies their performance through simulation. Both criteria are based on the cross-entropy (CE) concept, but the second is simpler than the CE criterion and has lower computational complexity.
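As a rough illustration of how a cross-entropy-based stopping test works, the sketch below implements the Hagenauer-style statistic that compares the change in extrinsic information against the current a-posteriori reliabilities; the exact statistics, thresholds, and variable names used in the paper's two criteria may differ.

    import numpy as np

    def cross_entropy_stat(llr_posterior, delta_extrinsic):
        # T = sum_k |dLe_k|^2 / exp(|L_k|): small once the extrinsic
        # information has stopped changing and the decisions are reliable.
        return np.sum(np.abs(delta_extrinsic) ** 2 / np.exp(np.abs(llr_posterior)))

    def should_stop(t_current, t_first, factor=1e-3):
        # Stop once the statistic has dropped well below its value at the
        # first iteration (the factor 1e-3 is a typical choice).
        return t_current < factor * t_first

In a turbo decoder loop, t_first is recorded after the first iteration and the loop exits as soon as should_stop returns True.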

3.
In this paper, we introduce stopping sets for iterative row-column decoding of product codes using optimal constituent decoders. When transmitting over the binary erasure channel (BEC), iterative row-column decoding of product codes using optimal constituent decoders will either be successful, or stop in the unique maximum-size stopping set that is contained in the (initial) set of erased positions. Let $C_p$ denote the product code of two binary linear codes $C_c$ and $C_r$ of minimum distances $d_c$ and $d_r$ and second generalized Hamming weights $d_2(C_c)$ and $d_2(C_r)$, respectively. We show that the size $s_{\min}$ of the smallest noncodeword stopping set is at least $\min(d_r d_2(C_c), d_c d_2(C_r)) > d_r d_c$, where the inequality follows from the Griesmer bound. If there are no codewords in $C_p$ with support set $S$, where $S$ is a stopping set, then $S$ is said to be a noncodeword stopping set. An immediate consequence is that the erasure probability after iterative row-column decoding of (finite-length) product codes on the BEC using optimal constituent decoders approaches the erasure probability after maximum-likelihood decoding as the channel erasure probability decreases. We also give an explicit formula for the number of noncodeword stopping sets of size $s_{\min}$, which depends only on the first nonzero coefficient of the constituent (row and column) first and second support weight enumerators, for the case when $d_2(C_r) < 2d_r$ and $d_2(C_c) < 2d_c$. Finally, as an example, we apply the derived results to the product of two (extended) Hamming codes and two Golay codes.
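The iterative row-column erasure decoding procedure itself is easy to picture. The sketch below is a minimal model in which each row and column is protected by a single-parity-check code (a stand-in for the optimal constituent decoders discussed in the abstract): decoding alternates between rows and columns and stops either when all erasures are filled or when no progress is made, i.e. the remaining erased positions form a stopping set.

    import numpy as np

    def spc_erasure_decode(word, erased):
        # Optimal erasure decoder for a single-parity-check constituent code:
        # exactly one erasure in a row/column is recovered as the XOR of the rest.
        idx = np.flatnonzero(erased)
        if len(idx) == 1:
            word[idx[0]] = np.bitwise_xor.reduce(word[~erased])
            erased[idx[0]] = False

    def iterative_row_column_decode(bits, erased, max_iters=100):
        # bits: 2-D int array over GF(2); erased: boolean mask of erased positions.
        for _ in range(max_iters):
            before = erased.sum()
            for r in range(bits.shape[0]):
                spc_erasure_decode(bits[r], erased[r])
            for c in range(bits.shape[1]):
                spc_erasure_decode(bits[:, c], erased[:, c])
            if erased.sum() in (0, before):   # success, or stuck in a stopping set
                break
        return bits, erased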

4.
The stopping redundancy of a code is an important parameter which arises from analyzing the performance of a linear code under iterative decoding on a binary erasure channel. In this paper, we consider the stopping redundancy of Reed-Muller codes and related codes. Let $R(\ell,m)$ be the Reed-Muller code of length $2^m$ and order $\ell$. Schwartz and Vardy gave a recursive construction of parity-check matrices for the Reed-Muller codes, and asked whether the number of rows in those parity-check matrices is the stopping redundancy of the codes. We prove that the stopping redundancy of $R(m-2,m)$, which is also the extended Hamming code of length $2^m$, is $2^m-1$, and thus show that the recursive bound is tight in this case. We prove that the stopping redundancy of the simplex code equals its redundancy. Several constructions of codes for which the stopping redundancy equals the redundancy are discussed. We prove an upper bound on the stopping redundancy of $R(1,m)$. This bound is better than the known recursive bound and thus gives a negative answer to the question of Schwartz and Vardy.

5.
To reduce the decoding latency of polar codes, this paper proposes an early stopping criterion for the belief propagation (BP) decoding algorithm that terminates decoding by monitoring the convergence of the codeword estimate $\hat{x}$. The criterion uses Gaussian-approximation analysis to select the $Q$ codeword bits with the smallest error probability as the comparison set; since only a few bits are compared and only XOR and OR operations are used, its computational complexity is low. Unlike schemes based on the information-sequence estimate $\hat{u}$, the proposed criterion completes its test before $\hat{u}$ is computed and therefore introduces no additional decoding delay. Simulation and FPGA synthesis results show that, compared with the G-Matrix, worst information bit (WIB), and frozen-bit error rate (FBER) criteria, the proposed criterion effectively saves hardware resources. With the maximum number of iterations set to 40, the price of the reduced complexity relative to the G-Matrix criterion is a 29.98% increase in the average number of iterations at 3.5 dB, while the average number of iterations is reduced by 39.44% and 27.67% compared with the WIB and FBER schemes, respectively.
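A minimal sketch of the kind of test described here (the function names and the exact form of the convergence check are illustrative): select the $Q$ most reliable codeword positions once, then stop BP as soon as the hard decisions on those positions stop changing between iterations.

    import numpy as np

    def comparison_set(bit_error_probs, Q):
        # Positions with the smallest error probability; in the paper these
        # come from a Gaussian-approximation analysis of the channel.
        return np.argsort(bit_error_probs)[:Q]

    def x_hat_converged(x_hat_prev, x_hat_curr, positions):
        # XOR the two hard-decision vectors on the comparison set and OR the
        # result: a zero means no bit changed, so decoding can be terminated.
        diff = np.bitwise_xor(x_hat_prev[positions], x_hat_curr[positions])
        return not bool(np.bitwise_or.reduce(diff))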

7.
In this paper, we present error-correcting codes that achieve the information-theoretically best possible tradeoff between the rate and error-correction radius. Specifically, for every $0 < R < 1$ and $\varepsilon > 0$, we present an explicit construction of error-correcting codes of rate $R$ that can be list decoded in polynomial time up to a fraction $(1 - R - \varepsilon)$ of worst-case errors. At least theoretically, this meets one of the central challenges in algorithmic coding theory. Our codes are simple to describe: they are folded Reed-Solomon codes, which are in fact exactly Reed-Solomon (RS) codes, but viewed as a code over a larger alphabet by careful bundling of codeword symbols. Given the ubiquity of RS codes, this is an appealing feature of our result, and in fact our methods directly yield better decoding algorithms for RS codes when errors occur in phased bursts. The alphabet size of these folded RS codes is polynomial in the block length. We are able to reduce this to a constant (depending on $\varepsilon$) using existing ideas concerning "list recovery" and expander-based codes. Concatenating the folded RS codes with suitable inner codes, we get binary codes that can be efficiently decoded up to twice the radius achieved by standard GMD decoding.

8.
A neural-network soft-decision decoding algorithm for a class of cyclic codes
This paper analyzes the structural properties of a class of cyclic codes and proposes a neural-network soft-decision decoding algorithm for them. The complexity of the new algorithm is much lower than that of existing general neural-network decoding algorithms, while its decoding performance approaches that of maximum-likelihood decoding.

9.
The cyclic-shift permutation unit is an important component of partially parallel decoders for quasi-cyclic LDPC codes. This paper studies and proves the connection rule of the elementary switching units of the Reverse Banyan switching network when it realizes cyclic shifts of messages. Based on this rule, a non-blocking cyclic-shift permutation structure with a presettable routing algorithm is designed. Compared with the Benes network and the Reverse Banyan network, it increases the rate of cyclic-shift message exchange while occupying fewer hardware resources and less area. Finally, an output-line conversion unit is designed, which is applicable to various cyclic-shift switching structures.
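Functionally, the network described above realizes nothing more than a cyclic shift of the Z messages belonging to one circulant block; a minimal software model of the permutation (not of the switching hardware or its routing algorithm) looks like this:

    def cyclic_shift(messages, shift):
        # Rotate the list of messages of one circulant block by `shift`
        # positions; the switching network computes this same permutation
        # with log-depth stages of 2x2 switches.
        shift %= len(messages)
        return messages[shift:] + messages[:shift]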

10.
A Low Complexity Decoding Algorithm for Extended Turbo Product Codes
In this letter, we propose a low-complexity algorithm for extended turbo product codes by considering both the encoding and decoding aspects. For the encoding part, a new encoding scheme is presented for which the operations of looking up and fetching error patterns are no longer necessary, so the lookup table can be omitted. For the decoder, a new algorithm is proposed to extract the extrinsic information and reduce the redundancy. The new algorithm greatly reduces decoding complexity and enhances the performance of the decoder. Simulation results are presented to show the effectiveness of the proposed scheme.

11.
To reduce the complexity of decoding algorithms for non-binary low-density parity-check (LDPC) codes, this paper proposes a symbol-flipping decoding algorithm with a new stopping criterion. The algorithm determines the symbols to flip from a flipping function and the reliability of the received bits, and terminates iteration early by analyzing the trend in the number of unsatisfied check equations. Simulation results show that the new algorithm greatly reduces the number of decoding iterations while keeping the error performance of the original symbol-flipping decoding algorithm unchanged, achieving a trade-off between decoding performance and complexity.
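The early-termination idea can be sketched as follows (the window length and the exact test are illustrative; the paper derives its criterion from the trend in the number of unsatisfied check equations):

    def should_terminate(unsatisfied_counts, window=3):
        # unsatisfied_counts: number of unsatisfied check equations per iteration.
        if unsatisfied_counts and unsatisfied_counts[-1] == 0:
            return True                      # valid codeword found
        if len(unsatisfied_counts) < window + 1:
            return False
        recent = unsatisfied_counts[-(window + 1):]
        # Terminate when the count has stopped decreasing over the last window.
        return all(b >= a for a, b in zip(recent, recent[1:]))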

12.
The successive cancellation list (SCL) decoding algorithm for polar codes keeps a large number of surviving paths to obtain good performance, which leads to high decoding complexity; the adaptive SCL decoding algorithm reduces some of the computation at high SNR but introduces a large decoding delay. Exploiting the sequential decoding structure of polar codes, this paper proposes an SCL decoding algorithm that combines segmented cyclic redundancy check (CRC) with adaptive selection of the number of surviving paths. Simulation results show that, compared with the conventional CRC-aided SCL and adaptive SCL decoding algorithms, at code rate R = 0.5 the proposed algorithm reduces complexity by about 21.6% at low SNR (-1 dB) and by about 64% at high SNR (3 dB), while achieving good decoding performance.
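At a high level, combining segmented CRC with an adaptive number of surviving paths can be sketched as below; scl_decode and segment_crcs_ok are placeholders for an existing polar SCL decoder and a per-segment CRC check, and the list-size schedule is illustrative.

    def adaptive_segmented_scl(llrs, scl_decode, segment_crcs_ok,
                               list_sizes=(1, 2, 4, 8, 16, 32)):
        # Retry SCL decoding with a larger list only when a segment CRC fails,
        # so the large list sizes are rarely needed at high SNR.
        candidate = None
        for L in list_sizes:
            candidate = scl_decode(llrs, L)
            if segment_crcs_ok(candidate):
                return candidate, L          # accepted at list size L
        return candidate, list_sizes[-1]     # best effort after the largest list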

13.
Concatenating multiple cyclic redundancy checks (CRC) with LDPC decoding effectively improves the convergence behaviour of the decoder. However, when the overall undetected-error probability of the CRC checks is not low enough, an error floor appears. This paper therefore proposes an improved algorithm that reduces the number of CRC checks performed during decoding, lowering the overall undetected-error probability and improving the error performance. Simulations show that the improved algorithm improves error performance with only a small increase in decoding complexity.

14.
To overcome the high complexity of generating and sorting log-likelihood ratios (LLRs) in the extended min-sum (EMS) decoding algorithm for non-binary LDPC codes, this paper proposes a fast and simple LLR generation algorithm for coded-modulation systems using BPSK. The algorithm uses a low-complexity iterative computation to generate and sort the LLRs quickly, is well suited to pipelined hardware structures, and can speed up the decoder and increase its throughput. Simulation results show that the proposed algorithm has essentially no effect on decoding performance while greatly reducing the complexity of LLR computation, making it a candidate algorithm for the front end of high-speed non-binary LDPC decoders.
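For orientation, the quantities being generated are bit LLRs for BPSK over AWGN and, from them, sorted reliabilities for each GF(2^p) symbol. The sketch below is only the naive baseline that enumerates all 2^p candidates; the point of the cited algorithm is to produce the same sorted list iteratively without this enumeration.

    import numpy as np

    def bpsk_bit_llrs(received, noise_var):
        # For BPSK mapping 0 -> +1, 1 -> -1 over AWGN: LLR = 2*r / sigma^2.
        return 2.0 * np.asarray(received) / noise_var

    def sorted_symbol_llrs(bit_llrs):
        # Cost of each GF(2^p) symbol relative to the hard decision: the sum
        # of |bit LLRs| over the positions where the symbol differs from it.
        bit_llrs = np.asarray(bit_llrs)
        p = len(bit_llrs)
        hard = (bit_llrs < 0).astype(int)
        abs_llrs = np.abs(bit_llrs)
        costs = []
        for sym in range(2 ** p):
            bits = np.array([(sym >> i) & 1 for i in range(p)])
            costs.append((sym, float(abs_llrs[bits != hard].sum())))
        return sorted(costs, key=lambda c: c[1])   # most reliable symbol first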

15.
For the serial concatenation of an RS code with an LDPC code, a joint iterative decoding method based on adaptive belief propagation (ABP) is proposed. During decoding, the soft information output by the LDPC belief-propagation decoder is used as the input of the ABP decoder for the RS code; after a number of iterations, the soft information output by the RS decoder is in turn used as the input of the LDPC decoder. With repeated exchanges of information between the soft-input soft-output RS decoder and the LDPC decoder, decoding performance improves considerably. For LDPC codes of moderate length, this concatenated scheme effectively mitigates the influence of short cycles and removes the error floor. Simulation results show that, on the AWGN channel, this ABP-based joint iterative decoding scheme for concatenated RS and LDPC codes achieves a gain of about 0.8 dB.

16.
Reduced-Complexity Decoding of LDPC Codes
Various log-likelihood-ratio-based belief-propagation (LLR-BP) decoding algorithms and their reduced-complexity derivatives for low-density parity-check (LDPC) codes are presented. Numerically accurate representations of the check-node update computation used in LLR-BP decoding are described. Furthermore, approximate representations of the decoding computations are shown to achieve a reduction in complexity by simplifying the check-node update, the symbol-node update, or both. In particular, two main approaches for simplified check-node updates are presented that are based on the so-called min-sum approximation coupled with either a normalization term or an additive offset term. Density evolution is used to analyze the performance of these decoding algorithms, to determine the optimum values of the key parameters, and to evaluate finite quantization effects. Simulation results show that these reduced-complexity decoding algorithms for LDPC codes achieve a performance very close to that of the BP algorithm. The unified treatment of decoding techniques for LDPC codes presented here provides flexibility in selecting the appropriate scheme from a performance, latency, computational complexity, and memory-requirement perspective.
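A minimal sketch of the simplified check-node update described above, covering both the normalized (alpha < 1) and offset (beta > 0) variants; the default parameter values are typical choices, and the optimum values are found via density evolution as noted in the abstract.

    import numpy as np

    def min_sum_check_update(incoming_llrs, alpha=0.8, beta=0.0):
        # Outgoing LLR on edge i: product of the other edges' signs times the
        # minimum of the other magnitudes, scaled by alpha and/or offset by beta.
        llrs = np.asarray(incoming_llrs, dtype=float)
        signs = np.where(llrs < 0, -1.0, 1.0)
        mags = np.abs(llrs)
        total_sign = signs.prod()
        out = np.empty_like(llrs)
        for i in range(llrs.size):
            others = np.delete(mags, i)
            out[i] = (total_sign * signs[i]) * max(alpha * others.min() - beta, 0.0)
        return out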

17.
Fast decoding of Golay codes
马建峰  王育民 《通信学报》1996,17(4):130-135
Using the algebraic structure of the Golay codes, this paper presents new decoding algorithms for the binary (23,12,7) Golay code and the ternary (11,6,5) Golay code. For the binary Golay code, the worst-case time complexity of the proposed algorithm is 534 mod-2 additions, smaller than that of all known decoding algorithms of the same kind; its average time complexity is 224 mod-2 additions, smaller than the 279 mod-2 additions of the fastest previously known decoding algorithm. For the ternary Golay code, the worst-case time complexity of the proposed algorithm is 123 mod-3 additions and the average time complexity is 85 mod-3 additions, faster than algorithms of the same kind. In addition, the algorithms given here have a simple structure and are easy to implement.

18.
The classical Viterbi decoder recursively finds the trellis path (code word) closest to the received data. Given the received data, the syndrome decoder instead first forms a syndrome. A recursive algorithm like Viterbi's is then used to determine the noise sequence of minimum Hamming weight that could have caused this syndrome. From the estimate of the noise sequence, an estimate of the original data sequence is derived. While the bit error probability of the syndrome decoder is no different from that of the classical Viterbi decoder, the syndrome decoder can be implemented using a read-only memory (ROM), yielding a considerable saving in hardware.
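The abstract concerns convolutional codes, where the search over noise sequences uses a Viterbi-like recursion; the same form-a-syndrome-then-look-up-the-error idea is easiest to see in the block-code setting, so the sketch below shows that simplified analogue (a syndrome table built for all error patterns up to weight t plays the role of the ROM).

    import numpy as np
    from itertools import combinations

    def build_syndrome_table(H, t):
        # Map each reachable syndrome to a minimum-weight error pattern
        # (patterns are enumerated in order of increasing weight).
        m, n = H.shape
        table = {}
        for w in range(t + 1):
            for pos in combinations(range(n), w):
                e = np.zeros(n, dtype=int)
                e[list(pos)] = 1
                table.setdefault(tuple(H.dot(e) % 2), e)
        return table

    def syndrome_decode(received, H, table):
        # Form the syndrome, look up the error estimate, and correct.
        s = tuple(H.dot(received) % 2)
        e = table.get(s, np.zeros(len(received), dtype=int))
        return (received + e) % 2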

19.
喻建平  王新梅 《电子学报》1996,24(7):110-113
This paper proposes a soft-decision pseudo-sequential decoding algorithm for general linear block codes that is formally similar to sequential decoding of convolutional codes. By using the principle of generalized threshold decoding, the properties of binary directed trees, and a branch-limited search technique, the decoding complexity is reduced, and the hardware complexity is lower than that of a Chase decoder. Simulation results show that the output error performance of the algorithm approaches that of maximum-likelihood decoding and is better than the Chase-II algorithm, while its decoding speed is close to that of the Chase-II algorithm.

20.
It is shown that a convolutional code can be decoded with a sliding-block decoder: a time-invariant nonlinear digital filter with finite memory and delay. If the memory and delay are small, the sliding-block decoder can be implemented as a table-lookup procedure in ROM, resulting in a low-cost, high-speed, and highly reliable decoder.
