Similar Documents
20 similar documents found (search time: 15 ms)
1.
The problem of low-complexity linear programming (LP) decoding of low-density parity-check (LDPC) codes is considered. An iterative algorithm, similar to min-sum and belief propagation, for efficiently approximating the solution of this problem was proposed by Vontobel and Koetter. In this paper, the convergence rate and computational complexity of this algorithm are studied using a scheduling scheme that we propose. In particular, we are interested in obtaining a feasible vector in the LP decoding problem that is close to optimal in the following sense: the distance, normalized by the block length, between the minimum and the objective value of this approximate solution can be made arbitrarily small. It is shown that such a feasible vector can be obtained with a computational complexity that scales linearly with the block length. Combined with previous results showing that the LP decoder can correct some fixed fraction of errors, we conclude that this error correction can be achieved with linear computational complexity. This is done by first applying the iterative LP decoder, which recovers the transmitted codeword up to an arbitrarily small fraction of erroneous bits, and then correcting the remaining errors with a standard method. These conclusions are also extended to generalized LDPC codes.
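For context, the following is a minimal sketch of the underlying LP relaxation (a Feldman-style formulation solved here with a generic LP solver); it is not the iterative, linear-complexity scheme studied in the paper, and the parity-check matrix and channel costs below are purely illustrative.

```python
# Minimal sketch of Feldman-style LP decoding on a toy code (illustrative only).
# The paper instead studies an iterative solver of this LP with linear complexity.
from itertools import combinations
import numpy as np
from scipy.optimize import linprog

H = np.array([[1, 1, 0, 1, 0, 0],          # toy parity-check matrix
              [0, 1, 1, 0, 1, 0],
              [1, 0, 0, 0, 1, 1],
              [0, 0, 1, 1, 0, 1]])
gamma = np.array([-2.1, 0.7, -1.5, 0.3, 1.9, -0.4])   # per-bit costs, gamma_i = log p(y_i|0)/p(y_i|1)

n = H.shape[1]
A_ub, b_ub = [], []
for row in H:
    nbrs = np.flatnonzero(row)
    # forbidden-set inequalities: for every odd-sized subset S of a check's
    # neighbourhood, sum_{S} x - sum_{N\S} x <= |S| - 1
    for size in range(1, len(nbrs) + 1, 2):
        for S in combinations(nbrs, size):
            a = np.zeros(n)
            a[list(nbrs)] = -1.0
            a[list(S)] = 1.0
            A_ub.append(a)
            b_ub.append(len(S) - 1)

# minimise gamma . x over the relaxed polytope, 0 <= x_i <= 1
res = linprog(c=gamma, A_ub=np.array(A_ub), b_ub=np.array(b_ub),
              bounds=[(0, 1)] * n, method="highs")
print("LP-optimal (pseudo)codeword:", np.round(res.x, 3))
```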

2.
Message-passing iterative decoders for low-density parity-check (LDPC) block codes are known to be subject to decoding failures due to so-called pseudocodewords. These failures can cause the high signal-to-noise ratio (SNR) performance of message-passing iterative decoding to be worse than that predicted by the maximum-likelihood (ML) decoding union bound.

3.
Efficient implementations of the sum-product algorithm (SPA) for decoding low-density parity-check (LDPC) codes using difference-based messages between bit nodes and check nodes are presented. For the check-node updates, reduced-complexity derivatives are also put forward. Compared with traditional log-likelihood-ratio (LLR) based decoding implementations, the proposed method has much lower complexity and latency, with no appreciable loss in error performance.

4.
张培 《通信技术》2010,43(1):43-44,47
Combining decoding algorithms for low-density parity-check (LDPC) codes with recent field-programmable gate array (FPGA) technology, a new scheme is proposed for implementing the min-sum algorithm (MSA) for LDPC codes on an FPGA programmed in C. Based on the Xilinx Virtex-II device XC2V2000, a decoder for a (3,6) LDPC code with block length 250 and code rate 0.5 is designed and implemented, and a register-transfer-level (RTL) co-simulation system architecture is presented. The results confirm the good error-correcting performance of LDPC codes and offer software engineers a new approach to developing FPGA-based embedded systems.
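As a reference for the algorithm itself (not the authors' FPGA/C design), here is a compact software sketch of the min-sum check-node and variable-node updates; the toy parity-check matrix and channel LLRs are illustrative, whereas a real (3,6) code of length 250 would use a much larger matrix.

```python
# Minimal software sketch of min-sum LDPC decoding (illustrative, not the FPGA design).
import numpy as np

def min_sum_decode(H, llr, max_iter=20):
    m, n = H.shape
    msg = np.zeros((m, n))                       # check-to-variable messages
    hard = (llr < 0).astype(int)
    for _ in range(max_iter):
        # variable-to-check: total LLR minus the incoming message on that edge
        total = llr + msg.sum(axis=0)
        v2c = np.where(H, total - msg, 0.0)
        # check-to-variable: sign product and minimum magnitude over the other edges
        for i in range(m):
            idx = np.flatnonzero(H[i])
            for j in idx:
                others = idx[idx != j]
                sign = np.prod(np.sign(v2c[i, others]))
                msg[i, j] = sign * np.min(np.abs(v2c[i, others]))
        hard = (llr + msg.sum(axis=0) < 0).astype(int)
        if not np.any((H @ hard) % 2):           # all parity checks satisfied
            break
    return hard

H = np.array([[1, 1, 0, 1, 0, 0],
              [0, 1, 1, 0, 1, 0],
              [1, 0, 1, 0, 0, 1]])
llr = np.array([1.8, -0.3, 2.2, 0.9, -1.1, 1.4])   # positive LLR favours bit 0
print(min_sum_decode(H, llr))
```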

5.
张用宇 《通信技术》2015,48(11):1222-1227
提出了一种低复杂度基于翻转规则的多进制低密度奇偶校验(Low-Density Parity-Check ,LDPC)码符号翻转译码算法。为寻求有效码字,该算法在符号向量空间迭代地更新硬判决的接收符号向量。每一次迭代只改变一个符号,其符号翻转函数综合考虑了不满足校验式的个数和接收比特和计算出符号的可靠性度量。在高阶伽罗华域中采用一种无限环路规避和翻转符号选取方法,同时提出了翻转规则设计方法,该设计决定了计算复杂度和差错性能。仿真结果表明,该符号翻转算法在帧长为150符号的16进制LDPC码中取得了纠错性能和计算复杂度的有效权衡。  相似文献   
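The general loop this abstract describes (one symbol flipped per iteration according to a metric mixing unsatisfied checks and reliability) can be sketched as follows; the field is simplified to a prime field GF(q), and the flip metric, reliability table, and loop-avoidance rule of the paper are not reproduced, so everything here is illustrative.

```python
# Schematic sketch of symbol-flipping decoding for a non-binary LDPC code.
# GF(q) with q prime keeps the arithmetic to "mod q"; the paper works over GF(16)
# and uses its own flipping function and infinite-loop avoidance, omitted here.
import numpy as np

def symbol_flip_decode(H, z, reliab, q=5, max_iter=30):
    """H: parity-check matrix over GF(q); z: hard-decision symbol vector;
    reliab[k, a]: reliability of deciding symbol k as value a (illustrative)."""
    for _ in range(max_iter):
        syndrome = (H @ z) % q
        if not syndrome.any():                   # valid codeword found
            return z
        best = None
        for k in range(len(z)):                  # brute-force search for clarity only
            rows = np.flatnonzero(H[:, k])
            for a in range(q):
                if a == z[k]:
                    continue
                trial = z.copy()
                trial[k] = a
                # how many currently-unsatisfied checks would become satisfied
                gain = np.count_nonzero(syndrome[rows]) \
                     - np.count_nonzero((H[rows] @ trial) % q)
                score = gain + reliab[k, a]
                if best is None or score > best[0]:
                    best = (score, k, a)
        _, k, a = best                           # flip exactly one symbol per iteration
        z = z.copy()
        z[k] = a
    return z
```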

6.

To address the long decoding latency of long spatially coupled low-density parity-check (SC-LDPC) codes, this paper proposes a layered sliding window decoding (LSWD) algorithm. Exploiting the quasi-cyclic structure of the SC-LDPC component code blocks and the layered structure of the parity-check matrix within the sliding window, the algorithm processes the parity-check matrix in layers inside the window and optimizes the message passing between layers, thereby accelerating convergence within the window and reducing the number of decoding iterations. Simulation and analysis show that, under the same signal-to-noise ratio (SNR) and the same error-rate requirement, the LSWD algorithm needs fewer iterations than sliding window decoding (SWD); in particular, at high SNR the number of LSWD iterations is about half that of SWD, which effectively shortens the overall decoding latency. With the same number of iterations, the LSWD algorithm outperforms SWD in decoding performance, with only a small increase in computational complexity.


7.
朱嘉  张海滨  潘宇 《电讯技术》2006,46(5):94-97
Among LDPC decoding algorithms, the sum-product algorithm offers the best performance but high complexity, while the min-sum algorithm is simple to implement but falls considerably short of the sum-product algorithm in performance. To address this performance-complexity trade-off, min-sum algorithms with correction terms have become a research focus. This paper studies a modified min-sum algorithm whose performance approaches that of the sum-product algorithm and simplifies the way the correction term is applied. The simplified algorithm still performs very close to the sum-product algorithm, yet its implementation complexity is markedly lower than that of the original modified min-sum algorithm.

8.
张誉  雷菁  文磊 《通信技术》2011,44(5):21-23
Non-binary LDPC codes generalize binary LDPC codes to the finite field GF(q): the entries of the parity-check matrix are no longer drawn from {0,1} but from {0,1,…,q-1}, while decoding still uses efficient iterative algorithms based on belief propagation. This paper derives the iterative update equations of the non-binary decoding algorithm, analyzes and proves an improved algorithm based on the fast Fourier transform (FFT), and finally verifies and analyzes the superior performance of the FFT-based non-binary decoding algorithm through simulation.
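For GF(2^m) codes, the "FFT" in the check-node update is a Walsh-Hadamard transform applied along each of the m binary dimensions, which turns the check-node convolution into a pointwise product. The sketch below shows only that convolution for two incoming probability vectors with unit edge labels; the field-element permutations for general edge labels and the full decoder schedule are omitted, and the probability vectors are illustrative.

```python
# Minimal sketch of the FFT-based check-node convolution for LDPC codes over GF(2^m).
import numpy as np

def wht_gf2m(p, m):
    """Walsh-Hadamard transform of a length-2^m vector, one binary axis at a time."""
    t = p.reshape((2,) * m).astype(float)
    for axis in range(m):
        a, b = np.split(t, 2, axis=axis)
        t = np.concatenate((a + b, a - b), axis=axis)
    return t.reshape(-1)

def check_node_convolve(p1, p2, m):
    """Probability vector of the GF(2^m) (XOR) sum of two independent symbols."""
    out = wht_gf2m(wht_gf2m(p1, m) * wht_gf2m(p2, m), m)
    return out / out.sum()          # inverse transform differs only by a scale factor

m = 2                               # GF(4)
p1 = np.array([0.7, 0.1, 0.1, 0.1])
p2 = np.array([0.6, 0.2, 0.1, 0.1])
print(check_node_convolve(p1, p2, m))
```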

9.
An Efficient Improved MAP Decoding Algorithm for Turbo Codes
This paper presents an improved maximum a posteriori (MAP) decoding algorithm for optimal decoding of parallel concatenated convolutional codes (turbo codes). Unlike the log-domain Log-MAP algorithm, the proposed algorithm does not work in the logarithmic domain, yet it completely eliminates the large number of exponential and logarithmic operations that the standard MAP algorithm must perform during iterations. Computer simulations show that this improved MAP algorithm, which retains optimal error-correcting performance, significantly reduces running time; its decoding efficiency even exceeds that of the fastest log-domain MAP algorithm (Max-Log-MAP), which sacrifices considerable error-correcting performance.

10.
The layered decoding algorithm has been widely used in the implementation of low-density parity-check (LDPC) decoders because of its fast convergence. However, the pipelined operation of a layered decoder may introduce memory access conflicts, which severely degrade decoder throughput. To deal with the issue of memory access conflicts at its root, we propose a construction algorithm for LDPC codes that adds a constraint condition to the Progressive Edge-Growth (PEG) algorithm. The constraint guarantees that, for the constructed LDPC codes, the sets of variable nodes connected to consecutive layers share no common variable node, which avoids memory access conflicts. Simulation results show that the performance of the constructed LDPC codes is close to that of several LDPC codes adopted in wireless standards. Moreover, compared with a decoder for the IEEE 802.16e LDPC codes, the throughput of our LDPC decoder is substantially improved while the chip resource consumption is unchanged. The constructed LDPC codes are therefore suitable for high-speed transmission.
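A schematic sketch of the added constraint follows: a candidate check node is rejected during edge placement if connecting it would make its layer and an adjacent layer share the current variable node. The PEG girth-maximising search itself is omitted, and the layer map and bookkeeping names (`layer_of`, `layer_vars`) are illustrative.

```python
# Sketch of the extra constraint layered on top of PEG-style edge placement.
def violates_layer_constraint(var, cand_check, layer_of, layer_vars, num_layers):
    """True if giving `var` an edge to `cand_check` would let two consecutive
    layers both contain `var` (the memory-access-conflict condition)."""
    layer = layer_of[cand_check]
    for adj in (layer - 1, layer + 1):
        if 0 <= adj < num_layers and var in layer_vars[adj]:
            return True
    return False

def add_edge(var, cand_check, layer_of, layer_vars, edges):
    """Record the edge and remember that `var` now appears in that layer."""
    edges.append((cand_check, var))
    layer_vars[layer_of[cand_check]].add(var)

# toy usage: 4 check nodes grouped into 2 layers of 2 checks each
layer_of = {0: 0, 1: 0, 2: 1, 3: 1}
layer_vars = {0: set(), 1: set()}
edges = []
add_edge(0, 0, layer_of, layer_vars, edges)
print(violates_layer_constraint(0, 2, layer_of, layer_vars, num_layers=2))  # True: layers 0 and 1 would share v0
```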

11.
A Parallel Layered Decoding Algorithm for Non-Layerable LDPC Codes
To address the problem that non-layerable LDPC codes cannot be decoded with layered algorithms, this paper proposes a Parallel-Layered Belief-Propagation (PLBP) decoding algorithm. Unlike conventional layered algorithms, it updates all layers in parallel while updating the rows within each layer serially. This decoding schedule ensures that a given variable node is not updated in different layers at the same time, so that each variable node is updated progressively, layer by layer, within a single iteration. Simulations show that, without increasing decoding complexity, the proposed PLBP algorithm achieves better error performance than the conventional flooding algorithm and reduces the average number of required iterations by about 50%. In addition, PLBP merges the node-update operations, ultimately reaching a decoding speed about four times that of the flooding algorithm.

12.
Research on Weighted Bit-Flipping Decoding Algorithms for LDPC Codes
In recent years, three weighted bit-flipping (WBF) decoding algorithms for LDPC codes have been proposed, corresponding respectively to the belief-propagation (BP), min-sum (MS), and normalized min-sum (NMS) algorithms. However, the physical meaning of these three WBF algorithms and the close connections among them have not yet been studied. Based on a new way of understanding them, this paper derives the three WBF algorithms theoretically, clarifies their intrinsic connections, and finally verifies the soundness and correctness of the conclusions through simulation. This provides guidance for designing new and improved WBF algorithms.

13.
Soft-decision decoding algorithms for non-binary LDPC codes do not exploit, during iterations, some of the information hidden in the hard decisions and the check sums: the hard decisions carry stability information, and the check sums carry reliability information about the variable nodes. Following the idea of hybrid decoding, and borrowing the check-sum counting used in hard-decision decoding and the feedback adjustment used in joint iterative detection and decoding, the FFT-BP decoding algorithm is improved. The improved algorithm uses the reliability and stability information gathered during iterations to adjust the message vectors passed from variable nodes to check nodes so that they carry more correct information. Simulation results show that, without increasing complexity, the improved algorithm outperforms the FFT-BP decoding algorithm, with gains of about 0.2 dB under various parameter settings.

14.
彭伟夫  张高远  文红  王龙业 《电视技术》2015,39(13):129-134
Simulation results show that the gradient descent bit-flipping (GDBF) decoding algorithm offers a large performance advantage for low-density parity-check (LDPC) codes with small column and row weights, but suffers severe performance loss for finite-geometry LDPC codes with large column and row weights. This paper first shows, by analysis, that for large-column-weight LDPC codes the mismatch between the correlation term and the bipolar-syndrome sum term in the flipping function is the main cause of the performance loss. A reliability measure is then introduced to weight the bipolar syndromes, which effectively mitigates this mismatch and improves the decoding performance of the GDBF algorithm for large-column-weight LDPC codes. Simulation results show that, on the additive white Gaussian noise channel, the proposed algorithm gains 0.8 dB over the conventional GDBF algorithm at a bit error rate of 10^-5.
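A sketch of one GDBF step with a weighted bipolar syndrome follows; x is the bipolar hard decision (+1/-1) and y the received channel values. The reliability weight used here (the smallest |y| on each check) is only an illustrative choice and not necessarily the measure proposed by the authors; the toy H and y are likewise illustrative.

```python
# Sketch of a gradient-descent bit-flipping step with a weighted bipolar syndrome.
import numpy as np

def gdbf_step(H, x, y):
    m, n = H.shape
    # bipolar syndrome of each check: product of the bipolar decisions it involves
    syn = np.array([np.prod(x[np.flatnonzero(H[i])]) for i in range(m)])
    # reliability weight per check (illustrative): least reliable participating bit
    w = np.array([np.min(np.abs(y[np.flatnonzero(H[i])])) for i in range(m)])
    # inversion function: correlation term plus weighted bipolar-syndrome sum
    energy = x * y + np.array([np.sum(w[np.flatnonzero(H[:, k])] *
                                      syn[np.flatnonzero(H[:, k])]) for k in range(n)])
    k = int(np.argmin(energy))       # single-bit flip: lowest-energy position
    x = x.copy()
    x[k] = -x[k]
    return x, energy

H = np.array([[1, 1, 0, 1, 0, 0],
              [0, 1, 1, 0, 1, 0],
              [1, 0, 1, 0, 0, 1]])
y = np.array([0.9, -0.2, 1.1, 0.8, -1.3, 0.7])   # received values (illustrative)
x = np.sign(y).astype(int)                        # initial bipolar hard decisions
x, energy = gdbf_step(H, x, y)
print(x, np.round(energy, 2))
```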

15.
Zhao Ming, Xu Bin 《Wireless Personal Communications》2019,107(4):1823-1833
Two main decoding algorithms are usually used for low-density parity-check convolutional (LDPC-C) codes: the belief propagation algorithm and the on-demand variable node...

16.
袁建国  仝青振  黄胜  王永 《半导体光电》2013,34(4):642-644,648
After an in-depth analysis of LDPC decoding algorithms over the additive white Gaussian noise (AWGN) channel, hard-decision and soft-decision decoding algorithms for low-density parity-check (LDPC) codes are simulated and compared, and a multiplicative correction factor is introduced to reduce the correlation of the variable-node messages in the log-domain belief-propagation (LLR-BP) soft-decision algorithm. Simulation analysis shows that, compared with the original algorithm, the improved LLR-BP algorithm achieves a clear gain in error-correcting performance with almost no increase in computational complexity, and is therefore clearly superior.

17.
A Decoding Algorithm for Non-Binary LDPC Codes Based on Layered Decoding and Min-max
杨威  张为 《电子与信息学报》2013,35(7):1677-1681
Building on existing decoding algorithms, this paper proposes an efficient decoding method for non-binary low-density parity-check (NB-LDPC) codes that fully exploits the advantages of layered decoding and the Min-max algorithm: it has low decoding complexity and small memory requirements, and it doubles the decoding speed. The algorithm is applied in simulation to a (620,509) code defined over GF(2^5). The simulation results show that, at the same error rate, the proposed algorithm needs only 45% of the maximum number of iterations required by Zhang's algorithm (2011).
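For reference, the Min-max check-node rule this work builds on can be sketched as follows: messages are LLR-like costs per field element, and the outgoing cost of a symbol is the minimum, over configurations consistent with the check, of the maximum incoming cost. The sketch assumes a degree-3 check over GF(2^m) with all edge labels equal to 1; label permutations and the layered schedule of the paper are omitted, and the cost vectors are illustrative.

```python
# Sketch of a Min-max check-node update for a degree-3 check over GF(2^m).
import numpy as np

def min_max_output(in1, in2, q):
    """Outgoing cost vector toward the third edge of a degree-3 check node."""
    out = np.full(q, np.inf)
    for b in range(q):
        for c in range(q):
            a = b ^ c                       # GF(2^m) addition is XOR
            out[a] = min(out[a], max(in1[b], in2[c]))
    return out

q = 4                                        # GF(2^2)
in1 = np.array([0.0, 1.2, 2.5, 3.0])         # cost 0 marks the most likely symbol
in2 = np.array([0.5, 0.0, 1.8, 2.2])
print(min_max_output(in1, in2, q))
```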

18.
A New Algorithm for Terminating LDPC Iterative Decoding
In conventional satellite broadcasting systems, channel error correction usually concatenates a BCH code with an LDPC code to achieve good bit-error-rate performance, as in the DVB-S2 system. The LDPC inner code is typically decoded iteratively, and a large number of iterations is needed to obtain good system performance. Exploiting the BCH-LDPC concatenated structure, this paper proposes a new iterative decoding structure that embeds BCH error detection into every LDPC decoding iteration. Simulation results show that the new algorithm trades a modest amount of BCH error-detection computation for a clear reduction in the number of LDPC iterations, greatly lowering the overall complexity and latency of iterative decoding, while the overall error-correcting performance remains essentially the same as that of the original scheme of BCH correction after LDPC decoding.

19.
In this paper, we propose a new linear programming formulation for the decoding of general linear block codes. Unlike the original formulation given by Feldman, the number of variables needed to characterize a parity-check constraint in our formulation is less than twice the degree of the corresponding check node. The equivalence between our new formulation and the original formulation is proven. The new formulation makes it easier to characterize the structure of linear block codes and leads to new decoding algorithms. In particular, we show that any fundamental polytope is simply the intersection of a group of so-called minimum polytopes, and this simplified formulation allows us to cast the problem of calculating the minimum Hamming distance of any linear block code as a simple linear integer programming problem with far fewer auxiliary variables. We then propose a branch-and-bound method to compute a lower bound on the minimum distance of any linear code by solving a corresponding linear integer program. In addition, we prove that, for the family of single parity-check (SPC) product codes, the fractional distance and the pseudodistance are both equal to the minimum distance. Finally, we propose an efficient algorithm for decoding SPC product codes with low complexity and maximum-likelihood (ML) decoding performance.
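To make the integer-programming view of minimum distance concrete, here is an illustrative toy formulation (not the authors' formulation, which uses far fewer auxiliary variables): minimise the weight sum(x) subject to H x = 2z (every parity check sums to an even number), x binary, z a non-negative integer vector, and sum(x) >= 1 to exclude the all-zero codeword. The parity-check matrix is a toy example, and a mixed-integer solver from SciPy is used purely for illustration.

```python
# Illustrative sketch: minimum Hamming distance of a tiny binary code as an integer program.
import numpy as np
from scipy.optimize import milp, LinearConstraint, Bounds

H = np.array([[1, 1, 0, 1, 0, 0],            # toy parity-check matrix
              [0, 1, 1, 0, 1, 0],
              [1, 0, 1, 0, 0, 1]])
m, n = H.shape

c = np.concatenate([np.ones(n), np.zeros(m)])        # minimise the weight of the x part
A_parity = np.hstack([H, -2 * np.eye(m)])            # H x - 2 z = 0  (even parity per check)
parity = LinearConstraint(A_parity, 0, 0)
nonzero = LinearConstraint(np.concatenate([np.ones(n), np.zeros(m)]).reshape(1, -1), 1, n)
bounds = Bounds(np.zeros(n + m), np.concatenate([np.ones(n), np.full(m, n)]))

res = milp(c, constraints=[parity, nonzero], integrality=np.ones(n + m), bounds=bounds)
print("minimum distance of the toy code:", int(round(res.fun)))
```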

20.
Reduced-Complexity Decoding of LDPC Codes
Various log-likelihood-ratio-based belief-propagation (LLR-BP) decoding algorithms and their reduced-complexity derivatives for low-density parity-check (LDPC) codes are presented. Numerically accurate representations of the check-node update computation used in LLR-BP decoding are described. Furthermore, approximate representations of the decoding computations are shown to achieve a reduction in complexity by simplifying the check-node update, the symbol-node update, or both. In particular, two main approaches to simplified check-node updates are presented, based on the so-called min-sum approximation coupled with either a normalization term or an additive offset term. Density evolution is used to analyze the performance of these decoding algorithms, to determine the optimum values of the key parameters, and to evaluate finite quantization effects. Simulation results show that these reduced-complexity decoding algorithms for LDPC codes achieve performance very close to that of the BP algorithm. The unified treatment of decoding techniques for LDPC codes presented here provides flexibility in selecting the appropriate scheme from a performance, latency, computational-complexity, and memory-requirement perspective.
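The two simplified check-node updates named in the abstract, min-sum with a multiplicative normalization factor and min-sum with an additive offset, can be sketched on a single check node as follows; the values of `alpha` and `beta` and the incoming message vector are illustrative (the paper determines the parameters via density evolution).

```python
# Sketch of normalized min-sum and offset min-sum check-node updates on one check node.
import numpy as np

def _ms_extrinsic(v2c, j):
    """Plain min-sum extrinsic for edge j: sign product and min magnitude of the other edges."""
    others = np.delete(v2c, j)
    return np.prod(np.sign(others)) * np.min(np.abs(others))

def check_update_normalized(v2c, alpha=0.8):
    """Normalized min-sum: scale the min-magnitude estimate down by alpha."""
    return np.array([alpha * _ms_extrinsic(v2c, j) for j in range(len(v2c))])

def check_update_offset(v2c, beta=0.15):
    """Offset min-sum: subtract beta from the magnitude, never going below zero."""
    out = []
    for j in range(len(v2c)):
        e = _ms_extrinsic(v2c, j)
        out.append(np.sign(e) * max(abs(e) - beta, 0.0))
    return np.array(out)

v2c = np.array([1.4, -0.6, 2.3, 0.9])        # incoming messages on one check node (illustrative)
print(check_update_normalized(v2c))
print(check_update_offset(v2c))
```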
