Similar Literature
A total of 19 similar documents were found.
1.
To improve the error-correction performance of short low-density parity-check (LDPC) codes, a new BP-BMA concatenated decoding algorithm for short LDPC codes is proposed on the basis of studying the box-and-match algorithm (BMA) and the concatenated belief-propagation / ordered-statistics decoding (BP-OSD) algorithm. The algorithm makes full use of the low decoding complexity of BMA. Computer simulations were then carried out by combining it with the accumulated log-likelihood ratio (ALLR) algorithm. The simulation results show that, compared with BP-OSD, the BP-BMA concatenated algorithm improves decoding performance while greatly reducing decoding complexity, achieving a good trade-off between performance and complexity.

2.
To reduce the average number of iterations in the low-SNR region and to damp the oscillation of the log-likelihood ratio (LLR) values, the behaviour of the check-node LLRs and check sums of erroneous frames of medium- and short-length LDPC codes is analysed, and an improved BP decoding algorithm based on message oscillation and check-node update is proposed. The algorithm reduces the average number of iterations in the low-SNR region through an early-termination criterion, and improves decoding performance by modifying the check-node update to damp LLR oscillation. Simulation results show that, compared with the BP algorithm, the proposed algorithm reduces the average number of iterations in the low-SNR region without loss of decoding performance, while in the medium- and high-SNR regions it improves decoding performance without increasing the average number of iterations.
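For context, the simplest early-termination criterion that BP decoders build on is the syndrome check: iteration stops as soon as the hard decisions satisfy every parity check. The sketch below (Python/NumPy, with a hypothetical `bp_iteration` update) shows only that baseline rule, not the oscillation-based criterion proposed in the abstract.

```python
import numpy as np

def syndrome_check(H, llr):
    """Standard early-termination test for BP decoding:
    stop as soon as the hard decisions form a valid codeword."""
    x_hat = (llr < 0).astype(int)        # hard decision: negative LLR -> bit 1
    syndrome = H.dot(x_hat) % 2          # H is the binary parity-check matrix
    return not syndrome.any()            # all checks satisfied -> terminate

# usage inside a decoding loop (sketch; bp_iteration is hypothetical):
# for it in range(max_iter):
#     llr = bp_iteration(H, llr_channel, llr)
#     if syndrome_check(H, llr):
#         break
```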

3.
To address the limited accuracy of the decoding algorithms currently in wide use for LDPC codes, a new algorithm that improves decoding accuracy is proposed on the basis of the LLR-SPA decoding algorithm. The algorithm replaces the computationally expensive Jacobian correction term of LLR-SPA with a piecewise-linear approximation based on a Taylor series. Experiments show that the algorithm substantially improves decoding accuracy while leaving the decoding complexity essentially unchanged.
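The Jacobian correction term referred to here is the ln(1 + e^-|x|) term in the exact box-plus (check-node) update of LLR-SPA. The abstract does not give the breakpoints or slopes of the paper's Taylor-based fit, so the Python sketch below uses arbitrary assumed segments purely to illustrate the idea.

```python
import numpy as np

def jacobian_correction_exact(x):
    """Exact correction term ln(1 + exp(-|x|)) used in the box-plus update."""
    return np.log1p(np.exp(-np.abs(x)))

def jacobian_correction_pwl(x):
    """Illustrative piecewise-linear fit (breakpoints/slopes are assumed, not the paper's)."""
    ax = np.abs(x)
    return np.where(ax < 2.5, 0.6931 - 0.25 * ax, 0.0)

def box_plus(a, b, corr=jacobian_correction_pwl):
    """Check-node combination of two LLRs with a selectable correction term."""
    return (np.sign(a) * np.sign(b) * np.minimum(np.abs(a), np.abs(b))
            + corr(a + b) - corr(a - b))
```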

4.
In the context of the LDPC codes of the IEEE 802.16e communication standard, an improved BP decoding algorithm is proposed with regard to decoding performance and complexity, starting from the soft-decision LLR BP decoding algorithm and combining the min-sum processing of LDPC codes with the idea of hard-decision decoding. At the same SNR, the new BP algorithm performs very close to LLR BP while its complexity is far lower, which makes it more practical for engineering implementation.

5.
6.
The QC-LDPC codes recommended by the Consultative Committee for Space Data Systems (CCSDS) are studied, and an improved layered decoding algorithm is presented. Based on the improved layered decoding algorithm, a partially parallel QC-LDPC decoder is designed; its decoding speed is high and suits the application requirements, and the performance of the designed decoder is verified by simulation.

7.
To make up for the performance loss of the UMP BP-Based decoding algorithm relative to the LLR BP decoding algorithm, an improved UMP BP-Based decoding algorithm is proposed. The parameter of the algorithm is determined under the minimum mean-square error criterion and is applicable to the decoding of all LDPC codes. Simulation results show that, at the same bit error rate, the improved UMP BP-Based algorithm achieves better LDPC decoding performance than the UMP BP-Based, Normalized BP-Based and Offset BP-Based decoding algorithms.
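For reference, the UMP BP-Based (min-sum), Normalized BP-Based and Offset BP-Based algorithms compared here differ only in how the min-sum check-node magnitude is corrected. The paper's MMSE-derived parameter value is not stated in the abstract, so the sketch below uses placeholder constants `alpha` and `beta`.

```python
import numpy as np

def check_node_update(incoming, alpha=0.8, beta=0.15, mode="normalized"):
    """Min-sum check-node update for the LLRs arriving at one check node.
    incoming: 1-D array of variable-to-check LLRs; returns check-to-variable LLRs.
    alpha/beta are placeholder constants, not the MMSE-optimised value from the paper."""
    n = len(incoming)
    out = np.empty(n)
    for i in range(n):
        others = np.delete(incoming, i)           # extrinsic: exclude message i
        mag = np.min(np.abs(others))              # plain min-sum (UMP BP-Based) magnitude
        if mode == "normalized":
            mag *= alpha                          # Normalized BP-Based
        elif mode == "offset":
            mag = max(mag - beta, 0.0)            # Offset BP-Based
        out[i] = np.prod(np.sign(others)) * mag
    return out
```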

8.
The Mackay-Neal algorithm is a simplified BP decoding algorithm for LDPC codes, but it still involves a large number of multiplications. To reduce the computational load of decoding, an improved log-domain sum-product decoding algorithm based on the Mackay-Neal algorithm is proposed. A computational-complexity analysis shows that the improved log-domain sum-product algorithm is simpler, greatly reduces the amount of computation, and is easy to implement in hardware.

9.
The Log-MAP algorithm is a simplified decoding algorithm for Turbo codes, but it still suffers from high decoding complexity and large decoding delay. To address this problem, a simplified log-domain maximum a posteriori probability decoding algorithm is proposed. Based on approximation theory, the algorithm approximates the correction function with a piecewise optimal square-approximation polynomial. Simulation results show that the simplified algorithm has low complexity and small decoding delay, with decoding performance close to that of the standard Log-MAP algorithm, making it well suited for practical engineering use.
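The correction function in question is the one appearing in the max* (Jacobian logarithm) operation at the core of Log-MAP; only the exact relation is reproduced below, since the abstract does not give the paper's polynomial segments.

```latex
% Jacobian logarithm used in Log-MAP; the second term is the correction
% function that the paper approximates piecewise.
\max{}^*(a,b) = \ln\!\left(e^{a}+e^{b}\right)
             = \max(a,b) + \ln\!\left(1+e^{-|a-b|}\right)
```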

10.
侯宁 《计算机工程》2011,37(9):276-278,281
The Tanner graph of a short low-density parity-check (LDPC) code usually contains cycles, so the messages between variable nodes are no longer mutually independent, which degrades the performance of LLR BP decoding. To address this problem, an improved LLR BP decoding algorithm is proposed. The true message of a variable node in the presence of cycles is derived, the weights of the variable-node messages with memory are computed under the minimum mean-square error criterion, and the correlation between variable-node messages is reduced by adjusting the iteration process of the variable-node messages. Simulation results show that the improved LLR BP algorithm achieves better LDPC decoding performance than the LLR BP, Normalized BP and Offset BP algorithms.

11.
This paper is concerned with constructions of nonbinary low-density parity-check (LDPC) codes for adaptive coded modulations (ACM). A new class of efficiently encodable structured nonbinary LDPC codes is proposed. The defining parity-check matrices are composed of scalar circulant sub-matrices, which greatly reduces the storage requirement compared with random LDPC codes. With this special structure of parity-check matrix, an efficient encoding algorithm is presented. Based on the proposed codes, a family of variable-rate/variable-field nonbinary LDPC codes is designed for the ACM system. When combined with matched-size signal constellations, the family of constructed codes can achieve a wide range of spectral efficiency. Furthermore, the resultant ACM system can be implemented via a single set of encoder and decoder. Simulation results show that the proposed nonbinary LDPC codes for the ACM system perform well.
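A minimal sketch of the quasi-cyclic structure described here: a small exponent (base) matrix is expanded into circulant permutation blocks, which is what keeps the storage requirement low. The expansion below shows only the binary QC skeleton; the nonbinary construction in the paper additionally attaches a field scalar to each block, and the toy base matrix is an assumption.

```python
import numpy as np

def circulant_permutation(size, shift):
    """size x size identity matrix cyclically right-shifted by `shift` columns."""
    return np.roll(np.eye(size, dtype=int), shift, axis=1)

def expand_base_matrix(base, size):
    """Expand an exponent (base) matrix into a quasi-cyclic parity-check matrix.
    base[i][j] = -1 marks an all-zero block, otherwise it is the circulant shift.
    Binary skeleton only; the nonbinary case would also scale each block by a field element."""
    rows = []
    for base_row in base:
        blocks = [np.zeros((size, size), dtype=int) if s < 0
                  else circulant_permutation(size, s) for s in base_row]
        rows.append(np.hstack(blocks))
    return np.vstack(rows)

# toy example: a 2x4 base matrix expanded with 5x5 circulants gives a 10x20 H
H = expand_base_matrix([[0, 1, -1, 3],
                        [2, -1, 4, 0]], size=5)
```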

12.
On the basis of studying several weighted bit-flipping algorithms, a new improved weighted bit-flipping algorithm for LDPC codes is proposed. The error metric of the weighted bit-flipping (WBF) algorithm takes the reliability information of the check nodes into account; building on this, the related improved WBF (IWBF) algorithm also considers the influence of the message itself on the symbol decision, which further improves performance. However, in the IWBF algorithm the symbol-reliability weighting parameter that yields good decoding performance can only be obtained by simulation. An algorithm that takes both symbol reliability and check reliability into account is proposed, which achieves good performance without tuning a weighting parameter. Simulations show that the proposed weighted bit-flipping algorithm is feasible and effective.
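For context, the baseline weighted bit-flipping rule that these variants refine assigns each bit a flip metric that accumulates the failed/satisfied status of its checks, weighted by the reliability of the least reliable bit in each check. The sketch below implements only that classic WBF baseline, not the parameter-free variant proposed in the abstract.

```python
import numpy as np

def wbf_decode(H, y, max_iter=50):
    """Baseline weighted bit-flipping decoding sketch.
    H: binary parity-check matrix (NumPy array), y: received BPSK channel values.
    This is the classic WBF rule, not the parameter-free variant from the abstract."""
    x = (y < 0).astype(int)                          # initial hard decisions
    # reliability weight of each check = smallest |y| among the bits it involves
    w = np.array([np.min(np.abs(y[H[m] == 1])) for m in range(H.shape[0])])
    for _ in range(max_iter):
        s = H.dot(x) % 2                             # syndrome (1 = failed check)
        if not s.any():
            break                                    # valid codeword found
        E = (2 * s - 1) * w                          # failed checks add weight, satisfied subtract
        flip_metric = H.T.dot(E)                     # accumulate over the checks of each bit
        x[np.argmax(flip_metric)] ^= 1               # flip the most suspicious bit
    return x
```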

13.
怀钰  戴逸民 《计算机仿真》2010,27(5):309-313
Aiming at the use of a pipelined architecture in decoders for structured LDPC codes, the layered min-sum decoding algorithm is analysed. To further improve decoder performance, a modified layered min-sum algorithm is proposed so that decoders for structured LDPC codes can use a pipelined architecture to increase system throughput. Based on the modified algorithm, a low-complexity decoder architecture is designed, and the design of its two key modules, the serial check-node processor and the flexible permutation network, is described in detail. The impact of the pipelined decoder on processing delay is analysed, and the performance of different decoding algorithms at the same code length is simulated. The simulation results show that the modified algorithm loses almost no performance compared with the min-sum decoding algorithm, while the pipelined decoder increases throughput by a factor of 2 to 3 and can flexibly support structured LDPC codes of various lengths and rates.
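Layered scheduling, as used here, updates the a-posteriori LLRs after each layer of checks rather than once per full iteration, which is what makes pipelining attractive. The sketch below shows a generic layered min-sum schedule with one check row per layer; it is not the modified, pipeline-friendly variant designed in the paper.

```python
import numpy as np

def layered_min_sum(H, llr_ch, max_iter=20):
    """Generic layered min-sum decoding sketch (one layer = one check row here)."""
    m, n = H.shape
    q = llr_ch.astype(float).copy()        # a-posteriori LLRs, refreshed layer by layer
    r = np.zeros((m, n))                   # stored check-to-variable messages
    for _ in range(max_iter):
        for row in range(m):               # process checks sequentially (layered schedule)
            idx = np.where(H[row] == 1)[0]
            t = q[idx] - r[row, idx]       # variable-to-check messages for this layer
            for k, j in enumerate(idx):    # min-sum update, excluding each target bit
                others = np.delete(t, k)
                r[row, j] = np.prod(np.sign(others)) * np.min(np.abs(others))
            q[idx] = t + r[row, idx]       # immediately refresh posteriors for this layer
        if not (H.dot((q < 0).astype(int)) % 2).any():
            break                          # hard decisions already satisfy all checks
    return (q < 0).astype(int)
```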

14.
To study the relative performance of LDPC codes in communication engineering applications, mathematical modelling, simulation and algorithm analysis are employed. Through MATLAB simulation experiments, a series of results, in terms of bit error rate and other metrics, is obtained for three common code constructions, including Gallager codes, under three decoding conditions: hard decision, sum-product, and log-domain BP, and conclusions are drawn.

15.
The basic principle and implementation of distributed source coding are introduced, and the encoding and decoding principles of RA codes are explained. RA codes are then applied to distributed source coding and simulated, and their performance over a BSC channel is compared with that of conventional LDPC-based distributed source coding. The simulation results show that distributed source coding based on RA codes performs slightly worse than the LDPC-based approach, but the encoding complexity is reduced from O(n²) to O(n). A distributed source encoder based on RA codes is therefore better suited to real-time services and low-power scenarios, can reduce cost, and has promising application prospects.
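The O(n) encoding complexity quoted for RA codes follows from their repeat-interleave-accumulate structure, in which every stage is a single linear pass over the data. The sketch below is a generic RA encoder with an assumed repetition factor and a pseudo-random interleaver, shown only to illustrate the linear-time structure.

```python
import numpy as np

def ra_encode(info_bits, repeat=3, seed=0):
    """Repeat-accumulate encoding sketch: repeat, interleave, then accumulate.
    Every stage is one linear pass, which is why encoding is O(n)."""
    rng = np.random.default_rng(seed)
    repeated = np.repeat(info_bits, repeat)                  # repetition code
    interleaved = repeated[rng.permutation(repeated.size)]   # fixed pseudo-random interleaver
    return np.bitwise_xor.accumulate(interleaved)            # accumulator: running parity (mod 2)

codeword = ra_encode(np.array([1, 0, 1, 1], dtype=int))
```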

16.
A distributed video decoding method based on multi-level low-density parity-check (multi-level LDPC) codes is improved. Decoding proceeds iteratively layer by layer, with the lower layers passing soft decoding information (likelihood ratios) to the higher layers, and the intrinsic information of the iterative decoding is obtained by computing the Euclidean distance between the side information and the codeword. Compared with the conventional multi-level decoding scheme, the proposed scheme reduces the information loss caused by layer-by-layer hard decisions, thereby reducing error propagation from lower layers to higher layers, and is better suited to syndrome coding of video data. Experimental results show that the proposed scheme improves the decoding success rate of multi-level codes in distributed video coding and thus improves coding efficiency.

17.
Under AWGN channel conditions, when the belief-propagation algorithm is used to decode LDPC codes, the channel SNR must be estimated accurately in order to compute the a posteriori probability messages of the received bits, which serve as the decoder input. An erroneous estimate of the SNR is called SNR mismatch. This paper studies the effect of SNR mismatch on LDPC decoding over the AWGN channel. By approximating the check-node update equation of the belief-propagation algorithm, a correction-factor function with the SNR as its variable is obtained. Based on...
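For BPSK over AWGN, the decoder input LLR is normally L = 2y/sigma^2, so a mismatched noise-variance estimate simply rescales every input LLR by the ratio of true to estimated variance. The sketch below illustrates only that scaling effect, not the correction-factor function derived in the paper.

```python
import numpy as np

def channel_llrs(y, sigma2_est):
    """Input LLRs for BP decoding of BPSK over AWGN, computed with an
    *estimated* noise variance. A mismatched estimate rescales all LLRs."""
    return 2.0 * y / sigma2_est

y = np.array([0.9, -1.2, 0.3])
true_sigma2 = 0.5
llr_matched = channel_llrs(y, true_sigma2)
llr_mismatched = channel_llrs(y, 2 * true_sigma2)   # SNR under-estimated by 3 dB
# llr_mismatched == llr_matched / 2: every message entering BP is scaled down
```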

18.
    
Low-density parity-check (LDPC) codes exhibit near-capacity performance in terms of error correction. Large hardware cost, limited flexibility in terms of code length/code rate, and considerable power consumption limit the use of belief-propagation based LDPC decoders in area- and energy-sensitive mobile environments. Serial bit-flipping algorithms offer a trade-off between resource utilization and error-correction performance at the expense of an increased number of decoding iterations required for convergence. Parallel weighted bit-flipping decoding and its variants aim at reducing the decoding iterations and time by flipping potentially erroneous bits in parallel. However, in most of the existing parallel decoding methods, the flipping threshold requires complex computations. In this paper, Hybrid Weighted Bit Flipping (HWBF) decoding is proposed to allow multiple bit flips in each decoding iteration. To compute the number of bits that can be flipped in parallel, a criterion for determining the relationship between the erroneous bits in the received code word is proposed. Using the proposed relation, the scheme can detect and correct a maximum of three erroneous hard-decision bits per iteration. The simulation results show that, compared with existing serial bit-flipping decoding methods, the proposed HWBF decoding reduces the number of iterations required for convergence by 45% and the decoding time by 40%. Compared with existing parallel bit-flipping decoding methods, the proposed HWBF decoding achieves a similar bit error rate (BER) with the same number of iterations and lower computational complexity. Owing to the reduced number of decoding iterations, lower computational complexity and shorter decoding time, the proposed HWBF decoding can be useful on energy-sensitive mobile platforms.

19.
We study the multicast stream authentication problem when an opponent can drop, reorder and introduce data packets into the communication channel. In such a model, packet overhead and computing efficiency are two parameters to be taken into account when designing a multicast stream protocol. In this paper, we propose to use two families of erasure codes to deal with this problem, namely, rateless codes and maximum distance separable codes. Our constructions have the following advantages. First, our packet overhead is small. Second, the number of signature verifications to be performed at the receiver is O(1). Third, every receiver is able to recover all the original data packets emitted by the sender despite the losses and injections that occur during the transmission of information. This work was supported by the Australian Research Council under ARC Discovery Projects DP0558773, DP0665035 and DP0663452. This work was supported in part by the National Natural Science Foundation of China Grant 60553001 and the National Basic Research Program of China Grants 2007CB807900 and 2007CB807901. Christophe Tartary did most of this work while at Macquarie University, where his research was supported by an iMURS scholarship. The research of Huaxiong Wang is partially supported by the Ministry of Education of Singapore under grant T206B2204. This paper is the extended version of the articles [65,64] appearing in the proceedings of IWSEC 2006 and CANS 2006.
