20 similar documents found; search took 15 ms
1.
Compared with min-sum decoding, bit-flipping (BF) decoding, another decoding algorithm for LDPC codes, is simpler to implement but suffers a considerable performance loss. The recently proposed noisy gradient descent bit-flipping (NGDBF) algorithm clearly outperforms the simple bit-flipping algorithm, but flipping only one bit at a time limits its application. Borrowing the flip-flag idea from parallel weighted bit-flipping (PWBF) decoding, this paper proposes an improved NGDBF algorithm, parallel NGDBF decoding, together with an adaptive variant, which overcomes PWBF's poor performance on LDPC codes with small row/column weights. Simulations show that parallel NGDBF decoding outperforms the corresponding NGDBF decoding, and that its adaptive form not only approaches min-sum decoding in performance but is also simple to implement.
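As a rough illustration of the parallel NGDBF idea (flip every bit whose inversion metric falls below a threshold, with a small random perturbation added to escape local minima), here is a minimal sketch. The metric form, the parameter values (`w`, `theta`, `sigma`), and the single-bit fallback rule are illustrative assumptions, not taken from the paper:

```python
import numpy as np

def parallel_ngdbf(H, y, w=0.75, theta=-0.6, sigma=0.5, max_iter=50, seed=0):
    """Sketch of parallel noisy gradient descent bit-flipping (NGDBF).

    H : (m, n) binary parity-check matrix
    y : (n,) received bipolar channel values (BPSK, bit 0 -> +1)
    w : syndrome weight; theta : flip threshold; sigma : perturbation scale
    (all parameter values here are illustrative, not from the paper)
    """
    rng = np.random.default_rng(seed)
    x = np.where(y >= 0, 1, -1)              # bipolar hard decision
    for _ in range(max_iter):
        bits = (x < 0).astype(int)           # map +1/-1 -> 0/1
        syn = H @ bits % 2                   # 1 where a check is unsatisfied
        if not syn.any():
            break                            # valid codeword found
        s = 1 - 2 * syn                      # bipolar syndromes: +1 / -1
        # inversion metric: channel term + weighted check sum + noise
        e = x * y + w * (H.T @ s) + rng.normal(0.0, sigma, x.shape)
        flip = e < theta                     # flip all bits below threshold
        if not flip.any():
            flip = e == e.min()              # fallback: flip the worst bit
        x[flip] = -x[flip]
    return (x < 0).astype(int)
```

With a weakly received erroneous bit, a single parallel flip typically corrects it in the first iteration.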
2.
3.
An initial bootstrap step for the decoding of low-density parity-check (LDPC) codes is proposed. Decoding is initiated by first erasing a number of less reliable bits. New values and reliabilities are then assigned to erasure bits by passing messages from nonerasure bits through the reliable check equations. The bootstrap step is applied to the weighted bit-flipping algorithm to decode a number of LDPC codes. Large improvements in both performance and complexity are observed.
4.
Research on weighted bit-flipping decoding algorithms for LDPC codes
Based on iterative message passing on the Tanner graph, this paper analyzes Gallager's first decoding scheme for LDPC codes and presents a checksum-based hard-decision bit-flipping decoding algorithm. Building on this, the received signal is introduced as a reliability estimate, and the estimate is used as a weighting coefficient in the hard decision, yielding a checksum-based weighted bit-flipping decoding algorithm. The weighted bit-flipping algorithm fully exploits the information in the received symbols; to locate the flip bit quickly, a maximum-vote ranking over the number of unsatisfied check equations is used. Applied together, these measures improve the performance of checksum-based bit-flipping decoding.
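The weighted bit-flipping step described above is often formulated as follows; this sketch assumes the common convention in which each check is weighted by the smallest channel magnitude among its participating bits, and the single bit with the largest weighted count of unsatisfied checks is flipped per iteration:

```python
import numpy as np

def wbf_decode(H, y, max_iter=50):
    """Sketch of checksum-based weighted bit-flipping (WBF) decoding.

    H : (m, n) binary parity-check matrix
    y : (n,) received bipolar channel values (BPSK, bit 0 -> +1)
    """
    m, n = H.shape
    bits = (y < 0).astype(int)               # hard decision
    absy = np.abs(y)
    # reliability of check i = least reliable bit participating in it
    w = np.array([absy[H[i] == 1].min() for i in range(m)])
    for _ in range(max_iter):
        syn = H @ bits % 2
        if not syn.any():
            break                            # all checks satisfied
        # weighted vote: unsatisfied checks add +w_i, satisfied add -w_i
        e = H.T @ ((2 * syn - 1) * w)
        j = int(np.argmax(e))                # most suspicious bit
        bits[j] ^= 1                         # flip one bit per iteration
    return bits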
5.
6.
To improve the performance of low-complexity hard-decision decoding of low-density parity-check (LDPC) codes, an improved bit-flipping (BF) decoding algorithm is proposed. It flips multiple bits per iteration using an alternating threshold scheme, which lowers the probability that bits are flipped incorrectly in each iteration and thus effectively improves decoding performance. Simulation results show that, compared with the BF algorithm, the proposed algorithm achieves better decoding performance and faster convergence while retaining low complexity.
7.
8.
A low-complexity bit-flipping decoding algorithm for LDPC codes based on intrinsic reliability
A low-complexity, hardware-friendly bit-flipping decoding algorithm for LDPC codes is proposed. The algorithm makes full use of each variable node's intrinsic information to compute the flip decision function, reducing the need for soft information from other variable nodes and thereby greatly lowering the hardware implementation complexity, while keeping the flip decision function highly reliable. Simulation results on RS-based LDPC codes show that the decoding performance of the improved algorithm approaches, and even slightly exceeds, that of the IMWBF algorithm.
9.
Hybrid Iterative Decoding for Low-Density Parity-Check Codes Based on Finite Geometries
Jian Li, Xian-Da Zhang, IEEE Communications Letters, 2008, 12(1):29-31
In this letter, a two-stage hybrid iterative decoding algorithm which combines two iterative decoding algorithms is proposed to reduce the computational complexity of finite geometry low-density parity-check (FG-LDPC) codes. We introduce a fast weighted bit-flipping (WBF) decoding algorithm for the first stage of decoding. If the first stage fails, decoding is continued by the powerful belief propagation (BP) algorithm. The proposed hybrid decoding algorithm greatly reduces the computational complexity while maintaining the same performance as using the BP algorithm alone.
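The two-stage control flow described above reduces to "run the cheap decoder first, and invoke the strong one only on failure." A minimal sketch, with `stage1` and `stage2` as placeholders for e.g. a fast WBF decoder and a BP decoder (the function names and the return convention are assumptions for illustration):

```python
import numpy as np

def hybrid_decode(H, y, stage1, stage2):
    """Sketch of two-stage hybrid decoding: try the cheap decoder first,
    fall back to the stronger one only when its output fails the
    syndrome check."""
    bits = stage1(H, y)
    if not (H @ bits % 2).any():          # stage-1 output is a codeword
        return bits, "stage1"
    return stage2(H, y), "stage2"
```

Since most received words at moderate SNR are decodable by the cheap stage, the average complexity approaches that of stage 1 while the error floor is set by stage 2.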
10.
In this letter, a modified weighted bit-flipping decoding algorithm for low-density parity-check codes is proposed. Improvement in performance is observed by considering both the check constraint messages and the intrinsic message for each bit.
11.
In secret key generation based on wireless channel characteristics, key reconciliation is commonly used to reduce the key disagreement rate between the two communicating parties. Reconciliation is usually performed over a BSC, but its efficiency is low. To improve reconciliation efficiency, this paper proposes a protocol mechanism based on LDPC encoding/decoding over an equivalent channel. In this mechanism, the key extraction process over the wireless channel using the Level-Crossing Algorithm (LCA) proposed by Mathur et al. [1][2] constitutes the equivalent channel; the mechanism models the reconciliation information of the LCA-extracted keys, derives the optimal likelihood ratio of the equivalent channel, and accordingly applies simple LDPC decoding algorithms such as weighted bit-flipping [3-5] for efficient key reconciliation. The performance of LDPC weighted bit-flipping decoding over the equivalent channel is compared by simulation with reconciliation over the BSC. For SNR above 6 dB, the results show that: 1) under the same low threshold, compared with the initial keys generated by the LCA, reconciliation using LDPC bit-flipping decoding over either the equivalent channel or the BSC reduces the key disagreement rate by one to two orders of magnitude; 2) reconciliation with the simple weighted bit-flipping decoding algorithm over the equivalent channel reduces the key disagreement rate by about one order of magnitude relative to reconciliation over the BSC.
12.
For one class of low-density parity-check (LDPC) codes with low row weight in their parity-check matrix, a new syndrome decoding (SD) scheme based on heuristic beam search (BS), labeled SD-BS, is put forward to improve the error performance. First, two observations are made and verified by simulation. One is that in the SNR region of interest, the hard decision on the corrupted sequence yields only a handful of erroneous bits. The other is that, provided sufficient beam width, the true error pattern for a nonzero syndrome has a high probability of surviving the competition in the beam search. Bearing these two points in mind, the decoding of LDPC codes is transformed into seeking an error pattern consistent with the known decoding syndrome. Second, the effectiveness of SD-BS depends closely on how the bit reliability is evaluated. Enlightened by a bit-flipping definition in the existing literature, a new metric is employed in the proposed SD-BS. The strength of SD-BS is demonstrated by applying it directly to corrupted sequences and to the decoding failures of belief propagation (BP), respectively.
13.
The problem of improving the performance of min-sum decoding of low-density parity-check (LDPC) codes is considered in this paper. Based on the min-sum algorithm, a novel modified min-sum decoding algorithm for LDPC codes is proposed. The proposed algorithm modifies the variable-node message in the iteration process by averaging the new message and the previous message when their signs differ. Compared with the standard min-sum algorithm, the modification is achieved with only a small increase in complexity, but it significantly improves decoding performance for both regular and irregular LDPC codes. Simulation results show that the performance of the modified decoding algorithm is very close to that of the standard sum-product algorithm for moderate-length LDPC codes.
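The sign-based averaging rule described above can be written as a one-line message update; a minimal vectorized sketch (the function name is hypothetical, and damping by exactly one half follows the averaging described in the abstract):

```python
import numpy as np

def damp_sign_flips(v_new, v_old):
    """When a variable-to-check message flips sign between iterations,
    replace it by the average of the new and previous values to damp
    oscillation; otherwise keep the new message unchanged."""
    flipped = v_new * v_old < 0              # sign changed since last iteration
    return np.where(flipped, 0.5 * (v_new + v_old), v_new)
```

Applied element-wise to every variable-node message each iteration, this adds only one compare and one average per message over plain min-sum.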
14.
An improved decoding algorithm for finite-geometry LDPC codes
In this letter, an improved bit-flipping decoding algorithm for high-rate finite-geometry low-density parity-check (FG-LDPC) codes is proposed. Both improvement in performance and reduction in decoding delay are observed by flipping multiple bits in each iteration. Our studies show that the proposed algorithm achieves an appealing tradeoff between performance and complexity for FG-LDPC codes.
15.
Implementation-efficient reliability ratio based weighted bit-flipping decoding for LDPC codes
It was recently shown that the reliability-ratio-based weighted bit-flipping (RRWBF) decoding algorithm for low-density parity-check (LDPC) codes performs best among existing bit-flipping-based algorithms. A new version of this algorithm is proposed that significantly reduces decoding time, especially when the iteration count is small and the code length is large. Simulations on a UNIX workstation showed speedups of up to 2322.39%, 823.90%, 511.79%, and 261.92% over the original algorithm for 10, 30, 50, and 100 iterations, respectively. The new version is thus much more efficient for simulation and hardware implementation. Moreover, it offers a more intuitive interpretation of the RRWBF algorithm's superior performance over other bit-flipping-based algorithms.
16.
17.
Pishro-Nik H., Fekri F., IEEE Transactions on Information Theory, 2004, 50(3):439-454
This paper investigates decoding of low-density parity-check (LDPC) codes over the binary erasure channel (BEC). We study the iterative and maximum-likelihood (ML) decoding of LDPC codes on this channel and derive bounds on ML decoding of LDPC codes on the BEC. We then present an improved decoding algorithm that has almost the same complexity as standard iterative decoding but better performance. Simulations show that the proposed algorithm can decrease the error rate by several orders of magnitude. We also provide some graph-theoretic properties of different decoding algorithms of LDPC codes over the BEC which we believe are useful for better understanding LDPC decoding methods, in particular for finite-length codes.
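The standard iterative decoder over the BEC that the improved algorithm builds on is the peeling decoder: repeatedly find a check equation with exactly one erased bit and recover that bit as the XOR of the check's known bits. A minimal sketch (the erasure sentinel `-1` and the function name are illustrative choices):

```python
import numpy as np

def bec_peel(H, r):
    """Sketch of standard iterative (peeling) decoding on the BEC.

    H : (m, n) binary parity-check matrix
    r : (n,) received word with 0/1 for known bits and -1 for erasures
    """
    r = r.copy()
    progress = True
    while progress and (r == -1).any():
        progress = False
        for row in H:
            idx = np.flatnonzero(row)
            erased = idx[r[idx] == -1]
            if len(erased) == 1:                  # degree-one check: solvable
                known = idx[r[idx] != -1]
                r[erased[0]] = r[known].sum() % 2  # XOR of the known bits
                progress = True
    return r                                       # -1s remain on a stopping set
```

The decoder halts without finishing exactly when the remaining erasures form a stopping set, which is what the improved algorithm in the abstract targets.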
18.
Pusane A.E., Feltstrom A.J., Sridharan A., Lentmaier M., Zigangirov K., Costello D.J., IEEE Transactions on Communications, 2008, 56(7):1060-1069
Potentially large storage requirements and long initial decoding delays are two practical issues related to the decoding of low-density parity-check (LDPC) convolutional codes using a continuous pipeline decoder architecture. In this paper, we propose several reduced complexity decoding strategies to lessen the storage requirements and the initial decoding delay without significant loss in performance. We also provide bit error rate comparisons of LDPC block and LDPC convolutional codes under equal processor (hardware) complexity and equal decoding delay assumptions. A partial syndrome encoder realization for LDPC convolutional codes is also proposed and analyzed. We construct terminated LDPC convolutional codes that are suitable for block transmission over a wide range of frame lengths. Simulation results show that, for terminated LDPC convolutional codes of sufficiently large memory, performance can be improved by increasing the density of the syndrome former matrix.
19.
Hardware implementations of low-density parity-check (LDPC) decoders mostly adopt the normalized min-sum (NMS) algorithm for its low computational complexity. For low-rate LDPC codes, however, the low check-node degrees make the NMS correction error large, so its decoding performance falls noticeably short of the standard belief propagation (BP) algorithm. For a class of low-rate protograph-based LDPC codes, this paper proposes combining oscillation cancellation (OSC) processing and multiple-factor (MF) correction in NMS iterative decoding. Because the row-weight distribution of low-rate protograph LDPC codes varies widely, MF correction effectively reduces the computation error and thus improves decoding performance. In addition, low-rate protograph LDPC codes converge slowly, and OSC processing suppresses positive-feedback messages, further increasing the performance gain of the NMS algorithm. Simulation results show that for such low-rate LDPC codes the MF-OSC-NMS algorithm approaches BP performance. OSC processing and MF correction are simple to implement in hardware and add almost no computational complexity over NMS, so MF-OSC-NMS offers a good trade-off between decoding complexity and performance.
20.
Daskalakis C., Dimakis A.G., Karp R.M., Wainwright M.J., IEEE Transactions on Information Theory, 2008, 54(8):3565-3578
We initiate the probabilistic analysis of linear programming (LP) decoding of low-density parity-check (LDPC) codes. Specifically, we show that for a random LDPC code ensemble, the linear programming decoder of Feldman succeeds in correcting a constant fraction of errors with high probability. The fraction of correctable errors guaranteed by our analysis surpasses previous nonasymptotic results for LDPC codes, and in particular, exceeds the best previous finite-length result on LP decoding by a factor greater than ten. This improvement stems in part from our analysis of probabilistic bit-flipping channels, as opposed to adversarial channels. At the core of our analysis is a novel combinatorial characterization of LP decoding success, based on the notion of a flow on the Tanner graph of the code. An interesting by-product of our analysis is to establish the existence of "probabilistic expansion" in random bipartite graphs, in which one requires only that almost every (as opposed to every) set of a certain size expands, for sets much larger than in the classical worst-case setting.