Similar Articles
1.
This paper proposes two low-complexity symbol-flipping decoding algorithms for non-binary low-density parity-check (LDPC) codes: an improved non-binary weighted decoding algorithm (Iwtd-AlgB) and a symbol-flipping algorithm with a truncated prediction mechanism (TD-SFDP). Iwtd-AlgB replaces the multiplicative operations of each iteration with a simple sum of the extrinsic-information frequency and a distance coefficient; TD-SFDP combines the extrinsic-information frequency with the properties of the flipping function to truncate and partition the decoding nodes and finite-field symbols, so that only the nodes and symbols satisfying the selection condition take part in the computation and flipping prediction. Simulation and numerical results show that, with a controllable performance loss, both algorithms reduce the number of operations per iteration and achieve a trade-off between performance and complexity.

2.
张用宇 《通信技术》2015,48(11):1222-1227
A low-complexity symbol-flipping decoding algorithm based on a flipping rule is proposed for non-binary low-density parity-check (LDPC) codes. To search for a valid codeword, the algorithm iteratively updates the hard-decision received symbol vector in the symbol vector space. Only one symbol is changed per iteration; the symbol-flipping function jointly considers the number of unsatisfied check equations and a symbol reliability measure computed from the received bits. A method for avoiding infinite loops and selecting the symbol to flip over a high-order Galois field is adopted, and a design method for the flipping rule, which determines the computational complexity and error performance, is presented. Simulation results show that, for a GF(16) LDPC code with a frame length of 150 symbols, the proposed symbol-flipping algorithm achieves an effective trade-off between error-correction performance and computational complexity.
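
A minimal sketch of this style of hard-decision symbol flipping, here over GF(4) with addition as XOR and multiplication via log/antilog tables: each iteration scores every candidate (position, value) pair by the net number of parity checks it would newly satisfy and changes a single symbol. The paper's bit-level reliability term, loop-avoidance rule and flipping-rule design are omitted, and the parity-check matrix and helper names below are illustrative assumptions, not material from the paper.

```python
# Hard-decision symbol flipping over GF(4) (illustrative sketch only).
GF4_EXP = [1, 2, 3]               # successive powers of the primitive element
GF4_LOG = {1: 0, 2: 1, 3: 2}

def gf4_mul(a, b):
    """Multiply two GF(4) elements given as integers 0..3."""
    if a == 0 or b == 0:
        return 0
    return GF4_EXP[(GF4_LOG[a] + GF4_LOG[b]) % 3]

def syndrome(H, c):
    """Syndrome over GF(4); addition in GF(2^2) is bitwise XOR."""
    s = []
    for row in H:
        acc = 0
        for h, x in zip(row, c):
            acc ^= gf4_mul(h, x)
        s.append(acc)
    return s

def symbol_flip_decode(H, received, max_iter=20):
    c = list(received)
    for _ in range(max_iter):
        s = syndrome(H, c)
        if all(v == 0 for v in s):
            return c, True                         # valid codeword found
        best = None                                # (gain, position, new value)
        for n in range(len(c)):
            for a in range(4):
                if a == c[n]:
                    continue
                gain = 0
                for m, row in enumerate(H):
                    if row[n] == 0:
                        continue
                    # check m after replacing the old contribution of symbol n
                    new_sm = s[m] ^ gf4_mul(row[n], c[n]) ^ gf4_mul(row[n], a)
                    gain += (s[m] != 0) - (new_sm != 0)
                if best is None or gain > best[0]:
                    best = (gain, n, a)
        _, n, a = best
        c[n] = a                                   # flip exactly one symbol per iteration
    return c, False

# toy parity-check matrix over GF(4) and a corrupted hard-decision vector
H = [[1, 2, 0, 1, 0],
     [0, 1, 3, 0, 1],
     [2, 0, 1, 1, 1]]
print(symbol_flip_decode(H, [1, 0, 2, 3, 1]))
```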

3.
刘原华  张美玲 《电讯技术》2012,52(4):488-491
To improve the performance of low-complexity hard-decision decoding of low-density parity-check (LDPC) codes, an improved bit-flipping (BF) decoding algorithm is proposed. It flips multiple bits per iteration using an alternating threshold pattern, which lowers the probability of flipping correct bits in each iteration and thus effectively improves decoding performance. Simulation results show that, compared with the BF algorithm, the proposed algorithm achieves better decoding performance and faster convergence while retaining low complexity.
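
For context, a minimal sketch of the plain hard-decision bit-flipping loop that threshold-based variants like this one modify: count, for every bit, the number of unsatisfied parity checks it participates in, and flip the bit(s) with the largest count until the syndrome clears. The alternating threshold schedule of the paper is not modeled; the (7,4) parity-check matrix and parameter names are placeholders.

```python
import numpy as np

def bit_flip_decode(H, received, max_iter=30):
    """Plain hard-decision bit flipping (Gallager-style), illustrative only."""
    H = np.asarray(H, dtype=np.uint8)
    c = np.asarray(received, dtype=np.uint8).copy()
    for _ in range(max_iter):
        s = H.dot(c) % 2                     # unsatisfied checks have syndrome bit 1
        if not s.any():
            return c, True                   # all parity checks satisfied
        counts = H.T.dot(s)                  # unsatisfied checks touching each bit
        c[counts == counts.max()] ^= 1       # flip the most suspicious bit(s)
    return c, False

# toy (7,4) parity-check matrix; all-zero codeword with a single bit error
H = [[1, 1, 0, 1, 1, 0, 0],
     [1, 0, 1, 1, 0, 1, 0],
     [0, 1, 1, 1, 0, 0, 1]]
received = np.zeros(7, dtype=np.uint8)
received[2] ^= 1
print(bit_flip_decode(H, received))
```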

4.
To improve the performance of symbol-flipping decoding for non-binary low-density parity-check (LDPC) codes while lowering decoding complexity, an average-probability and stopping-criterion weighted symbol-flipping (APSCWSF) decoding algorithm is proposed. The algorithm uses the average probability information of the symbol nodes adjacent to each check node as a weight, which makes the flipping function more effective, raises the flipping efficiency of the symbols, and thereby improves decoding performance; an iteration stopping criterion further speeds up convergence. Simulation results show that, over an additive white Gaussian noise channel at a symbol error rate of 10^-5, the APSCWSF algorithm (Osc=10) gains about 0.68 dB, 0.83 dB, and 0.96 dB over the WSF algorithm, the NSCWSF algorithm (Osc=10), and the NSCWSF algorithm (Osc=6), respectively. Meanwhile, the average number of iterations of the APSCWSF algorithm (Osc=6) is reduced by 78.60%-79.32%, 74.89%-75.95%, and 67.20%-70.80%, respectively.

5.
Hard-decision decoding of LDPC codes is usually implemented with the bit-flipping (BF) algorithm or its improved weighted bit-flipping (WBF) variant, but the former performs poorly while the latter has high complexity. To balance performance and complexity, an improved hard-decision decoding algorithm for LDPC codes is proposed that builds on the BF and WBF algorithms and flips multiple bits during the first two iterations. Simulation results show that the improved algorithm greatly reduces complexity with only a small performance loss, thereby raising decoding efficiency and easing the hardware burden.
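
A minimal sketch of the classical weighted bit-flipping (WBF) metric that such BF/WBF trade-off studies start from: every unsatisfied check contributes to a bit's flip metric with a weight equal to the smallest channel magnitude in that check, and the bit with the largest metric is flipped. The multi-bit rule for the first two iterations described above is not reproduced; the matrix, the LLR values and the stopping rule are illustrative assumptions.

```python
import numpy as np

def wbf_decode(H, llr, max_iter=50):
    """Classical weighted bit flipping (WBF), illustrative sketch."""
    H = np.asarray(H, dtype=np.uint8)
    llr = np.asarray(llr, dtype=float)
    c = (llr < 0).astype(np.uint8)                     # hard decisions
    mag = np.abs(llr)
    # reliability of each check = smallest |LLR| among the bits it involves
    w = np.array([mag[row.astype(bool)].min() for row in H])
    for _ in range(max_iter):
        s = H.dot(c) % 2
        if not s.any():
            return c, True
        # flip metric: unsatisfied checks add their weight, satisfied checks subtract it
        E = H.T.dot((2.0 * s - 1.0) * w)
        c[int(np.argmax(E))] ^= 1                      # flip the least reliable bit
    return c, False

# toy example: all-zero codeword sent, bit 2 received with low reliability
H = [[1, 1, 0, 1, 1, 0, 0],
     [1, 0, 1, 1, 0, 1, 0],
     [0, 1, 1, 1, 0, 0, 1]]
llr = [2.1, 1.8, -0.4, 2.5, 1.9, 2.2, 1.7]
print(wbf_decode(H, llr))
```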

6.
朱方强  王中训  刘丽  王娟 《电视技术》2011,35(13):79-82
A bit-flipping (BF) decoding algorithm for low-density parity-check codes based on loop detection is proposed. By detecting loops among the flipped bits and making soft decisions on the reliability information of the received symbols, decoding performance is greatly improved. Theoretical analysis shows that the decoding complexity is low, and simulation results show that the improved algorithm outperforms the weighted bit-flipping LP-WBF algorithm by about 0.3 dB, a clear improvement in error performance.

7.
An improved BP decoding algorithm for systematic RA codes based on a WBF strategy (total citations: 1; self-citations: 0; cited by others: 1)
刘星成  叶远生 《电子学报》2010,38(7):1541-1546
To address the high complexity or degraded error-correction performance of the decoding algorithms for repeat-accumulate (RA) codes (the BP algorithm and the min-sum algorithm), the weighted bit-flipping (WBF) idea is applied to improve the BP algorithm, yielding an improved BP decoding algorithm based on a WBF strategy. If an iteration fails to produce a valid codeword, bits are flipped according to a prescribed rule in the hope of obtaining one. Simulation results show that the algorithm effectively reduces the computational complexity of systematic RA codes while maintaining excellent decoding performance.

8.
Research on two typical fast decoding algorithms for quasi-cyclic LDPC codes (total citations: 1; self-citations: 0; cited by others: 1)
This paper compares two typical high-speed decoding algorithms, the turbo-style sum-product algorithm and the parallel weighted bit-flipping algorithm, in terms of decoding throughput, hardware implementation complexity, and bit error rate. Taking quasi-cyclic LDPC codes as the object of study, it presents the implementation timing, hardware complexity, and bit-error-rate performance of both algorithms; the efficient timing structure of the parallel weighted bit-flipping algorithm is given here for the first time. Computer simulations show that both algorithms achieve good performance with a small number of iterations.

9.
An improved multi-bit flipping decoding algorithm for LDPC codes (total citations: 3; self-citations: 0; cited by others: 3)
The bit-flipping (BF) decoding algorithm for low-density parity-check (LDPC) codes has low complexity and strong practicality. After studying the simple bit-flipping (BF), weighted bit-flipping (WBF), and reliability-ratio weighted bit-flipping (RRWBF) algorithms, a BF decoding algorithm is proposed that jointly considers the magnitude of the received symbols and the reliability ratio. The algorithm can flip multiple bits per iteration. Simulation results show that, compared with the RRWBF algorithm, the improved algorithm lowers the bit error rate from the order of 10^-3 to 10^-4 at a signal-to-noise ratio of 5 dB.

10.
Optimum soft decoding of sources compressed with variable-length codes and quasi-arithmetic codes, transmitted over noisy channels, can be performed on a bit/symbol trellis. However, the number of states of the trellis is a quadratic function of the sequence length, leading to a decoding complexity that is not tractable for practical applications. The decoding complexity can be significantly reduced by using an aggregated state model, while still achieving close to optimum performance in terms of bit error rate and frame error rate. However, symbol a posteriori probabilities cannot be directly derived on these models, and the symbol error rate (SER) may not be minimized. This paper describes a two-step decoding algorithm that achieves close to optimal decoding performance in terms of SER on aggregated state models. A performance and complexity analysis of the proposed algorithm is given.

11.
许可  万建伟  王玲 《信号处理》2010,26(8):1217-1221
Over the additive white Gaussian noise (AWGN) channel, Turbo decoding with the maximum a posteriori (MAP) algorithm achieves the lowest bit error rate. To reduce the computational load and enable fast decoding, the Log-MAP, Max-Log-MAP, and linear Max-Log-MAP algorithms simplify the MAP algorithm to different degrees. This paper briefly reviews MAP-based Turbo decoding, summarizes and compares the three simplified MAP-type algorithms from the perspective of the correction function, uses the correction function to analyze their performance and their sensitivity to SNR-estimation errors theoretically, and verifies the analysis by simulation. Taking both decoding performance and computational load into account, an algorithm-selection scheme for Turbo decoding is proposed, together with practical and simple recommendations for Turbo decoding parameter settings.
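
The "correction function" being compared is the Jacobian logarithm term in max*(a, b) = ln(e^a + e^b) = max(a, b) + ln(1 + e^(-|a-b|)): Log-MAP keeps it exactly (often via a lookup table), Max-Log-MAP drops it, and the linear variant replaces it with a clipped linear function. A small sketch follows; the linear constants are one common choice, not necessarily the paper's values.

```python
import numpy as np

def max_star_exact(a, b):
    """Log-MAP: exact Jacobian logarithm ln(e^a + e^b)."""
    return max(a, b) + np.log1p(np.exp(-abs(a - b)))

def max_star_maxlog(a, b):
    """Max-Log-MAP: the correction term is dropped entirely."""
    return max(a, b)

def max_star_linear(a, b):
    """Linear Max-Log-MAP: clipped linear fit of the correction term
    (the constants 0.6931 and 0.25 are one common choice)."""
    return max(a, b) + max(0.0, 0.6931 - 0.25 * abs(a - b))

# the three algorithms differ only in this correction term
for d in (0.0, 0.5, 1.0, 2.0, 4.0):
    print(f"|a-b|={d:3.1f}  exact={np.log1p(np.exp(-d)):.3f}  "
          f"max-log={0.0:.3f}  linear={max(0.0, 0.6931 - 0.25 * d):.3f}")
```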

12.
The paper presents a computationally efficient hybrid reliability-based decoding algorithm for Reed-Solomon (RS) codes. This hybrid decoding algorithm consists of two major components: a re-encoding process and a successive erasure-and-error decoding process at both the bit and symbol levels. The re-encoding process generates a sequence of candidate codewords based on the information provided by the codeword decoded by an algebraic decoder and a set of test error patterns. Two criteria are used for testing in the decoding process to reduce the decoding computational complexity. The first criterion reduces the number of re-encoding operations by eliminating unlikely error patterns; the second tests the optimality of a generated candidate codeword. Numerical results show that the proposed decoding algorithm achieves either near-optimum or asymptotically optimum error performance.
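
A sketch of the re-encoding/test-pattern idea under strong simplifications: a brute-force nearest-codeword search over a toy binary (7,4) code stands in for the RS algebraic decoder, the first criterion is reduced to a simple reliability threshold on each test pattern, and the final selection uses a discrepancy metric in place of a full optimality test. The code, the thresholds and all names are assumptions made only for illustration.

```python
import itertools
import numpy as np

# toy stand-in for the algebraic decoder: brute-force nearest codeword of a (7,4) code
G = np.array([[1, 0, 0, 0, 1, 1, 0],
              [0, 1, 0, 0, 1, 0, 1],
              [0, 0, 1, 0, 0, 1, 1],
              [0, 0, 0, 1, 1, 1, 1]], dtype=np.uint8)
CODEBOOK = np.array([(np.array(msg) @ G) % 2
                     for msg in itertools.product([0, 1], repeat=4)], dtype=np.uint8)

def algebraic_decode(hard):
    d = (CODEBOOK ^ hard).sum(axis=1)
    return CODEBOOK[int(np.argmin(d))]            # nearest codeword (stand-in only)

def hybrid_list_decode(llr, n_lrp=3):
    """Re-encoding/Chase-style list decoding sketch driven by test error patterns."""
    llr = np.asarray(llr, dtype=float)
    hard = (llr < 0).astype(np.uint8)
    reliab = np.abs(llr)
    lrp = np.argsort(reliab)[:n_lrp]              # least reliable positions
    skip_threshold = reliab.sum() / 2             # criterion 1 (illustrative)
    best, best_metric = None, np.inf
    for bits in itertools.product([0, 1], repeat=n_lrp):
        cost = sum(b * reliab[i] for b, i in zip(bits, lrp))
        if cost > skip_threshold:                 # discard unlikely error patterns
            continue
        test = hard.copy()
        test[lrp] ^= np.array(bits, dtype=np.uint8)
        cand = algebraic_decode(test)
        metric = reliab[cand != hard].sum()       # discrepancy with the channel
        if metric < best_metric:                  # a real optimality test would go here
            best, best_metric = cand, metric
    return best, best_metric

llr = np.array([3.1, -2.4, 0.3, 2.2, -0.5, 1.8, 0.7])
print(hybrid_list_decode(llr))
```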

13.
A multiuser detection strategy for coherent demodulation in an asynchronous code-division multiple-access system is proposed and analyzed. The resulting detectors process the sufficient statistics by means of a multistage algorithm based on a scheme for successively annihilating multiple-access interference. An efficient real-time implementation of the multistage algorithm with a fixed decoding delay is obtained and shown to require a computational complexity per symbol that is linear in the number of users K. Hence, the multistage detector contrasts with the optimum demodulator, which is based on a dynamic programming algorithm, has a variable decoding delay, and has a software complexity per symbol that is exponential in K. An exact expression is obtained and used to compute the probability of error of the two-stage detector, showing that the two-stage receiver is particularly well suited for near-far situations, approaching the performance of single-user communications as the interfering signals become stronger. The near-far problem is therefore alleviated. Significant performance gains over the conventional receiver are obtained even for relatively high bandwidth-efficiency situations.
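
A sketch of the multistage interference-cancellation idea in a deliberately simplified synchronous setting (the paper treats the asynchronous case): matched-filter outputs are formed once, a conventional sign detector provides the first stage, and each later stage subtracts the multiple-access interference reconstructed from the previous stage's decisions. The signatures, amplitudes and noise level below are toy assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def multistage_detect(S, A, r, n_stages=3):
    """Multistage parallel interference cancellation, synchronous toy model."""
    R = S.T @ S                                   # signature crosscorrelation matrix
    y = S.T @ r                                   # matched-filter outputs
    b_hat = np.sign(y)                            # stage 1: conventional detector
    off_diag = R - np.diag(np.diag(R))
    for _ in range(n_stages - 1):
        # subtract the multiple-access interference estimated at the previous stage
        b_hat = np.sign(y - off_diag @ (A * b_hat))
    return b_hat

# toy scenario: 3 users, length-8 random signatures, strong near-far imbalance
K, N = 3, 8
S = rng.choice([-1.0, 1.0], size=(N, K)) / np.sqrt(N)
A = np.array([1.0, 3.0, 3.0])                     # user 1 is weak, the others strong
b = rng.choice([-1.0, 1.0], size=K)
r = S @ (A * b) + 0.3 * rng.standard_normal(N)
print("sent:", b, " detected:", multistage_detect(S, A, r))
```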

14.
李颖  王欣  魏急波 《通信学报》2007,28(4):87-94
Based on a continuously fading channel assumption, a recursively computable approximate maximum-likelihood (ML) metric is embedded into the automatic sphere decoding algorithm, yielding multiple-symbol differential approximate automatic sphere decoding (MSDAASD). The algorithm applies to general unitary space-time constellations, removes the error floor exhibited by multiple-symbol differential sphere decoding (MSDSD) under the quasi-static channel assumption, and performs close to ML detection; in most cases its average complexity is lower than that of decision-feedback detection under the same assumption.

15.
In this paper, a novel dual metric, the maximum and minimum squared Euclidean distance increment (SEDI) brought by changing the hard-decision symbol, is introduced to measure the reliability of received M-ary phase shift keying (MPSK) symbols over a Rayleigh fading channel. Based on the dual metric, a Chase-type soft decoding algorithm, called the erased-Chase algorithm, is developed for Reed-Solomon (RS) coded MPSK schemes. The proposed algorithm treats unreliable symbols with small maximum SEDI as erasures and tests the non-erased unreliable symbols with small minimum SEDI, as the Chase-2 algorithm does. By introducing an optimality test into the decoding procedure, a much greater reduction in decoding complexity is achieved. Simulation results for the RS(63, 42, 22)-coded 8-PSK scheme over a Rayleigh fading channel show that the proposed algorithm provides a very efficient trade-off between decoding complexity and error performance. Finally, an adaptive scheme for the number of erasures is introduced into the decoding algorithm.
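
A sketch of the dual-metric computation only (the RS erasure-and-error decoding, the optimality test and the adaptive erasure count are not included): for each received point, the squared-distance increments from the hard-decision 8-PSK symbol to every alternative give the minimum and maximum SEDI; positions with the smallest maximum SEDI are marked as erasures, and the remaining least reliable positions (smallest minimum SEDI) become Chase test positions. The constellation and the counts n_erase and n_test are illustrative assumptions.

```python
import numpy as np

# 8-PSK constellation (unit energy)
MPSK = np.exp(2j * np.pi * np.arange(8) / 8)

def sedi_metrics(received):
    """Per-symbol hard decision plus min/max squared-Euclidean-distance increments."""
    hard, sedi_min, sedi_max = [], [], []
    for r in received:
        d2 = np.abs(r - MPSK) ** 2                 # squared distances to all points
        k = int(np.argmin(d2))                     # hard-decision symbol index
        inc = np.delete(d2, k) - d2[k]             # SEDI of every alternative symbol
        hard.append(k)
        sedi_min.append(inc.min())
        sedi_max.append(inc.max())
    return np.array(hard), np.array(sedi_min), np.array(sedi_max)

def classify(received, n_erase=2, n_test=3):
    """Mark erasure positions (small max-SEDI) and Chase test positions (small min-SEDI)."""
    hard, smin, smax = sedi_metrics(received)
    erasures = set(np.argsort(smax)[:n_erase])
    remaining = [i for i in np.argsort(smin) if i not in erasures]
    return hard, sorted(erasures), remaining[:n_test]

# toy received sequence: a few clean 8-PSK points plus noise
rng = np.random.default_rng(1)
tx = MPSK[rng.integers(0, 8, size=10)]
rx = tx + 0.3 * (rng.standard_normal(10) + 1j * rng.standard_normal(10))
print(classify(rx))
```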

16.
A new decoding algorithm for geometrically uniform trellis codes is presented. The group structure of the codes is exploited in order to improve the decoding process. Analytical bounds on the algorithm's performance and computational complexity are derived. The algorithm's complexity does not depend on the number of states of the trellis describing the code. Extensive simulations yield results on the algorithm's performance and complexity and permit a comparison with the Viterbi algorithm and the sequential Fano algorithm.

17.
Two Bit-Flipping Decoding Algorithms for Low-Density Parity-Check Codes (total citations: 1; self-citations: 0; cited by others: 1)
In this letter, a low-complexity decoding algorithm for binary linear block codes is applied to low-density parity-check (LDPC) codes, and improvements are described, namely an extension to soft-decision decoding and a loop-detection mechanism. For soft decoding, only one real-valued addition per code symbol is needed, while the remaining operations are binary, as in the hard-decision case. The decoding performance is considerably increased by the loop detection. Simulation results are used to compare the performance with other known decoding strategies for LDPC codes, with the result that the presented algorithms offer excellent performance at lower complexity.
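
A sketch of the loop-detection idea on top of a plain bit-flipping loop (the letter's one-addition-per-symbol soft extension is not reproduced): previously visited hard-decision states are remembered, and a flip that would return to one of them is skipped in favour of the next-best candidate. The parity-check matrix and the received word are toy assumptions.

```python
import numpy as np

def bf_with_loop_detection(H, received, max_iter=50):
    """Bit flipping plus a simple loop-detection guard (illustrative sketch)."""
    H = np.asarray(H, dtype=np.uint8)
    c = np.asarray(received, dtype=np.uint8).copy()
    visited = {c.tobytes()}                     # hard-decision states already seen
    for _ in range(max_iter):
        s = H.dot(c) % 2
        if not s.any():
            return c, True
        counts = H.T.dot(s).astype(int)         # unsatisfied checks per bit
        # try flips from most to least suspicious, skipping moves that would
        # revisit an earlier state (this is the loop detection)
        for n in np.argsort(-counts):
            trial = c.copy()
            trial[n] ^= 1
            key = trial.tobytes()
            if key not in visited:
                visited.add(key)
                c = trial
                break
        else:
            return c, False                     # every candidate flip loops back
    return c, False

H = [[1, 1, 0, 1, 1, 0, 0],
     [1, 0, 1, 1, 0, 1, 0],
     [0, 1, 1, 1, 0, 0, 1]]
received = np.zeros(7, dtype=np.uint8)
received[3] ^= 1                                # single error on a degree-3 position
print(bf_with_loop_detection(H, received))
```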

