Similar Literature
20 similar records found (search time: 123 ms)
1.
Turbo product codes (TPCs), offering near-Shannon-limit decoding performance and a parallel structure suited to high-speed decoding, have become a research focus in error-correction coding. The component codes of a TPC are usually constructed from extended Hamming codes, which keeps the hardware implementation of encoding and decoding simple. When a TPC uses extended Hamming codes as its subcodes, the minimum codeword weight has a growing impact on the frame error rate as the signal-to-noise ratio increases. This paper improves the TPC encoding structure so that, at the cost of only a small increase in encoding/decoding complexity and delay, the minimum codeword weight is raised and the proportion of minimum-weight codewords in the codeword space is reduced. Simulation and analysis compare the proposed code with conventional TPC codes in frame-error-rate performance, minimum-weight distribution, and minimum-distance estimation.

2.
郭锐  孙荷  杨沛 《电子与信息学报》2023,45(10):3594-3602
To reduce the size of the candidate flip-bit set of the fast simplified successive cancellation flip (Fast-SSC-Flip) decoding algorithm for polar codes, and thereby lower the search complexity, this paper proposes a Fast-SSC-Flip decoding algorithm based on a critical flip set. Since the first erroneously decoded information bit in Fast-SSC decoding falls with high probability in the critical set (CS), and the candidate bits of Fast-SSC-Flip are all codeword bits, the proposed algorithm uses the generator matrix of the polar code to obtain the codeword bits corresponding to the information bits in the CS, and builds from them a critical flip set (CFS) used as the candidate flip-bit set. Experimental results show that, under the same candidate-bit reliability metric, with code length N = 1024 and code rate R = 0.5, the proposed CFS-based Fast-SSC-Flip algorithm significantly reduces the candidate flip-set size compared with the conventional Fast-SSC-Flip algorithm, with no loss of decoding performance; compared with the newer fast simplified successive cancellation flip (N-Fast-SSC-Flip) algorithm, it achieves similar decoding performance while shrinking the candidate flip set by at least 77.93%.
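To illustrate the mapping the abstract describes, the sketch below derives the codeword-bit positions affected by a set of critical information bits through the polar generator matrix G_N = F^(⊗n). It is a minimal illustration assuming the CS indices are given and no bit-reversal permutation is applied; `critical_flip_set` and the toy indices are illustrative names, not the paper's implementation.

```python
import numpy as np

def polar_generator(n):
    """G_N = n-fold Kronecker power of F = [[1,0],[1,1]] (no bit reversal)."""
    F = np.array([[1, 0], [1, 1]], dtype=np.int64)
    G = np.array([[1]], dtype=np.int64)
    for _ in range(n):
        G = np.kron(G, F)
    return G

def critical_flip_set(cs_indices, N):
    """Since x = u G over GF(2), flipping u_i flips exactly the codeword
    bits x_j where G[i, j] = 1; collect those positions for all CS indices."""
    G = polar_generator(int(np.log2(N)))
    cfs = set()
    for i in cs_indices:
        cfs.update(np.flatnonzero(G[i]).tolist())
    return sorted(cfs)

# Toy run: N = 8, hypothetical critical set {3, 5}
print(critical_flip_set([3, 5], 8))
```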

3.
Research on a Gradient Decoding Algorithm for Turbo Product Codes   (total citations: 1; self-citations: 1; citations by others: 0)
A Turbo product code (TPC) is a class of error-correcting codes constructed by serially concatenating block codes through a simple row-column interleaver. For binary Turbo product codes, this paper proposes a fast soft-decision decoding method, the gradient decoding algorithm. Built on the iterative Chase algorithm, it replaces the competing codeword C with the optimal decision D(m-1) obtained for each row (or column) in the previous decoding iteration, eliminating the search for C and thereby simplifying the computation of the extrinsic information and the soft output. Simulation results show that the gradient algorithm essentially preserves the Chase decoding performance of Turbo product codes while increasing decoding speed and reducing decoding complexity.
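As a rough illustration of the update described above, the sketch below computes Pyndiah-style extrinsic information with the previous iteration's decision standing in for the competing codeword. It assumes BPSK (±1) codeword arrays and a scalar fallback weight `beta`; the function name and the fallback rule are assumptions, not the paper's exact formulation.

```python
import numpy as np

def extrinsic_gradient(r, d, d_prev, beta=0.5):
    """Extrinsic update where the previous decision d_prev replaces the
    per-bit search for a competing codeword (illustrative sketch).
    r: received soft values; d, d_prev: BPSK (+/-1) codeword arrays."""
    w = np.empty_like(r, dtype=float)
    differs = d != d_prev
    # Pyndiah metric with a single competitor: ((|r-c|^2 - |r-d|^2) / 4) * d_j
    metric = (np.sum((r - d_prev) ** 2) - np.sum((r - d) ** 2)) / 4.0
    w[differs] = metric * d[differs]
    w[~differs] = beta * d[~differs]   # no competitor at these positions
    return w
```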

4.
周承  卫保国 《电子设计工程》2011,19(22):126-128
To address the decoding delay of Turbo product codes, a syndrome-based TPC decoding algorithm (S-TPC) is proposed. Depending on the syndrome value, the algorithm decodes each row (or column) differently, skipping the hard-decision algebraic decoding for codewords whose syndrome is zero. Simulation results show that S-TPC(32,26) with 4 iterations reduces the computational load by nearly 50% without degrading decoding performance.
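The syndrome shortcut the abstract describes can be sketched as follows, assuming a binary code with parity-check matrix H and any hard-decision algebraic decoder supplied by the caller; the names are illustrative.

```python
import numpy as np

def decode_row(r_soft, H, algebraic_decode):
    """Skip algebraic decoding when the hard decision is already a codeword.
    H: binary parity-check matrix; algebraic_decode: any hard-decision decoder."""
    hard = (np.asarray(r_soft) < 0).astype(int)   # BPSK mapping: 0 -> +1, 1 -> -1
    syndrome = H.dot(hard) % 2
    if not syndrome.any():            # syndrome 0: accept the hard decision
        return hard
    return algebraic_decode(hard)     # otherwise fall back to full decoding
```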

5.
王婷  陈为刚 《信号处理》2020,36(5):655-665
Considering the symbol-level nature of nonbinary LDPC codes and an analysis of their residual errors and erasures, this paper constructs a nonbinary product code using a nonbinary LDPC code as the inner code and a high-rate RS code over the same Galois field as the outer code, and proposes a low-complexity iterative decoding scheme that reduces the various classes of transmission errors. In each iteration, decoding is performed only on codewords that failed in the previous iteration, and the initial bit probabilities corresponding to correctly decoded codewords are corrected, improving the accuracy of the a priori symbol information for the next round of nonbinary LDPC decoding and reducing decision errors after inner decoding, so that the error-correcting capability of the outer code is fully exploited. Simulation results show that the nonbinary product code achieves a considerable coding gain over a binary LDPC product code, further improved by iteration, and efficiently corrects both random errors and burst erasures on the channel. For a Gaussian channel with 2% burst erasures, one iteration yields a gain of about 0.4 dB at a bit error rate of 10^-6.
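A hedged sketch of the failed-only re-decoding schedule described above; all callables are illustrative placeholders for the inner nonbinary LDPC decoder, the outer RS pass, and the prior-correction step, not the paper's API.

```python
def iterative_product_decode(blocks, inner_decode, outer_decode, update_priors,
                             max_iters=2):
    """Failed-only re-decoding schedule (all callables are placeholders).
    inner_decode(block) -> (word, ok); update_priors corrects the bit priors
    tied to a correctly decoded block; outer_decode runs the RS outer pass."""
    failed = set(range(len(blocks)))
    for _ in range(max_iters):
        for i in sorted(failed):
            word, ok = inner_decode(blocks[i])
            if ok:
                failed.discard(i)
                update_priors(blocks, i, word)   # sharpen priors for next round
        if not failed:
            break
        outer_decode(blocks, failed)             # outer code corrects leftovers
    return blocks
```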

6.
The performance of space-time block codes concatenated with LDPC codes is studied. To reduce decoding delay, the parity-check matrix is transformed into lower-triangular form for encoding; because the parity-check and generator matrices satisfy a fixed relationship, the subcodes can be decoded while the codeword is still being received, so that once the entire codeword has arrived, the subcode information can be used for iterative decoding of the whole codeword.

7.
The Chase algorithm is one of the algorithms commonly used for soft-decision decoding of Turbo product codes (TPC). This paper analyzes the impact of the competing-codeword search in the conventional Chase algorithm on decoding complexity and, on that basis, proposes two new simplified decoding algorithms that eliminate the competing-codeword search. Simulation results show that the simplified algorithms essentially preserve the decoding performance of the conventional Chase algorithm while reducing decoding complexity and increasing decoding speed.

8.
Polar codes offer excellent error-correcting performance, but their construction makes the code length inflexible, so length-compatible polar codes must be built by puncturing. This paper introduces a matrix polarization rate to measure the impact of puncturing on polar code performance, and selects the pattern with the largest matrix polarization rate as the optimal puncturing pattern. Segmenting the polar codeword effectively reduces the search cost of finding the optimal puncturing pattern. Since the first codeword bit of each segment is always punctured, and single-bit errors dominate in successive cancellation decoding, a parity-check code is concatenated at the head of each segment as an early-termination flag, detecting decoding errors in the preceding segment and triggering re-decoding. Simulations of the proposed method under successive cancellation decoding show that, compared with conventional puncturing, it achieves a coding gain of about 0.7 dB at a bit error rate of 10^-3, effectively improving the decoding performance of punctured polar codes.

9.
赵超群  黄英  雷菁 《电视技术》2006,(10):15-17
Soft-input/soft-output iterative decoding algorithms are studied theoretically and their common features analyzed, with performance simulations of Turbo product codes illustrating the effect of iterative decoding on decoding performance. Building on this theoretical work, a hardware design of the iterative decoding algorithm is developed, focusing on the hardware design of the Turbo product code decoding algorithm.

10.
A Simulation Study of Turbo Product Codes   (total citations: 2; self-citations: 1; citations by others: 1)
The core of the Turbo code decoding algorithm is soft-input/soft-output iterative decoding; applying this idea to the decoding of product codes yields the iterative decoding algorithm for Turbo product codes (TPC). The iterative decoding algorithm for product codes based on extended Hamming codes is presented, and its decoding performance over the AWGN channel is simulated.

11.
In this letter, a stopping criterion using the error-detecting capability of linear block codes is proposed for the decoding of turbo product codes. The iterative decoding is stopped when the outputs of the Chase decoder are valid codewords for all rows and columns simultaneously. Simulation shows that the proposed method saves about one and a half iterations compared with an existing stopping method, without noticeable BER performance loss. A modification that may further reduce the decoding complexity is also discussed.
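The stopping test described here reduces to checking that every row and column of the current hard-decision matrix has a zero syndrome; below is a minimal sketch, assuming binary row and column component codes with parity-check matrices H_row and H_col.

```python
import numpy as np

def all_valid(X_hard, H_row, H_col):
    """True when every row and every column of the hard-decision matrix X_hard
    is a valid codeword of its component code (all syndromes zero)."""
    rows_ok = not (H_row.dot(X_hard.T) % 2).any()   # one syndrome per row
    cols_ok = not (H_col.dot(X_hard) % 2).any()     # one syndrome per column
    return rows_ok and cols_ok

# inside the iterative Chase loop: if all_valid(X_hard, H, H): stop iterating
```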

12.
A new soft decoding algorithm for linear block codes is proposed. The decoding algorithm works with any algebraic decoder, and its performance is strictly the same as that of maximum-likelihood decoding (MLD). Since our decoding algorithm generates sets of different candidate codewords corresponding to the received sequence, its decoding complexity depends on the received sequence. We compare our decoding algorithm with Chase (1972) algorithm 2 and the Tanaka-Kakigahara (1983) algorithm, in which a similar method for generating candidate codewords is used. Computer simulation results indicate that, for some signal-to-noise ratios (SNRs), our decoding algorithm requires less average complexity than the other two algorithms, while its performance is always superior to both.

13.
This paper presents a low-complexity recursive and systematic method to construct good well-structured low-density parity-check (LDPC) codes. The method is based on a recursive application of a partial Kronecker product operation on a given γ × q (q ≥ 3 a prime) integer lattice L(γ × q). The (n-1)-fold product of L(γ × q) with itself, denoted L^n(γ × q), represents a regular quasi-cyclic (QC) LDPC code of high rate and girth 6. The minimum distance of this code equals that of the core code introduced by L(γ × q), and the support of its minimum-weight codewords is characterized by the support of the same type of codewords in the core code. In terms of performance, the constructed codes compete with pseudorandom LDPC codes.
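For readers unfamiliar with the operation, the sketch below shows a plain Kronecker product of a small binary base matrix with itself; note that the paper uses a *partial* Kronecker product on an integer lattice, which this generic illustration does not reproduce.

```python
import numpy as np

# Generic Kronecker product of a small binary base matrix with itself; each 1
# in H1 is replaced by a full copy of H1, giving a block-structured matrix.
H1 = np.array([[1, 1, 0],
               [0, 1, 1],
               [1, 0, 1]])
H2 = np.kron(H1, H1)
print(H2.shape)   # (9, 9)
```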

14.
A new maximum likelihood decoding (MLD) algorithm for linear block codes is proposed. The new algorithm uses an algebraic decoder to generate the set of candidate codewords. It uses the exact probability of each codeword as a new likelihood metric, with a method for generating the appropriate set of codewords similar to the Kaneko et al. and Tanaka-Kakigahara algorithms. The performance of the proposed algorithm is the same as that of MLD, as proved theoretically and verified by simulation results. Comparison with these similar algorithms shows that the new one always requires less average decoding complexity. Finally, we compare the algorithms for terrestrial and satellite channels.

15.
Constant weight binary codes are used in a number of applications. Constructions based on mathematical structure are known for many codes. However, heuristic constructions unrelated to any mathematical structure can become of greater importance when the parameters of the code are larger. This paper considers the problem of finding constant weight codes with the maximum number of codewords from a purely algorithmic perspective. A set of heuristic and metaheuristic methods is presented and developed into a variable neighborhood search framework. The proposed method is applied to 383 previously studied cases with lengths between 29 and 63. For these cases it generates 153 new codes, with significantly increased numbers of codewords in comparison with existing constructions. For 10 of these new codes the number of codewords meets a known upper bound, and so these 10 codes are optimal. As well as the ability to generate new best codes, the approach has the advantage that it is a single method capable of addressing many sets of parameters in a uniform way.

16.
Immink K.A.S. Electronics Letters, 1997, 33(23):1943-1944
The author reports on the performance of a new class of constrained codes, called weakly constrained codes. These codes do not strictly guarantee the imposed channel constraints, but rather generate codewords that violate, with a given (small) probability, the prescribed constraint. Weakly constrained codes are specifically of interest when it is desirable that the code rate R=p/q is very high, requiring codewords of length q>100.

17.
This correspondence presents performance analysis of symbol-level soft-decision decoding of q-ary maximum-distance-separable (MDS) codes based on the ordered statistics algorithm. The method we present is inspired by the one recently proposed by Agrawal and Vardy (2000), who approximately evaluate the performance of generalized minimum-distance decoding. The correspondence shows that in our context, the method allows us to compute the exact value of the probability that the transmitted codeword is not one of the candidate codewords. This leads to a close upper bound on the performance of the decoding algorithm. Application of the ordered statistics algorithm to MDS codes is not new. Nevertheless, its advantages seem not to be fully explored. We show an example where the decoding algorithm is applied to singly extended 16-ary Reed-Solomon (RS) codes in a 128-dimensional multilevel coded-modulation scheme that approaches the sphere lower bound within 0.5 dB at a word error probability of 10^-4 with manageable decoding complexity.

18.
In order to reduce the number of redundant candidate codewords generated by the fast successive cancellation list (FSCL) decoding algorithm for polar codes, a simplified FSCL decoding algorithm based on critical sets (CS-FSCL) of polar codes is proposed. The algorithm uses the number of information bits belonging to the CS in special nodes, such as the Rate-1 node, repetition (REP) node, and single-parity-check (SPC) node, to constrain the number of path splittings and avoid generating unnecessary candidate codewords, thereby reducing latency and computational complexity. In addition, the algorithm flips only the bits with the smallest log-likelihood ratio (LLR) magnitudes to generate sub-maximum-likelihood (sub-ML) decoding codewords while preserving decoding performance. Simulation results show that for polar codes with a code length of 1024 and code rates of 1/4, 1/2, and 3/4, the proposed CS-FSCL algorithm achieves the same decoding performance as the conventional FSCL decoding algorithm while reducing latency and computational complexity at different list sizes. Specifically, with list size L=8, at code rates R=1/2 and R=1/4, latency is reduced by 33% and 13% and computational complexity by 55% and 50%, respectively.
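A minimal sketch of the restricted path splitting described above: candidates are generated only by flipping the decisions with the smallest |LLR|. The function and its inputs (a 0/1 integer hard-decision array and its LLR vector) are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def sub_ml_candidates(llr, hard, num_flips):
    """Generate sub-ML candidates by flipping only the least reliable
    decisions (smallest |LLR|), one flip per candidate."""
    order = np.argsort(np.abs(llr))[:num_flips]
    candidates = []
    for j in order:
        c = hard.copy()
        c[j] ^= 1            # flip the j-th least reliable hard decision
        candidates.append(c)
    return candidates
```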

19.
A Bidirectional Efficient Algorithm for Searching code Trees (BEAST) is proposed for efficient soft-output decoding of block codes and concatenated block codes. BEAST operates on trees corresponding to the minimal trellis of a block code and finds a list of the most probable codewords. The complexity of the BEAST search is significantly lower than the complexity of trellis-based algorithms, such as the Viterbi algorithm and its list generalizations. The outputs of BEAST, a list of best codewords and their metrics, are used to obtain approximate a posteriori probabilities (APPs) of the transmitted symbols, yielding a soft-input soft-output (SISO) symbol decoder referred to as the BEAST-APP decoder. This decoder is employed as a component decoder in iterative schemes for decoding of product and incomplete product codes. Its performance and convergence behavior are investigated using extrinsic information transfer (EXIT) charts and compared to existing decoding schemes. It is shown that the BEAST-APP decoder achieves performance close to the Bahl–Cocke–Jelinek–Raviv (BCJR) decoder with a substantially lower computational complexity.

20.
The synchronization of variable-length codes   (total citations: 2; self-citations: 0; citations by others: 2)
Many variable-length codes exhibit a tendency for resynchronization to occur automatically following any error. However, attempts to identify an underlying synchronization mechanism, and to accurately predict the expected synchronization delay, for even quite specific variable-length codes, appear to have been largely unsuccessful. The present paper explores a novel method for estimating the synchronization performance for a wide variety of variable-length codes, based on the T-Codes. T-Codes are a class of self-synchronizing codes, which typically synchronize within 2-3 codewords by a mechanism that derives from a recursive T-augmentation construction. It is observed that the T-Code mechanism for synchronization is followed, more or less, by other variable-length codes wherever substantial numbers of codewords are shared with a T-Code set. T-augmentation itself provides a means for assessing the contribution individual codewords make to the overall synchronization process for a T-Code set. Thus codeword differences between sets may be specifically evaluated to estimate the synchronization performance of a variable-length code set from a closely related T-Code set.
