Similar Articles
 A total of 20 similar articles were found (search time: 15 ms).
1.
A class of orthogonal convolutional codes built around a multi-shift-register encoder and featuring self-doubly-orthogonal properties is analyzed under iterative decoding. The error-performance lower bounds of these codes can typically be approached within three to five iterations at moderate signal-to-noise ratios, using either iterative threshold (TH) decoding or belief-propagation (BP) decoding. Compared with iterative BP decoding, iterative threshold decoding of these codes is shown to have much lower complexity at the same decoding latency.

2.
Spatially coupled LDPC (SC-LDPC) codes, thanks to their threshold-saturation property, have been shown to be a strong candidate code family for future wireless communication systems. SC-LDPC codes are a class of convolutional LDPC codes whose performance approaches the Shannon limit under belief-propagation decoding over binary memoryless symmetric channels. This paper describes the construction of SC-LDPC codes and the classical belief-propagation decoding algorithm, and presents performance simulations and analysis over the additive white Gaussian noise channel. The simulation results show that the longer the constraint length, or the larger the maximum number of iterations, the closer the performance approaches the Shannon capacity limit. At a bit error rate of 10^-5 with a maximum of 100 iterations, a code of length 20000 gains about 0.68 dB over a code of length 10000; at a bit error rate of 10^-5 with a code length of 10000, allowing up to 100 iterations gains about 0.66 dB over allowing up to 10. These results confirm the good performance of SC-LDPC codes in wireless communication systems.
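As a concrete illustration of the coupling idea mentioned above, the sketch below builds a terminated, band-diagonal SC-LDPC parity-check matrix by memory-1 edge spreading. The component matrices H0 and H1 and the coupling length L are toy assumptions for illustration, not the codes used in the simulations described in the abstract.

```python
import numpy as np

# Memory-1 edge spreading: split an uncoupled parity-check matrix H into
# H0 + H1 = H, then place L copies along a band diagonal.  Toy matrices only.
H  = np.array([[1, 1, 1, 1, 0, 0],
               [0, 0, 1, 1, 1, 1]])
H0 = np.array([[1, 1, 0, 1, 0, 0],
               [0, 0, 1, 0, 1, 0]])
H1 = H - H0          # the remaining edges are shifted to the next coupled position

def couple(H0, H1, L):
    m, n = H0.shape
    Hc = np.zeros(((L + 1) * m, L * n), dtype=int)
    for t in range(L):
        Hc[t * m:(t + 1) * m,       t * n:(t + 1) * n] = H0
        Hc[(t + 1) * m:(t + 2) * m, t * n:(t + 1) * n] = H1
    return Hc

Hc = couple(H0, H1, L=4)
print(Hc.shape)      # (10, 24): a band-diagonal, terminated SC-LDPC chain
```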

3.
The puncturing technique allows high-rate convolutional codes to be obtained from low-rate convolutional codes used as mother codes. This technique has been successfully applied to generate good high-rate convolutional codes suitable for Viterbi and sequential decoding. In this paper, we investigate the puncturing technique for convolutional self-doubly orthogonal codes (CSO²C), which are decoded using an iterative threshold-decoding algorithm. Based on an analysis of iterative threshold decoding of the rate-R=b/(b+1) punctured systematic CSO²C, the required properties of the rate-R=1/2 systematic convolutional codes (SCCs) used as mother codes are derived. From this analysis, it is shown that the punctured mother codes need not satisfy all the required conditions in order to maintain the double orthogonality at the second iteration step of the iterative threshold-decoding algorithm. The results of the search for appropriate rate-R=1/2 SCCs used as mother codes to yield a large number of punctured codes of rates 2/3 ≤ R ≤ 6/7 are presented, and some of their error performances are evaluated.
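To make the puncturing step concrete, the sketch below punctures the parity stream of a toy rate-1/2 systematic convolutional encoder to obtain a rate-2/3 code. The generator taps and the puncturing pattern are illustrative assumptions, not the CSO²C constructions searched for in the paper.

```python
# Toy rate-1/2 systematic convolutional encoder punctured to rate 2/3.
G = [1, 0, 1, 1]    # parity taps 1 + D^2 + D^3 (an illustrative choice)
PATTERN = [1, 0]    # keep the parity bit only at even time steps

def encode_systematic(bits):
    state = [0, 0, 0]
    out = []
    for b in bits:
        window = [b] + state
        p = sum(g * x for g, x in zip(G, window)) % 2
        out.append((b, p))                      # (systematic bit, parity bit)
        state = [b] + state[:-1]
    return out

def puncture(pairs):
    kept = []
    for t, (u, p) in enumerate(pairs):
        kept.append(u)                          # systematic bits are always kept
        if PATTERN[t % len(PATTERN)]:
            kept.append(p)                      # parity kept per the pattern
    return kept

msg = [1, 0, 1, 1, 0, 0]
coded = puncture(encode_systematic(msg))
print(len(msg), len(coded))                     # 6 info bits -> 9 coded bits (rate 2/3)
```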

4.
In this letter, an iterative decoding algorithm for linear block codes combining reliability-based decoding with adaptive belief-propagation decoding is proposed. At each iteration, the soft output values delivered by the adaptive belief-propagation algorithm are used as reliability values to perform reduced-order reliability-based decoding of the code considered. This approach makes it possible to bridge the gap between the error performance achieved by low-order reliability-based decoding algorithms, which remain suboptimum, and maximum-likelihood decoding, which is too complex to implement for most codes employed in practice. Simulation results for various linear block codes are given and discussed.
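A minimal sketch of the reliability-based re-processing step is given below: soft values (for instance, those delivered by a belief-propagation pass) are used to pick the most reliable linearly independent positions, and the hard decisions on those positions are re-encoded. This is the order-0 form of ordered-statistics decoding, offered only as an assumed illustration of the general idea, not the adaptive algorithm of the letter.

```python
import numpy as np

def osd_order0(G, llr):
    """Order-0 reliability-based re-processing.

    G   : k x n binary generator matrix
    llr : length-n soft values (e.g. BP soft output); positive means "bit is 0"
    Returns a codeword built from the k most reliable independent positions.
    """
    k, n = G.shape
    order = np.argsort(-np.abs(llr))            # most reliable positions first
    hard = (llr < 0).astype(int)                # hard decisions from the soft values

    # Gaussian elimination over GF(2), pivoting on reliable columns.
    Gp = G.copy() % 2
    basis = []                                  # chosen pivot (information) columns
    row = 0
    for col in order:
        if row == k:
            break
        pivot = next((r for r in range(row, k) if Gp[r, col]), None)
        if pivot is None:
            continue                            # linearly dependent column, skip it
        Gp[[row, pivot]] = Gp[[pivot, row]]
        for r in range(k):
            if r != row and Gp[r, col]:
                Gp[r] ^= Gp[row]
        basis.append(col)
        row += 1

    # Re-encode the hard decisions taken on the k most reliable positions.
    msg = hard[basis]
    return msg @ Gp % 2

# Example with a (7,4) systematic generator matrix (illustrative).
G = np.array([[1, 0, 0, 0, 1, 1, 0],
              [0, 1, 0, 0, 1, 0, 1],
              [0, 0, 1, 0, 0, 1, 1],
              [0, 0, 0, 1, 1, 1, 1]])
llr = np.array([2.1, -1.8, 0.3, 1.5, -0.2, 2.4, -1.1])
print(osd_order0(G, llr))
```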

5.
6.
Over severely unreliable channels, decoding of error-correcting codes fails frequently, wasting a large amount of computation, especially with iterative decoding algorithms. In hybrid automatic repeat request systems, most of the computational power is spent on failed decoding attempts when a codeword is retransmitted many times. Early stopping of iterative decoding therefore needs to be adopted. In this paper, we propose a new stopping algorithm for iterative belief-propagation decoding of low-density parity-check codes that is effective over both high and low signal-to-noise-ratio ranges and scales with variable code rate and length. The proposed stopping algorithm combines several good stopping criteria. Each criterion is extremely simple and does not burden the overall system. With the proposed stopping algorithm, it is shown via numerical analysis that the decoding complexity of a hybrid automatic repeat request system with an adaptive modulation and coding scheme can be reduced considerably. Copyright © 2012 John Wiley & Sons, Ltd.
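The sketch below combines two very simple stopping tests of the kind the abstract alludes to: a syndrome check that declares success, and a weak-and-frozen-LLR test that gives up early. The specific tests and thresholds are assumptions made for illustration; they are not the criteria proposed in the paper.

```python
import numpy as np

def should_stop(H, llr_now, llr_prev, iter_idx, min_iter=2, mag_floor=0.5):
    """Toy combination of simple stopping tests for BP decoding.

    Stop either when the tentative hard decision is a valid codeword
    (success), or when the soft values are weak and no longer changing
    sign (decoding is unlikely to succeed).  Thresholds are illustrative.
    """
    hard = (llr_now < 0).astype(int)
    if not np.any(H @ hard % 2):                 # syndrome test: valid codeword
        return True
    if iter_idx < min_iter:
        return False
    signs_frozen = np.all(np.sign(llr_now) == np.sign(llr_prev))
    weak = np.mean(np.abs(llr_now)) < mag_floor
    return signs_frozen and weak                 # give up: stuck at low reliability
```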

7.
OFDM is a key technique for combating multipath fading in high-speed wireless communications. To further improve the error performance of OFDM systems, many channel-coding techniques have been applied to them, and binary LDPC codes have become a research focus thanks to their near-Shannon-limit error performance and relatively low decoding complexity. Over the AWGN channel, non-binary LDPC codes offer better error-correcting performance than binary LDPC codes of equivalent length. This paper proposes a new scheme in which non-binary LDPC codes are combined with MPSK modulation in an OFDM system. Simulation results show that, over multipath fading channels, with a suitable choice of the field order and the modulation scheme, the non-binary-LDPC-coded, high-order-modulated OFDM system outperforms a binary-LDPC-coded OFDM system of equivalent code length, while the decoding complexity increases only slightly thanks to the fast BP decoding of non-binary LDPC codes.

8.
For the serial concatenation of an RS code with an LDPC code, a joint iterative decoding method based on adaptive belief propagation (ABP) is proposed. During decoding, the soft information output by the LDPC belief-propagation decoder is used as the input of the RS ABP decoder; after a number of iterations, the soft information output by the RS decoder is in turn fed back as the input of the LDPC decoder. With several rounds of information exchange between the soft-input soft-output RS decoder and the LDPC decoder, the decoding performance improves considerably. For LDPC codes of moderate length, this concatenated scheme effectively mitigates the influence of short cycles and removes the error floor. Simulation results show that, over the AWGN channel, this ABP-based joint iterative decoding of RS and LDPC codes achieves a gain of about 0.8 dB.

9.
In this paper, reliability-based decoding is combined with belief-propagation (BP) decoding for low-density parity-check (LDPC) codes. At each iteration, the soft output values delivered by the BP algorithm are used as reliability values to perform reduced-complexity soft-decision decoding of the code considered. This approach makes it possible to bridge the error-performance gap between belief-propagation decoding, which remains suboptimum, and maximum-likelihood decoding, which is too complex to implement for the codes considered. Trade-offs between decoding complexity and error performance are also investigated. In particular, a stopping criterion that reduces the average number of iterations at the expense of very little performance degradation is proposed for this combined decoding approach. Simulation results for several Gallager (1963, 1968) LDPC codes and difference-set cyclic codes with hundreds of information bits are given and discussed.

10.
A novel iterative error-control technique based on the threshold-decoding algorithm and new convolutional self-doubly orthogonal codes is proposed. It differs from parallel-concatenated turbo decoding in that it uses a single convolutional encoder and a single decoder, and hence no interleaver at either the encoder or the decoder. Decoding is performed iteratively using a single threshold decoder at each iteration, thereby providing a good tradeoff between complexity, latency, and error performance.
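For readers unfamiliar with threshold decoding, the toy example below shows the one-shot majority (threshold) decision on a single bit covered by four parity checks that are orthogonal on it, i.e., every other bit appears in at most one of the checks. This is only an assumed, minimal illustration of the decision rule, not the paper's self-doubly orthogonal code or its iterative schedule.

```python
import numpy as np

rng = np.random.default_rng(1)
d = rng.integers(0, 2, size=5)                  # information bits d0 .. d4
parity = [d[0] ^ d[j] for j in range(1, 5)]     # four checks, each = d0 XOR dj
# Orthogonality on d0: every bit other than d0 appears in exactly one check.

r_d = d.copy()
r_parity = list(parity)
r_d[0] ^= 1                                     # the channel flips the target bit d0

# Each recomputed check sum that fails is a "vote" that d0 is in error.
syndromes = [r_parity[j - 1] ^ r_d[0] ^ r_d[j] for j in range(1, 5)]
if sum(syndromes) > len(syndromes) // 2:        # threshold (majority-logic) test
    r_d[0] ^= 1                                 # flip d0 back

print(bool(np.all(r_d == d)))                   # True: the single error is corrected
```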

11.
We describe the close connection between the now-celebrated iterative turbo decoding algorithm of Berrou et al. (1993) and an algorithm that has been well known in the artificial intelligence community for a decade but is relatively unknown to information theorists: Pearl's (1982) belief propagation algorithm. We see that if Pearl's algorithm is applied to the “belief network” of a parallel concatenation of two or more codes, the turbo decoding algorithm immediately results. Unfortunately, however, this belief network has loops, and Pearl only proved that his algorithm works when there are no loops, so an explanation of the experimental performance of turbo decoding is still lacking. However, we also show that Pearl's algorithm can be used to routinely derive previously known iterative, but suboptimal, decoding algorithms for a number of other error-control systems, including Gallager's (1962) low-density parity-check codes, serially concatenated codes, and product codes. Thus, belief propagation provides a very attractive general methodology for devising low-complexity iterative decoding algorithms for hybrid coded systems.
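For reference, the sketch below is a generic textbook sum-product (belief-propagation) decoder operating on the Tanner graph of a parity-check matrix, the special case to which Pearl's algorithm reduces for LDPC codes. It is offered as an assumed illustration of the message passing only, not as code from the paper.

```python
import numpy as np

def bp_decode(H, llr_ch, max_iter=50):
    """Sum-product decoding on the Tanner graph of H.

    H      : m x n binary parity-check matrix (0/1 ints)
    llr_ch : length-n channel LLRs, positive means "bit is 0"
    """
    m, n = H.shape
    mask = H.astype(bool)
    msg_vc = np.where(mask, llr_ch, 0.0)         # variable -> check messages

    for _ in range(max_iter):
        # Check-node update (tanh rule), excluding the incoming edge.
        msg_cv = np.zeros((m, n))
        for j in range(m):
            idx = np.flatnonzero(mask[j])
            t = np.tanh(msg_vc[j, idx] / 2.0)
            for pos, i in enumerate(idx):
                prod = np.prod(np.delete(t, pos))
                msg_cv[j, i] = 2.0 * np.arctanh(np.clip(prod, -0.999999, 0.999999))

        # Variable-node update and tentative hard decision.
        total = llr_ch + msg_cv.sum(axis=0)
        hard = (total < 0).astype(int)
        if not np.any(H @ hard % 2):             # all checks satisfied: stop early
            return hard, True
        msg_vc = np.where(mask, total - msg_cv, 0.0)

    return hard, False

# Small example with a (7,4) Hamming parity-check matrix and made-up LLRs.
H = np.array([[1, 1, 0, 1, 1, 0, 0],
              [1, 0, 1, 1, 0, 1, 0],
              [0, 1, 1, 1, 0, 0, 1]])
llr = np.array([1.2, -0.4, 2.0, 0.8, 1.5, -0.3, 0.9])
print(bp_decode(H, llr))
```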

12.
This paper presents several results involving Fano's sequential decoding algorithm for convolutional codes. An upper bound on the a-th moment of decoder computation is obtained for arbitrary decoder bias B and a ≤ 1. An upper bound on error probability with sequential decoding is derived for both systematic and nonsystematic convolutional codes. This error bound involves the exact value of the decoder bias B. It is shown that there is a trade-off between sequential decoder computation and error probability as the bias B is varied. It is also shown that for many values of B, sequential decoding of systematic convolutional codes gives an exponentially larger error probability than sequential decoding of nonsystematic convolutional codes when both codes are designed with exponentially equal optimum decoder error probabilities.

13.
李智鹏  窦高奇  邓小涛 《信号处理》2021,37(6):1086-1092
Tail-biting is a technique that turns a convolutional code into a block code; it removes the rate loss caused by driving the encoder back to the zero state while avoiding the performance degradation of direct truncation, which gives it a clear advantage for short-block coding. To address the high complexity and the convergence problems of existing decoding algorithms for tail-biting convolutional codes (TBCC), a low-complexity adaptive circular Viterbi (VA) decoding algorithm for TBCC is proposed. The algorithm adaptively adjusts the number of decoding iterations according to channel variations so that the tail-biting path converges to the best one. Through simulation...

14.
Capacity-approaching protograph codes
This paper discusses the construction of protograph-based low-density parity-check (LDPC) codes. Emphasis is placed on protograph ensembles whose typical minimum distance grows linearly with block size. Asymptotic performance analysis, for both weight enumeration and iterative decoding-threshold determination, is provided and applied to a series of code constructions. Construction techniques that yield both low thresholds and linear minimum-distance growth are introduced by way of example throughout. The paper also examines implementation strategies for high-throughput decoding derived from first principles of belief propagation on bipartite graphs.
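The sketch below illustrates the lifting step that turns a small protograph base matrix into a full parity-check matrix: each base entry of value b is replaced by the sum of b distinct circulant permutation matrices. The base matrix, lifting factor, and random shifts are assumptions chosen for illustration, not the ensembles analyzed in the paper.

```python
import numpy as np

def circulant(Z, shift):
    """Z x Z circulant permutation matrix (identity shifted by `shift`)."""
    return np.roll(np.eye(Z, dtype=int), shift, axis=1)

def lift(B, Z, rng):
    """Replace each base entry b by the sum of b distinct circulants."""
    m, n = B.shape
    H = np.zeros((m * Z, n * Z), dtype=int)
    for r in range(m):
        for c in range(n):
            shifts = rng.choice(Z, size=B[r, c], replace=False)
            block = sum(circulant(Z, int(s)) for s in shifts)
            H[r * Z:(r + 1) * Z, c * Z:(c + 1) * Z] = block
    return H

B = np.array([[1, 2, 1],        # a toy protograph with one repeated edge
              [2, 1, 1]])
H = lift(B, Z=8, rng=np.random.default_rng(0))
print(H.shape, H.sum(axis=0)[:8])   # (16, 24); column weights follow the protograph
```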

15.
We propose a novel class of provably good codes that are a serial concatenation of a single-parity-check (SPC)-based product code, an interleaver, and a rate-1 recursive convolutional code. The proposed codes, termed product accumulate (PA) codes, are linear-time encodable and linear-time decodable. We show that the product code by itself does not have a positive threshold, but a PA code can provide arbitrarily low bit-error rate (BER) under both maximum-likelihood (ML) decoding and iterative decoding. Two message-passing decoding algorithms are proposed, and a particular update schedule for these algorithms is shown to be equivalent to conventional turbo decoding of the serial concatenated code, but with significantly lower complexity. Tight upper bounds on the ML performance using Divsalar's (1999) simple bound and thresholds under density evolution (DE) show that these codes are capable of performing within a few tenths of a decibel of the Shannon limit. Simulation results confirm these claims and show that these codes provide performance similar to turbo codes but with significantly less decoding complexity and a lower error floor. Hence, we propose PA codes as a class of prospective codes with good performance, low decoding complexity, regular structure, and flexible rate adaptivity for all rates above 1/2.
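To show the shape of the encoder chain (outer SPC-based code, interleaver, rate-1 accumulator), the sketch below encodes a small block with a single SPC product code followed by a random interleaver and the accumulator 1/(1+D). The outer code is deliberately simplified relative to the paper's two-branch structure, so everything here is an illustrative assumption rather than the exact PA construction.

```python
import numpy as np

rng = np.random.default_rng(0)

def spc_product_encode(data, rows, cols):
    """Append single-parity-check bits to the rows and columns of a data block."""
    D = data.reshape(rows, cols)
    row_par = D.sum(axis=1, keepdims=True) % 2
    col_par = D.sum(axis=0, keepdims=True) % 2
    top = np.hstack([D, row_par])
    bottom = np.hstack([col_par, np.zeros((1, 1), dtype=int)])  # no check-on-check
    return np.vstack([top, bottom]).ravel()

def accumulate(bits):
    """Rate-1 recursive code 1/(1+D): y_k = y_{k-1} XOR x_k."""
    out = np.zeros_like(bits)
    acc = 0
    for i, b in enumerate(bits):
        acc ^= int(b)
        out[i] = acc
    return out

data = rng.integers(0, 2, size=12)
outer = spc_product_encode(data, rows=3, cols=4)
pi = rng.permutation(outer.size)            # random interleaver
codeword = accumulate(outer[pi])
print(data.size, codeword.size)             # 12 info bits -> 20 coded bits
```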

16.
A new sequential decoding algorithm with an adjustable threshold and a new method of moving through the decoding tree is proposed. Instead of the path metric of conventional sequential decoding algorithms, the proposed algorithm uses a branch metric based on the maximum-likelihood criterion. Two new parameters, the jumping-back distance and the going-back distance, are also introduced. The performance of the algorithm for long-constraint-length convolutional codes is compared with that of other sequential decoding algorithms and the Viterbi algorithm. The results show that the proposed algorithm is a good candidate for decoding convolutional codes thanks to its fast decoding capability and good bit-error-rate (BER) performance. This work was supported in part by the Research Foundation at Karadeniz Technical University under Grants 2004.112.004.01 and 2005.112.009.2.

17.
We introduce a new method for decoding short and moderate-length linear block codes that have dense parity-check matrix representations of cyclic form. This approach is termed multiple-bases belief propagation. The proposed iterative scheme exploits the fact that a code has many structurally diverse parity-check matrices, capable of detecting different error patterns. We show that this inherent code property leads to decoding algorithms with significantly better performance compared with standard belief-propagation decoding. Furthermore, we describe how to choose sets of parity-check matrices of cyclic form amenable to multiple-bases decoding, based on analytical studies performed for the binary erasure channel. For several cyclic and extended cyclic codes, the multiple-bases belief-propagation decoding performance can be shown to closely follow that of the maximum-likelihood decoder.

18.
This paper uses numerical simulation to study the performance of MIMO systems that adopt low-density parity-check (LDPC) codes as the channel code. Building on the belief-propagation decoding algorithm for LDPC codes, a factor-graph-based joint iterative detection and decoding maximum a posteriori (MAP) algorithm is proposed, and the effect on system performance of using independent versus joint encoding at the transmitter is analyzed and compared. Simulation results show that LDPC codes can fully exploit the spatial and temporal diversity of the MIMO system to increase its effective diversity gain, and that the joint iterative detection and decoding algorithm yields clearly different performance gains for the two transmitter coding structures.

19.
Since the proposal of turbo codes in 1993, many studies have appeared on this simple new class of codes, which delivers powerful and practical error-correction performance. Although experimental results strongly support the efficacy of turbo codes, further theoretical analysis is necessary, and it is not straightforward. It has been pointed out that the iterative decoding algorithm of turbo codes shares essentially the same ideas with low-density parity-check (LDPC) codes, with Pearl's belief propagation algorithm applied to a cyclic belief diagram, and with the Bethe approximation in statistical physics. Therefore, an analysis of the turbo decoding algorithm will shed light on all of these similar iterative methods. In this paper, we recapture and extend the geometrical framework initiated by Richardson into the information-geometrical framework of dual affine connections, focusing on both the turbo and LDPC decoding algorithms. The framework aids intuitive understanding of the algorithms and opens new prospects for further analysis. We reveal some properties of these codes within the proposed framework, including stability and error analysis. Based on the error analysis, we finally propose a correction term for improving the approximation.

20.
A symbol-by-symbol maximum a posteriori (MAP) decoding algorithm for high-rate convolutional codes based on reciprocal dual convolutional codes is presented. The advantage of this approach is a reduction in computational complexity, since the number of codewords to be considered is decreased. All requirements for iterative decoding schemes are fulfilled. Since tail-biting convolutional codes are equivalent to quasi-cyclic block codes, the decoding algorithm for truncated or terminated convolutional codes is modified to obtain a soft-in/soft-out decoder for high-rate quasi-cyclic block codes, which also uses the dual code for complexity reasons. Additionally, quasi-cyclic block codes are investigated as component codes for parallel concatenation. Simulation results obtained by iterative decoding are compared with union bounds for maximum-likelihood decoding. The results of a search for high-rate quasi-cyclic block codes are given in the appendix.

