Similar Articles
Found 20 similar articles (search time: 15 ms).
1.
A new serial decoding strategy for low-density parity-check (LDPC) codes is proposed. Building on the existing serial decoding strategy for LDPC codes, the method partitions the variable nodes or check nodes into equal-size groups according to the reliability of the initial messages received from the channel. The bit error rate and the average number of iterations of the proposed method are analyzed. Simulation results show that the strategy performs significantly better than the original serial decoding strategy for LDPC codes.
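The grouping step can be pictured with a minimal Python sketch; it assumes variable nodes are split into near-equal groups ordered by the magnitude of the channel LLRs, with the number of groups and the least-reliable-first processing order chosen here purely for illustration.

```python
import numpy as np

def reliability_groups(channel_llr, num_groups):
    """Split variable-node indices into near-equal groups by channel reliability.

    Illustrative sketch of the grouping step only: the number of groups and the
    processing order (least reliable group first) are assumptions, not details
    taken from the paper.
    """
    order = np.argsort(np.abs(channel_llr))    # least reliable variable nodes first
    return np.array_split(order, num_groups)   # near-equal ("uniform") group sizes

# Hypothetical usage inside a group-serial decoder: each group would be updated
# in turn within one iteration, so later groups see the freshly updated messages.
llr = np.random.default_rng(0).normal(loc=2.0, scale=1.5, size=12)
for g, idx in enumerate(reliability_groups(llr, num_groups=3)):
    print(f"group {g}: variable nodes {idx.tolist()}")
```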

2.
Motivated by its success in decoding turbo codes, we provide an analysis of the belief propagation algorithm on the turbo decoding graph with Gaussian densities. In this context, we are able to show that, under certain conditions, the algorithm converges and that, somewhat surprisingly, though the density generated by belief propagation may differ significantly from the desired posterior density, the means of these two densities coincide. Since computation of posterior distributions is tractable when densities are Gaussian, use of belief propagation in such a setting may appear unwarranted. Indeed, our primary motivation for studying belief propagation in this context stems from a desire to enhance our understanding of the algorithm's dynamics in a non-Gaussian setting, and to gain insights into its excellent performance in turbo codes. Nevertheless, even when the densities are Gaussian, belief propagation may sometimes provide a more efficient alternative to traditional inference methods.
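A related, easily checked illustration (not the paper's turbo-graph analysis itself) is classical Gaussian belief propagation on a small loopy model: when it converges, the BP means match the exact posterior means, even though the BP variances generally do not. The matrix, vector, and iteration count below are arbitrary illustrative choices.

```python
import numpy as np

# Gaussian BP on p(x) ∝ exp(-0.5 x^T A x + b^T x) over a loopy graph.
A = np.array([[4.0, 1.0, 0.0, 1.0],
              [1.0, 4.0, 1.0, 0.0],
              [0.0, 1.0, 4.0, 1.0],
              [1.0, 0.0, 1.0, 4.0]])   # symmetric, diagonally dominant -> BP converges
b = np.array([1.0, 2.0, -1.0, 0.5])
n = len(b)

# Messages i -> j in information form: precision P[i, j] and potential h[i, j].
P = np.zeros((n, n))
h = np.zeros((n, n))
neighbors = [[j for j in range(n) if j != i and A[i, j] != 0.0] for i in range(n)]

for _ in range(100):
    P_new, h_new = np.zeros_like(P), np.zeros_like(h)
    for i in range(n):
        for j in neighbors[i]:
            others = [k for k in neighbors[i] if k != j]
            alpha = A[i, i] + sum(P[k, i] for k in others)   # cavity precision at node i
            beta = b[i] + sum(h[k, i] for k in others)       # cavity potential at node i
            P_new[i, j] = -A[i, j] ** 2 / alpha
            h_new[i, j] = -A[i, j] * beta / alpha
    P, h = P_new, h_new

bp_means = np.array([(b[i] + sum(h[k, i] for k in neighbors[i])) /
                     (A[i, i] + sum(P[k, i] for k in neighbors[i])) for i in range(n)])
print("BP means:   ", np.round(bp_means, 6))
print("Exact means:", np.round(np.linalg.solve(A, b), 6))
```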

3.
The adaptive belief propagation (ABP) algorithm was recently proposed by Jiang and Narayanan for the soft decoding of Reed-Solomon (RS) codes. In this paper, simplified versions of this algorithm are investigated for the turbo decoding of product codes. The complexity of the turbo-oriented adaptive belief propagation (TAB) algorithm is significantly reduced by moving the matrix adaptation step outside of the belief propagation iteration loop. A reduced-complexity version of the TAB algorithm that offers a trade-off between performance and complexity is also proposed. Simulation results for the turbo decoding of product codes show that belief propagation based on adaptive parity check matrices is a practical alternative to the currently very popular Chase-Pyndiah algorithm.

4.
The simplicity of decoding is one of the most important characteristics of low-density parity-check (LDPC) codes. The belief propagation (BP) decoding algorithm is a well-known decoding algorithm for LDPC codes. Most LDPC codes with long lengths have short cycles in their Tanner graphs, which reduce the performance of the BP algorithm. In this paper, we present 2 methods to improve the BP decoding algorithm for LDPC codes. In these methods, the calculation of the variable nodes is controlled by using a "multiplicative correction factor" and an "additive correction factor." These factors are obtained for 2 separate channels, namely the additive white Gaussian noise (AWGN) channel and the binary symmetric channel (BSC), as 2 functions of code and channel parameters. Moreover, we use the BP-based method in the calculation of the check nodes, which reduces the required resources. Simulation results show that the proposed algorithm has better performance and lower decoding error compared to BP and to similar methods such as the normalized-BP and offset-BP algorithms.
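As a rough sketch of where such factors act, the toy variable-node update below scales the leave-one-out sum of incoming check messages by a multiplicative factor or shrinks the result by an additive one; the factor values are placeholders, whereas the paper derives them as functions of the code and channel (AWGN or BSC) parameters.

```python
import numpy as np

def corrected_variable_update(channel_llr, check_msgs, mult_factor=None, add_factor=None):
    """Variable-node update with an optional multiplicative or additive correction.

    check_msgs: incoming check-to-variable LLRs for one variable node.
    Returns one outgoing extrinsic LLR per incoming edge. The factor values and
    the exact placement of the corrections are illustrative assumptions.
    """
    c = np.asarray(check_msgs, dtype=float)
    extrinsic = c.sum() - c                       # leave-one-out sum of check messages
    if mult_factor is not None:                   # "multiplicative correction factor"
        extrinsic = mult_factor * extrinsic
    out = channel_llr + extrinsic
    if add_factor is not None:                    # "additive correction factor"
        out = np.sign(out) * np.maximum(np.abs(out) - add_factor, 0.0)
    return out

msgs = [0.8, -0.5, 1.5]
print(corrected_variable_update(1.2, msgs, mult_factor=0.9))
print(corrected_variable_update(1.2, msgs, add_factor=0.1))
```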

5.
Previously, the belief propagation (BP) algorithm has received a lot of attention in the coding community, mostly due to its near-optimum decoding for low-density parity check (LDPC) codes and its connection to turbo decoding. In this paper, we investigate the performance achieved by the BP algorithm for decoding one-step majority logic decodable (OSMLD) codes. The BP algorithm is expressed in terms of likelihood ratios rather than probabilities, as conventionally presented. The proposed algorithm is better suited to the decoding of OSMLD codes with respect to numerical stability, because the weights of their check sums are often much higher than those of the corresponding LDPC codes. Although it has been believed that OSMLD codes are far inferior to LDPC codes, we show that for medium code lengths (say between 200-1000 bits), BP decoding of OSMLD codes can significantly outperform BP decoding of their equivalent LDPC codes. The reasons for this behavior are elaborated.

6.
Non-uniform quantization of messages in low-density parity-check (LDPC) decoding can reduce implementation complexity and mitigate performance loss. But the distribution of the messages varies over the iterative decoding. This letter proposes a variable non-uniform quantized belief propagation (BP) algorithm. BP decoding is analyzed by density evolution with a Gaussian approximation. Since the probability density of the messages can be well approximated by a Gaussian distribution, the distribution of the messages can be tracked during the iterations through an unbiased estimate of their variance. Thus the non-uniform quantization scheme can be optimized to minimize the distortion. Simulation results show that the variable non-uniform quantization scheme achieves better error-rate performance and faster decoding convergence than the conventional non-uniform and uniform quantization schemes.
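The per-iteration adaptation can be pictured with the toy sketch below: it treats the messages as roughly Gaussian, estimates their standard deviation with the unbiased sample variance, and rescales a fixed normalized non-uniform quantizer accordingly. The threshold and level values are made-up placeholders, and the paper's actual distortion-minimizing design via density evolution is not reproduced here.

```python
import numpy as np

def adapt_quantizer(messages, base_thresholds, base_levels):
    """Rescale a normalized non-uniform quantizer to the current message spread.

    Sketch under assumptions: messages are treated as approximately Gaussian,
    their standard deviation is estimated with the unbiased sample variance,
    and a fixed unit-scale quantizer is rescaled by that estimate each iteration.
    """
    msgs = np.asarray(messages, dtype=float)
    sigma = np.sqrt(msgs.var(ddof=1))                  # unbiased variance estimate
    thresholds = sigma * np.asarray(base_thresholds)   # per-iteration rescaling
    levels = sigma * np.asarray(base_levels)
    return levels[np.digitize(msgs, thresholds)]       # map each message to a cell

# Hypothetical unit-scale quantizer (values are illustrative, not optimized).
base_thr = [-2.0, -1.0, -0.4, 0.0, 0.4, 1.0, 2.0]
base_lvl = [-2.8, -1.4, -0.7, -0.2, 0.2, 0.7, 1.4, 2.8]
msgs = np.random.default_rng(1).normal(0.0, 2.0, size=8)
print(np.round(adapt_quantizer(msgs, base_thr, base_lvl), 3))
```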

7.
Low-density parity-check (LDPC) codes are decoded with iterative decoding algorithms, of which belief propagation (BP) is one example. The standard BP algorithm is a parallel (flooding) decoder: when updating all check nodes and bit nodes, it uses the messages produced in the previous iteration. To speed up convergence within a given number of iterations, and building on a study of different schedules such as layered BP (LBP) and shuffled BP (SBP), an improved shuffled iterative decoding algorithm is proposed that changes the order in which nodes are updated. Compared with the ordinary SBP algorithm, the proposed improved SBP algorithm converges twice as fast as conventional belief propagation, while maintaining performance and reducing complexity. Finally, simulation results are given for the LDPC codes of the CMMB standard.

8.
Two simplified versions of the belief propagation algorithm for fast iterative decoding of low-density parity check codes on the additive white Gaussian noise channel are proposed. Both versions are implemented with real additions only, which greatly reduces the decoding complexity compared with belief propagation, in which products of probabilities have to be computed. Also, these two algorithms do not require any knowledge of the channel characteristics. Both algorithms yield a good performance-complexity trade-off and can be efficiently implemented in software as well as in hardware, with possibly quantized received values.
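The flavor of such an addition-only check-node rule can be sketched as follows (essentially the min-sum rule, using only sign manipulations, comparisons, and additions); the two exact variants proposed in the paper may differ in detail.

```python
import numpy as np

def addition_only_check_update(var_msgs):
    """Check-node update using only signs, comparisons and additions (min-sum style).

    var_msgs: incoming variable-to-check LLRs on one check node (assumed non-zero).
    Returns one outgoing LLR per edge. This is a generic min-sum sketch, not
    necessarily identical to either simplified version in the paper.
    """
    v = np.asarray(var_msgs, dtype=float)
    signs = np.sign(v)
    mags = np.abs(v)
    i1, i2 = np.argsort(mags)[:2]           # indices of the two smallest magnitudes
    out_mag = np.full_like(mags, mags[i1])  # leave-one-out minimum is mags[i1] ...
    out_mag[i1] = mags[i2]                  # ... except on the edge holding the minimum
    out_sign = np.prod(signs) * signs       # leave-one-out product of signs
    return out_sign * out_mag

print(addition_only_check_update([0.9, -2.3, 1.1, -0.4]))
```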

9.
In this letter, an iterative decoding algorithm for linear block codes combining reliability-based decoding with adaptive belief propagation decoding is proposed. At each iteration, the soft output values delivered by the adaptive belief propagation algorithm are used as reliability values to perform reduced-order reliability-based decoding of the code considered. This approach bridges the gap between the error performance achieved by lower-order reliability-based decoding algorithms, which remain sub-optimum, and maximum-likelihood decoding, which is too complex to implement for most codes employed in practice. Simulation results for various linear block codes are given and elaborated.

10.
In this paper, we propose a belief-propagation (BP)-based decoding algorithm which utilizes normalization to improve the accuracy of the soft values delivered by a previously proposed simplified BP-based algorithm. The normalization factors can be obtained not only by simulation, but also, importantly, theoretically. This new BP-based algorithm is much simpler to implement than BP decoding as it requires only additions of the normalized received values and is universal, i.e., the decoding is independent of the channel characteristics. Some simulation results are given, which show this new decoding approach can achieve an error performance very close to that of BP on the additive white Gaussian noise channel, especially for low-density parity check (LDPC) codes whose check sums have large weights. The principle of normalization can also be used to improve the performance of the max-log-MAP algorithm in turbo decoding, and some coding gain can be achieved if the code length is long enough.
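One common way to obtain such a normalization factor by simulation is to match the mean of the simplified (min-sum-like) check-node output to the mean of the exact BP output; the Monte Carlo sketch below does this for an assumed check-node degree and BPSK/AWGN noise level, both of which are illustrative choices rather than values from the paper.

```python
import numpy as np

def bp_check_magnitude(mags):
    # exact BP check-node output magnitude via the tanh rule (positive inputs)
    return 2.0 * np.arctanh(np.prod(np.tanh(mags / 2.0)))

def simplified_check_magnitude(mags):
    # simplified (min-sum-like) output magnitude: just the smallest input
    return np.min(mags)

# Monte Carlo estimate of a normalization factor that makes the mean of the
# simplified output match the mean of the exact BP output. The check-node
# degree and channel noise level are illustrative assumptions.
rng = np.random.default_rng(2)
dc, sigma, trials = 6, 0.8, 20000
bp_sum = simp_sum = 0.0
for _ in range(trials):
    llr = 2.0 / sigma**2 * (1.0 + sigma * rng.standard_normal(dc - 1))  # BPSK/AWGN LLRs
    mags = np.abs(llr)
    bp_sum += bp_check_magnitude(mags)
    simp_sum += simplified_check_magnitude(mags)
print("estimated normalization factor:", round(bp_sum / simp_sum, 3))
```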

11.
For linear block codes without a sparse graph representation, there exists an iterative decoding algorithm which combines the traditional reliability-based decoding (RBD) with adaptive belief propagation (ABP) to achieve a good tradeoff between the error performance and decoding complexity. However, in the original design of the iterative scheme, only a one-way flow of soft-information from the ABP-part to the RBD-part is available, hence limiting the performance of iterative processing. In this study, several low-complexity schemes are presented for the RBD-part to produce desirable soft-outputs such that decoded results can be bilaterally exchanged between both of the RBD and ABP parts. Simulation results also verify the superiority of the proposed idea over the conventional design.

12.
Recent studies have shown that sequential belief propagation decoding of low-density parity-check codes can increase the decoding convergence speed while simultaneously improving the asymptotic performance compared to the conventional flooding scheme. Two of the practical sequential decoding schemes known are the ones by Casado et al. [1] in which informed dynamic scheduling is used for scheduling the sequential updates of the messages. In this letter, we propose a two-staged informed dynamic scheduling that unifies and outperforms the two schemes of [1].

13.
Simplified LLR-BP decoding algorithms for LDPC codes with near-optimal performance
The LLR-BP decoding algorithm for LDPC codes is discussed, and two simplified algorithms, one based on polynomial fitting and one on an offset approximation, are studied; the approximation parameters are determined by simulation. Simulation results show that the two simplified algorithms are not only easy to implement in hardware but also achieve near-optimal performance.

14.
To address non-converging oscillating iterations in low-density parity-check (LDPC) decoding at medium-to-high signal-to-noise ratios (SNR), a modified LDPC decoding algorithm based on belief propagation (BP) is proposed, namely the soft-value-zeroing BP algorithm. By setting to zero the extrinsic information passed by variable nodes in oscillating iterations, the algorithm reduces the influence of erroneous channel messages on iterative decoding and considerably improves decoding performance. In addition, a criterion for identifying oscillating nodes is given, which improves the accuracy of that identification. Simulation results show that, in the medium-to-high SNR region and with the same number of decoding iterations, the algorithm achieves better decoding performance than the BP algorithm.
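A minimal sketch of the zeroing step is given below; the oscillation criterion used here (a plain sign flip of the a-posteriori LLR between two consecutive iterations) is an assumption made for illustration, whereas the paper proposes its own criterion for identifying oscillating nodes.

```python
import numpy as np

def zero_oscillating_extrinsic(prev_llr, curr_llr, extrinsic):
    """Zero the extrinsic messages of variable nodes judged to be oscillating.

    prev_llr, curr_llr: a-posteriori LLRs from the previous / current iteration.
    extrinsic: current variable-to-check messages, shape (num_vars, degree).
    The sign-flip criterion below is an illustrative placeholder for the
    paper's oscillating-node decision rule.
    """
    oscillating = np.sign(prev_llr) != np.sign(curr_llr)
    ext = np.array(extrinsic, dtype=float)
    ext[oscillating, :] = 0.0              # suppress messages from unreliable nodes
    return ext, oscillating

prev = np.array([1.2, -0.3, 0.8, -2.0])
curr = np.array([0.9, 0.4, 1.1, -1.7])
ext, osc = zero_oscillating_extrinsic(prev, curr, np.ones((4, 3)))
print(osc)
print(ext)
```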

15.
We derive both upper and lower bounds on the decoding error probability of maximum-likelihood (ML) decoded low-density parity-check (LDPC) codes. The results hold for any binary-input symmetric-output channel. Our results indicate that for various appropriately chosen ensembles of LDPC codes, reliable communication is possible up to channel capacity. However, the ensemble averaged decoding error probability decreases polynomially, and not exponentially. The lower and upper bounds coincide asymptotically, thus showing the tightness of the bounds. However, for ensembles with suitably chosen parameters, the error probability of almost all codes is exponentially decreasing, with an error exponent that can be set arbitrarily close to the standard random coding exponent.

16.
In this letter, we propose two modifications to the belief propagation (BP) decoding algorithm. The modifications are based on reducing the reliability of messages throughout the iteration process, and are particularly effective for short low-density parity-check codes, where the existence of cycles makes the original BP algorithm perform suboptimally. The proposed algorithms, referred to as "normalized BP" and "offset BP," reduce the absolute value of the outgoing log-likelihood ratio messages at variable nodes by using a multiplicative factor and an additive factor, respectively. Simulation results show that both algorithms perform more or less the same, and both outperform BP in error performance.
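The two corrections can be sketched in a few lines: the magnitude of each outgoing variable-node LLR is either scaled by a multiplicative factor or shrunk toward zero by an additive offset, with the sign preserved. The factor values below are placeholders, not the optimized values from the paper.

```python
import numpy as np

def reduce_reliability(llr_out, mode, alpha=0.8, beta=0.15):
    """Reduce the magnitude of outgoing variable-node LLRs, keeping their signs.

    mode="normalized": multiply by a factor alpha < 1.
    mode="offset":     subtract an offset beta from the magnitude (floored at 0).
    alpha and beta are illustrative placeholders, not optimized values.
    """
    llr = np.asarray(llr_out, dtype=float)
    if mode == "normalized":
        return alpha * llr
    if mode == "offset":
        return np.sign(llr) * np.maximum(np.abs(llr) - beta, 0.0)
    raise ValueError(f"unknown mode: {mode}")

msgs = np.array([2.4, -0.1, 0.7, -3.3])
print(reduce_reliability(msgs, "normalized"))
print(reduce_reliability(msgs, "offset"))
```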

17.
Although sequential decoding of convolutional codes gives a very small decoding error probability, the overall reliability is limited by the probability P_G of deficient decoding, the term introduced by Jelinek to refer to decoding failures caused mainly by buffer overflow. The number of computations in sequential decoding has a Pareto distribution, and it is this "heavy tailed" distribution that characterizes P_G. The heavy tailed distribution appears in many fields, and buffer overflow is a typical example of the behaviors in which the heavy tailed distribution plays an important role. In this paper, we give a new bound on a probability in the tail of the heavy tailed distribution and, using the bound, prove the long-standing conjecture on P_G, that is, P_G ≈ constant × 1/(σ^ρ N^(ρ−1)) for a large speed factor σ of the decoder and for a large receive buffer size N, whenever the coding rate R and ρ satisfy E(ρ) = ρR for 0 ≤ ρ ≤ 1.

18.
Bounds on the error probability of maximum likelihood decoding of a binary linear code are considered. The bounds derived use the weight spectrum of the code, and they are tighter than the conventional union bound in the case of large noise in the channel. The bounds derived are applied to a code with an average spectrum, and the result is compared to the random coding exponent. The author shows that the bound considered for the binary symmetric channel case coincides asymptotically with the random coding bound. For the case of the AWGN channel, the author shows that Berlekamp's (1980) tangential bound can be improved, but even this improved bound does not coincide with the random coding bound, although it can be very close to it.

19.
The use of chaotic codes in transmission systems presents many advantages, not only in terms of security, but also to combat multi-path propagation and to allow multiple access. Nevertheless, the main problem of communication with chaos is the design of an experimental, real-time synchronization decoder between transmitter and receiver. In this paper, we suggest using the belief propagation algorithm as a new approach for synchronizing quasi-chaotic signals. In this approach, the transmitter contains a digital chaotic oscillator which is perturbed by the information signal. The receiver consists of a dual system augmented with belief propagation processing, whose aim is to recover the information signal. We suppose that the channel is Gaussian and that synchronization is forced in a first step. Once synchronization is achieved, the information signal modulates the chaotic system and is transmitted over the Gaussian channel. An adaptive belief propagation algorithm is then applied to recover the information signal.

20.
The performance of either structured or random turbo-block codes and binary, systematic block codes operating over the additive white Gaussian noise (AWGN) channel is assessed by upper bounds on the error probabilities of maximum likelihood (ML) decoding. These bounds on the block and bit error probability, which depend respectively on the distance spectrum and the input-output weight enumeration function (IOWEF) of these codes, are compared, for a variety of cases, to the simulated performance of iterative decoding and also to some reported simulated lower bounds on the performance of ML decoders. The comparisons make it possible to assess the efficiency of iterative decoding (as compared to the optimal ML decoding rule) on one hand and the tightness of the examined upper bounds on the other. We focus here on uniformly interleaved and parallel concatenated turbo-Hamming codes, and to that end the IOWEFs of Hamming and turbo-Hamming codes are calculated by an efficient algorithm. The usefulness of the bounds is demonstrated for uniformly interleaved turbo-Hamming codes at rates exceeding the cut-off rate, where the results are compared to the simulated performance of iteratively decoded turbo-Hamming codes with structured and statistical interleavers. We also consider the ensemble performance of 'repeat and accumulate' (RA) codes, a family of serially concatenated turbo-block codes introduced by Divsalar, Jin and McEliece. Although the outer and inner codes possess a very simple structure (a repetitive and a differential encoder, respectively), our upper bounds indicate impressive performance at rates considerably beyond the cut-off rate. This is also evidenced in the literature by computer simulations of the performance of iteratively decoded RA codes with a particular structured interleaver.
