Similar Documents
20 similar documents retrieved.
1.
We define and apply a new algorithm called the iterative Viterbi decoding algorithm (IVA) to decode a high-rate parity-concatenated TCM system in which a trellis code is used as the inner code and a simple parity-check code is used as the outer code. With trellis shaping, the IVA can achieve a performance 1.25 dB away from the Shannon limit at a BER of 3×10^-5 with low complexity. By augmenting the system with a binary BCH code, the error floor can be reduced to 10^-9 with very little additional cost.

2.
Using linear programming to decode binary linear codes
A new method is given for performing approximate maximum-likelihood (ML) decoding of an arbitrary binary linear code based on observations received from any discrete memoryless symmetric channel. The decoding algorithm is based on a linear programming (LP) relaxation that is defined by a factor graph or parity-check representation of the code. The resulting "LP decoder" generalizes our previous work on turbo-like codes. A precise combinatorial characterization of when the LP decoder succeeds is provided, based on pseudocodewords associated with the factor graph. Our definition of a pseudocodeword unifies other such notions known for iterative algorithms, including "stopping sets," "irreducible closed walks," "trellis cycles," "deviation sets," and "graph covers." The fractional distance d_frac of a code is introduced, which is a lower bound on the classical distance. It is shown that the efficient LP decoder will correct up to ⌈d_frac/2⌉ - 1 errors and that there are codes with d_frac = Ω(n^(1-ε)). An efficient algorithm to compute the fractional distance is presented. Experimental evidence shows a similar performance on low-density parity-check (LDPC) codes between LP decoding and the min-sum and sum-product algorithms. Methods for tightening the LP relaxation to improve performance are also provided.
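A minimal Python sketch of the kind of LP relaxation described above: the decoder minimizes the LLR-weighted sum of bit values over the relaxed polytope cut out by the odd-subset inequalities of each parity check. The (7,4) Hamming parity-check matrix, the BSC crossover probability, and the use of scipy.optimize.linprog are illustrative assumptions, not details taken from the paper.

```python
# Hedged sketch of LP decoding over the relaxed ("fundamental") polytope of a code.
# The (7,4) Hamming matrix, the BSC crossover probability, and scipy's linprog are
# illustrative choices, not taken from the paper.
from itertools import combinations
import numpy as np
from scipy.optimize import linprog

def lp_decode(H, gamma):
    """Minimize sum_i gamma[i] * x[i] over the LP relaxation of the codeword polytope."""
    m, n = H.shape
    A_ub, b_ub = [], []
    for j in range(m):
        nbrs = np.flatnonzero(H[j])                 # variables joined to check j
        for size in range(1, len(nbrs) + 1, 2):     # every odd-sized subset S of N(j)
            for S in combinations(nbrs, size):
                row = np.zeros(n)
                row[list(nbrs)] = -1.0              # -x_i for i in N(j) \ S
                row[list(S)] = 1.0                  # +x_i for i in S
                A_ub.append(row)
                b_ub.append(len(S) - 1)             # sum_S x_i - sum_{N(j)\S} x_i <= |S| - 1
    res = linprog(c=gamma, A_ub=np.array(A_ub), b_ub=np.array(b_ub),
                  bounds=[(0.0, 1.0)] * n, method="highs")
    x = res.x
    return x, bool(np.allclose(x, np.round(x), atol=1e-6))   # integral => ML certificate

# Toy run: (7,4) Hamming code, all-zero codeword over a BSC, second bit received in error.
H = np.array([[1, 1, 0, 1, 1, 0, 0],
              [1, 0, 1, 1, 0, 1, 0],
              [0, 1, 1, 1, 0, 0, 1]])
p = 0.05
gamma = np.log((1 - p) / p) * np.array([1, -1, 1, 1, 1, 1, 1], dtype=float)
x_hat, integral = lp_decode(H, gamma)
print(np.round(x_hat).astype(int), "integral optimum:", integral)
```

An integral optimum of this LP is the ML codeword, while a fractional optimum corresponds to a pseudocodeword of the kind characterized in the abstract.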

3.
A scheme is proposed to decode a tail-biting convolutional code based on its Tanner graph, which is traditionally done using a forward-backward MAP algorithm. Decoding may therefore be performed using a standard sum-product algorithm. Compared with trellis-based decoding, all variables in the Tanner graph are binary, which may lead to reduced complexity. A min-sum algorithm is used to decrease the analogue circuit complexity. Simulations show no significant degradation compared with the more complex traditional methods.

4.
The performance of a system combining low-density parity-check (LDPC) codes with unitary space-time modulation (USTM) over uncorrelated Rayleigh flat-fading multiple-input multiple-output (MIMO) channels is studied. Without channel state information, the concatenated LDPCC-USTM system, decoded with the parallelizable sum-product algorithm (SPA), achieves excellent performance; the effect of different LDPC code ensembles on system performance is also analyzed. Simulation results show that the concatenated LDPCC-USTM system provides a coding gain of nearly 23 dB over the non-concatenated system and of more than 5 dB over a turbo-coded USTM system [6], and that the concatenated system based on irregular LDPC codes gains nearly 1 dB over the one based on regular codes.

5.
Two soft-decision decoding algorithms for the (6, 3, 4) quaternary hexacode are presented. Both algorithms realize half the minimum Euclidean distance of the code. The proposed algorithms are highly practical: using them, bounded-distance decoding of the Golay code and the Leech lattice is performed with at most 187 and 519 real-number operations, respectively. This compares with 651 and 3595 operations, respectively, required by the best known maximum-likelihood decoders (Vardy and Be'ery, 1991, 1993), and 431 and 1007 operations, respectively, required by the bounded-distance decoders (Amrani et al., 1994). We present simulation results for the proposed Leech lattice decoders revealing near-optimal performance. A comparison to known trellis codes is also provided.

6.
The Viterbi algorithm (1967) and conventional serial concatenated codes (CSCC) have been widely applied in digital communication systems over the last 30 years. We show that the Shannon capacity of additive white Gaussian noise (AWGN) channels can be approached by CSCCs and the iterative Viterbi algorithm (IVA). We first study the algebraic properties of CSCCs. We then present the IVA to decode these codes. We also analyze the performance of the IVA and conclude that a better performance can be achieved if we replace the powerful block codes by some simple parity codes. One of the key results in this paper shows that, with a proper design of the decoding method, codes with small loops can be decoded very efficiently using a min-sum type algorithm. The numerical results show that the IVA can closely approach the Shannon sphere-packing lower bound and the Shannon limit. For block sizes ranging from 56 information bits to 11970 information bits, the IVA can perform to within about 1 dB of the Shannon sphere-packing lower bound at a block error rate of 10^-4. We show that the IVA has very low complexity and can be applied to many current standard systems, for example, the Qualcomm code-division multiple-access (CDMA) system and the NASA concatenated system, with very little modification or, in some cases, without any modification.

7.
朱嘉  张海滨  潘宇 《电讯技术》2006,46(5):94-97
Among decoding algorithms for LDPC codes, the sum-product algorithm gives the best performance but has high complexity, while the min-sum algorithm is simple to implement but performs considerably worse. To address this performance-complexity trade-off, min-sum algorithms with correction terms have become a topic of active research. This paper studies a modified min-sum algorithm whose performance is close to that of the sum-product algorithm and simplifies the way the correction term is applied; the simplified algorithm still performs very close to the sum-product algorithm, while its implementation complexity is markedly lower than that of the original modified min-sum algorithm.
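As an illustration of the kind of correction term such algorithms apply, here is a hedged Python sketch of a generic offset-corrected min-sum check-node update; the offset value β = 0.15 is a placeholder and this is not the specific simplification studied in the paper.

```python
# Hedged sketch of an offset-corrected ("modified") min-sum check-node update.
# The offset beta = 0.15 is an illustrative constant, not the simplified correction
# proposed in the paper.
import numpy as np

def offset_min_sum_check_update(incoming, beta=0.15):
    """incoming: LLR messages m_{v->c} from the variable nodes attached to one check.
    Returns the extrinsic outgoing messages m_{c->v}, one per edge."""
    incoming = np.asarray(incoming, dtype=float)
    out = np.empty_like(incoming)
    for k in range(len(incoming)):
        others = np.delete(incoming, k)                  # exclude the target edge
        sign = np.prod(np.sign(others))                  # parity from the other messages
        mag = max(np.min(np.abs(others)) - beta, 0.0)    # min magnitude, offset-corrected
        out[k] = sign * mag
    return out

print(offset_min_sum_check_update([1.2, -0.4, 2.3, 0.9]))
```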

8.
Conventional iterative decoding with flooding or parallel schedule can be formulated as a fixed-point problem solved iteratively by a successive substitution (SS) method. In this paper, we investigate the dynamics of a continuous-time (asynchronous) analog implementation of iterative decoding, and show that it can be approximated as the application of the well-known successive relaxation (SR) method for solving the fixed-point problem. We observe that SR with the optimal relaxation factor can considerably improve the error-rate performance of iterative decoding for short low-density parity-check (LDPC) codes, compared with SS. Our simulation results for the application of SR to belief propagation (sum-product) and min-sum algorithms demonstrate improvements of up to about 0.7 dB over the standard SS for randomly constructed LDPC codes. The improvement in performance increases with the maximum number of iterations, and by accordingly reducing the relaxation factor. The asymptotic result, corresponding to an infinite maximum number of iterations and infinitesimal relaxation factor, represents the steady-state performance of analog iterative decoding. This means that under ideal circumstances, continuous-time (asynchronous) analog decoders can outperform their discrete-time (synchronous) digital counterparts by a large margin. Our results also indicate that with the assumption of a truncated Gaussian distribution for the random delays among computational modules, the error-rate performance of the analog decoder, particularly in steady state, is rather independent of the variance of the distribution. The proposed simple model for analog decoding, and the associated performance curves, can be used as an "ideal analog decoder" benchmark for performance evaluation of analog decoding circuits.
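A hedged Python sketch of the successive relaxation idea described above: rather than replacing each message with its newly computed value (successive substitution), the message is moved only a fraction β toward it. The update map f, the relaxation factor, and the toy fixed-point problem are placeholders, not the paper's decoder.

```python
# Hedged sketch of the successive relaxation (SR) update: each message moves only a
# fraction beta toward its newly computed value. The update map f, the relaxation
# factor, and the toy fixed point are placeholders, not the paper's decoder.
import numpy as np

def sr_iterate(f, m0, beta=0.2, max_iter=500, tol=1e-9):
    """Solve m = f(m) by the relaxed iteration m <- (1 - beta) * m + beta * f(m)."""
    m = np.asarray(m0, dtype=float)
    for _ in range(max_iter):
        m_new = (1.0 - beta) * m + beta * f(m)
        if np.max(np.abs(m_new - m)) < tol:
            return m_new
        m = m_new
    return m

# Toy usage: a contraction mapping stands in for one decoding iteration.
f = lambda m: np.tanh(m) + 0.5
print(sr_iterate(f, m0=np.zeros(3)))              # relaxed iteration (SR)
print(sr_iterate(f, m0=np.zeros(3), beta=1.0))    # beta = 1 recovers successive substitution
```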

9.
The bit-error rate (BER) performance of new iterative decoding algorithms (e.g., turbo decoding) is achieved at the expense of a computationally burdensome decoding procedure. We present a method called early detection that can be used to reduce the computational complexity of a variety of iterative decoders. Using a confidence criterion, some information symbols, state variables, and codeword symbols are detected early on during decoding. In this way, the computational complexity of further processing is reduced with a controllable increase in the BER. We present an easily implemented instance of this algorithm, called trellis splicing, that can be used with turbo decoding. For a simulated system of this type, we obtain a reduction in the computational complexity of up to a factor of four, relative to a turbo decoder that obtains the same increase in the BER by performing fewer iterations.

10.
The concatenated coding system recommended by the CCSDS (Consultative Committee for Space Data Systems) uses an outer (255,223) Reed-Solomon (RS) code based on 8-bit symbols, followed by a block interleaver and an inner rate-1/2 convolutional code with memory 6. Viterbi decoding is assumed. Two new decoding procedures based on repeated decoding trials and exchange of information between the two decoders and the deinterleaver are proposed. In the first one, where the improvement is 0.3-0.4 dB, only the RS decoder performs repeated trials. In the second one, where the improvement is 0.5-0.6 dB, both decoders perform repeated decoding trials and decoding information is exchanged between them.

11.
曾蓉  梁钊 《电讯技术》2004,44(6):93-96
LDPC codes are linear block codes that can approach the Shannon limit; they can be constructed from sparse parity-check matrices and can also be represented by factor graphs. Several decoding methods have been proposed for the different constructions of LDPC codes. This paper introduces the factor-graph-based construction of LDPC codes, analyzes the basic principle of the sum-product algorithm (SPA), and finally discusses in detail the process of decoding LDPC codes with the SPA.
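To make the SPA check-node rule concrete, here is a minimal Python sketch of the LLR-domain "tanh rule" update on one check node; the message values are arbitrary toy numbers and this is a generic illustration, not the construction discussed in the article.

```python
# Hedged sketch of the LLR-domain sum-product ("tanh rule") check-node update; the
# message values are arbitrary toy numbers, not the codes constructed in the article.
import numpy as np

def spa_check_update(incoming):
    """incoming: LLR messages m_{v->c} on the edges of one check node.
    Returns extrinsic m_{c->v} via tanh(m_out/2) = prod_{other edges} tanh(m_in/2)."""
    t = np.tanh(np.asarray(incoming, dtype=float) / 2.0)
    out = np.empty_like(t)
    for k in range(len(t)):
        prod_others = np.prod(np.delete(t, k))
        # clip keeps arctanh finite when the other messages are very confident
        out[k] = 2.0 * np.arctanh(np.clip(prod_others, -0.999999, 0.999999))
    return out

print(spa_check_update([1.2, -0.4, 2.3, 0.9]))
```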

12.
Codes on graphs: normal realizations
A generalized state realization of the Wiberg (1996) type is called normal if symbol variables have degree 1 and state variables have degree 2. A natural graphical model of such a realization has leaf edges representing symbols, ordinary edges representing states, and vertices representing local constraints. Such a graph can be decoded by any version of the sum-product algorithm. Any state realization of a code can be put into normal form without essential change in the corresponding graph or in its decoding complexity. Group or linear codes are generated by group or linear state realizations. On a cycle-free graph, there exists a well-defined minimal canonical realization, and the sum-product algorithm is exact. However, the cut-set bound shows that graphs with cycles may have a superior performance-complexity tradeoff, although the sum-product algorithm is then inexact and iterative, and minimal realizations are not well-defined. Efficient cyclic and cycle-free realizations of Reed-Muller (RM) codes are given as examples. The dual of a normal group realization, appropriately defined, generates the dual group code. The dual realization has the same graph topology as the primal realization, replaces symbol and state variables by their character groups, and replaces primal local constraints by their duals. This fundamental result has many applications, including to dual state spaces, dual minimal trellises, duals to Tanner (1981) graphs, dual input/output (I/O) systems, and dual kernel and image representations. Finally, a group code may be decoded using the dual graph, with appropriate Fourier transforms of the inputs and outputs; this can simplify decoding of high-rate codes.

13.
Minimum mean square error (MMSE) decoding in a large-scale sensor network which employs distributed quantization is considered. Given that the computational complexity of the optimal decoder is exponential in the network size, we present a framework based on Bayesian networks for designing a near-optimal decoder whose complexity is only linear in network size (hence scalable). In this approach, a complexity-constrained factor graph, which approximately represents the prior joint distribution of the sensor outputs, is obtained by constructing an equivalent Bayesian network using the maximum likelihood (ML) criterion. The decoder executes the sum-product algorithm on the simplified factor graph. Our simulation results have shown that the scalable decoders constructed using the proposed approach perform close to optimal, with both Gaussian and non-Gaussian sensor data.

14.
Low-rate turbo-Hadamard codes
This paper is concerned with a class of low-rate codes constructed from Hadamard code arrays. A recursive encoding principle is employed to introduce an interleaving gain. Very simple trellis codes with only two or four states are sufficient for this purpose, and the decoding cost involved in the trellis part is typically negligible. Both simulation and analytical results are provided to demonstrate the advantages of the proposed scheme. The proposed scheme is of theoretical interest as it can achieve a performance of BER = 10^-5 at Eb/N0 ≈ -1.2 dB (only about 0.4 dB away from the ultimate low-rate Shannon limit) with an information block size of 65534. To the authors' knowledge, this is the best result achieved to date with respect to the ultimate Shannon limit. With regard to practical issues, the decoding complexity of the proposed code is considerably lower than that of existing low-rate turbo-type codes with comparable performance.

15.
This brief examines different parity-check node decoding algorithms for low-density parity-check (LDPC) codes, seeking to recoup the performance loss incurred by the min-sum approximation compared to sum-product decoding. Two degree-matched check node decoding approximations that depend on the check node degree d_c are presented. Both have low complexity and can be applied to any degree distribution. Simulation results show near sum-product decoding performance for both degree-matched check node approximations for regular and irregular LDPC codes.
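As a loosely related illustration only, here is a hedged Python sketch of one simple way a check-degree-dependent correction can be applied to min-sum: the output magnitude is scaled by a factor chosen according to the check-node degree d_c. The scaling table is a hypothetical placeholder; it is not the pair of approximations derived in the brief.

```python
# Hedged sketch of one simple way a check-degree-dependent correction can be applied:
# the min-sum magnitude is scaled by a factor chosen per check-node degree d_c. The
# alpha table is a hypothetical placeholder, not the approximations derived in the brief.
import numpy as np

ALPHA_BY_DEGREE = {3: 0.90, 4: 0.85, 6: 0.80, 8: 0.78}      # hypothetical scaling factors

def degree_scaled_min_sum_update(incoming):
    incoming = np.asarray(incoming, dtype=float)
    alpha = ALPHA_BY_DEGREE.get(len(incoming), 0.8)          # factor selected by d_c
    out = np.empty_like(incoming)
    for k in range(len(incoming)):
        others = np.delete(incoming, k)
        out[k] = np.prod(np.sign(others)) * alpha * np.min(np.abs(others))
    return out

print(degree_scaled_min_sum_update([1.2, -0.4, 2.3, 0.9]))
```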

16.
In this letter, we show that a concatenated zigzag code can be viewed as a low-density parity-check (LDPC) code. Based on the bipartite graph representation for such a parallel-concatenated code, various sum-product based decoding algorithms are introduced and compared. The results show that the improved versions of the sum-product algorithm exhibit a better convergence rate while maintaining the essential parallel form.

17.
We study a wide range of graph-based message-passing schedules for iterative decoding of low-density parity-check (LDPC) codes. Using the Tanner graph (TG) of the code and for different nodes and edges of the graph, we relate the first iteration in which the corresponding messages deviate from their optimal value (corresponding to a cycle-free graph) to the girths and the lengths of the shortest closed walks in the graph. Using this result, we propose schedules, which are designed based on the distribution of girths and closed walks in the TG of the code, and categorize them as node based versus edge based, unidirectional versus bidirectional, and deterministic versus probabilistic. These schedules, in some cases, outperform the previously known schedules, and in other cases, provide less complex alternatives with more or less the same performance. The performance/complexity tradeoff and the best choice of schedule appear to depend not only on the girth and closed-walk distributions of the TG, but also on the iterative decoding algorithm and channel characteristics. We examine the application of schedules to belief propagation (sum-product) over additive white Gaussian noise (AWGN) and Rayleigh fading channels, min-sum (max-sum) over an AWGN channel, and Gallager's algorithm A over a binary symmetric channel.
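For concreteness, here is a hedged Python sketch contrasting a flooding schedule with a simple check-node-serial schedule for min-sum decoding on a toy Tanner graph; the code structure, the stopping rule, and the (7,4) Hamming example are illustrative assumptions, not the girth- and closed-walk-based schedules designed in the paper.

```python
# Hedged sketch contrasting a flooding schedule with a check-node-serial (layered)
# schedule for min-sum decoding on a toy Tanner graph; the stopping rule and the
# (7,4) Hamming example are illustrative, not the schedules designed in the paper.
import numpy as np

def min_sum_decode(H, llr, schedule="flooding", max_iter=50):
    m, n = H.shape
    checks = [np.flatnonzero(H[j]) for j in range(m)]
    msg_cv = np.zeros((m, n))                         # check-to-variable messages
    hard = (llr < 0).astype(int)
    for _ in range(max_iter):
        # flooding: all checks updated from the same posterior; serial: one check at a time
        groups = [range(m)] if schedule == "flooding" else [[j] for j in range(m)]
        for group in groups:
            post = llr + msg_cv.sum(axis=0)           # posterior LLRs before this group
            for j in group:
                vs = checks[j]
                incoming = post[vs] - msg_cv[j, vs]   # extrinsic variable-to-check messages
                for k, v in enumerate(vs):
                    others = np.delete(incoming, k)
                    msg_cv[j, v] = np.prod(np.sign(others)) * np.min(np.abs(others))
        post = llr + msg_cv.sum(axis=0)
        hard = (post < 0).astype(int)
        if not np.any((H @ hard) % 2):                # syndrome check: stop if satisfied
            break
    return hard

H = np.array([[1, 1, 0, 1, 1, 0, 0],
              [1, 0, 1, 1, 0, 1, 0],
              [0, 1, 1, 1, 0, 0, 1]])
llr = 2.0 * np.array([1, -1, 1, 1, 1, 1, 1])          # all-zero word, second bit unreliable
print(min_sum_decode(H, llr, "flooding"), min_sum_decode(H, llr, "serial"))
```

In the serial variant, new check messages influence later updates within the same iteration, whereas flooding uses only the previous iteration's messages; the paper compares much richer families of such schedules.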

18.
Probabilistic algorithms are given for constructing good large-constraint-length trellis codes for use with sequential decoding that can achieve the channel cutoff rate bound at a bit error rate (BER) of 10^-5 to 10^-6. The algorithms are motivated by the random coding principle that an arbitrary selection of code symbols will produce a good code with high probability. One algorithm begins by choosing a relatively small set of codes randomly. The error performance of each of these codes is evaluated using sequential decoding, and the code with the best performance among the chosen set is retained. Another algorithm treats the code construction as a combinatorial optimization problem and uses simulated annealing to direct the code search. Trellis codes for 8-PSK and 16-QAM constellations with constraint lengths v up to 20 are obtained. Simulation results with sequential decoding show that these codes reach the channel cutoff rate bound at a BER of 10^-5 to 10^-6 and achieve 5.0-6.35 dB real coding gains over uncoded systems with the same spectral efficiency, and up to 2.0 dB real coding gains over 64-state trellis codes using Viterbi decoding.
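A hedged Python skeleton of the simulated-annealing search mentioned above: a candidate code is perturbed by flipping one generator tap, and worse candidates are accepted with a temperature-controlled probability. The estimate_ber() cost function, the tap representation, and the cooling schedule are hypothetical placeholders; in the paper, the cost would come from sequential-decoding simulations.

```python
# Hedged skeleton of a simulated-annealing code search. estimate_ber(), the tap
# representation, and the cooling schedule are hypothetical placeholders; in the paper
# the cost would come from sequential-decoding simulations.
import math, random

def estimate_ber(taps):
    # Placeholder cost: deterministic pseudo-random score standing in for a BER estimate.
    return random.Random(hash(tuple(taps))).random()

def anneal_code_search(n_taps=20, t0=1.0, cooling=0.95, steps=500):
    taps = [random.randint(0, 1) for _ in range(n_taps)]      # random initial code
    cost, temp = estimate_ber(taps), t0
    best, best_cost = taps[:], cost
    for _ in range(steps):
        cand = taps[:]
        cand[random.randrange(n_taps)] ^= 1                   # flip one tap
        cand_cost = estimate_ber(cand)
        if cand_cost < cost or random.random() < math.exp((cost - cand_cost) / temp):
            taps, cost = cand, cand_cost                      # accept downhill, or uphill with some probability
            if cost < best_cost:
                best, best_cost = taps[:], cost
        temp *= cooling                                       # cool down
    return best, best_cost

print(anneal_code_search())
```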

19.
Joint source/channel decoders that use the residual redundancy in the source are investigated for differential pulse code modulation (DPCM) picture transmission over a binary symmetric channel. Markov sequence decoders employing the Viterbi algorithm that use first-order source statistics are reviewed, and generalized for decoders that use second-order source statistics. To make optimal use of the source correlation in both horizontal and vertical directions, it is necessary to generalize the conventional Viterbi decoding algorithm to a trellis of one higher dimension. The paths through the trellis become two-dimensional "sheets"; thus, the technique is coined "sheet decoding". By objective [reconstruction signal-to-noise ratio (SNR)] and subjective measures, it is found that the sheet decoders outperform the Markov sequence decoders that use a first-order Markov model, and outperform two other known decoders (modified maximum a posteriori probability and maximal SNR) that use a second-order Markov model. Moreover, it is found that the use of a simple rate-2/3 block code in conjunction with Markov model-aided decoding (MMAD) offers significant performance improvement for a 2-bit DPCM system. For the example Lenna image, it is observed that the rate-2/3 block code is superior to a rate-2/3 convolutional code for channel-error rates higher than 0.035. The block code is easily incorporated into any of the MMAD DPCM systems and results in a 2-bit MMAD DPCM system that significantly outperforms the noncoded 3-bit MMAD DPCM systems for channel-error rates higher than 0.04.

20.
An iterative trellis search technique is described for the maximum-likelihood (ML) soft-decision decoding of block codes. The proposed technique derives its motivation from the fact that a given block code may be a subcode of a parent code whose associated trellis has substantially fewer edges. Through the use of list-Viterbi (1967) decoding and an iterative algorithm, the proposed technique allows for the use of a trellis for the parent code in the ML decoding of the desired subcode. Complexity and performance analyses, as well as details of potential implementations, indicate a substantial reduction in decoding complexity for linear block codes of practical length while achieving ML or near-ML soft-decision performance.
