1.
Fossorier M.P.C. Shu Lin 《IEEE transactions on information theory / Professional Technical Group on Information Theory》2000,46(3):1046-1053
This paper investigates the principle of metric differences for trellis decoding of convolutional codes. Based on this differential method, a new algorithm, referred to as differential trellis decoding (DTD), is proposed. DTD offers an alternative to the conventional “add-compare-select” (ACS) method for implementing the Viterbi algorithm.
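For context, the conventional ACS recursion that DTD is positioned against can be sketched as follows. This is a generic illustration of the Viterbi add-compare-select step, not the differential method of the paper; the function and variable names are placeholders.

```python
# Sketch of the conventional add-compare-select (ACS) step performed by the
# Viterbi algorithm at every trellis stage (generic illustration, not DTD).

def acs_step(path_metrics, transitions, branch_metric):
    """path_metrics : dict state -> accumulated metric (lower is better)
    transitions  : iterable of (prev_state, next_state, output_bits)
    branch_metric: function(output_bits) -> metric of that branch
    Returns the updated metrics and the surviving predecessor of each state."""
    new_metrics, survivors = {}, {}
    for prev_state, next_state, out_bits in transitions:
        candidate = path_metrics[prev_state] + branch_metric(out_bits)             # add
        if next_state not in new_metrics or candidate < new_metrics[next_state]:   # compare
            new_metrics[next_state] = candidate                                    # select
            survivors[next_state] = prev_state
    return new_metrics, survivors
```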
2.
《IEEE transactions on information theory / Professional Technical Group on Information Theory》1970,16(6):739-745
A one-step threshold decoding method previously presented for cyclic block codes is shown to apply generally to linear convolutional codes. It is further shown that this method generalizes in a natural way to allow decoding of the received sequence in its unquantized analog form.
3.
Riedel S. 《IEEE transactions on information theory / Professional Technical Group on Information Theory》1998,44(3):1176-1187
A new symbol-by-symbol maximum a posteriori (MAP) decoding algorithm for high-rate convolutional codes using reciprocal dual convolutional codes is presented. The advantage of this approach is a reduction of the computational complexity, since the number of codewords to consider is decreased for codes of rate greater than 1/2. The discussed algorithms fulfil all requirements for iterative (“turbo”) decoding schemes. Simulation results are presented for high-rate parallel concatenated convolutional codes (“turbo” codes) using an AWGN channel or a perfectly interleaved Rayleigh fading channel. It is shown that iterative decoding of high-rate codes results in high-gain, moderate-complexity coding.
4.
Bocharova I.E. Handlery M. Johannesson R. Kudryashov B.D. 《IEEE transactions on information theory / Professional Technical Group on Information Theory》2005,51(5):1880-1891
BEAST is a bidirectional efficient algorithm for searching trees. In this correspondence, BEAST is extended to maximum-likelihood (ML) decoding of block codes obtained via convolutional codes. First it is shown by simulations that the decoding complexity of BEAST is significantly less than that of the Viterbi algorithm. Then asymptotic upper bounds on the BEAST decoding complexity for three important ensembles of codes are derived. They verify BEAST's high efficiency compared to other algorithms. For high rates, the new asymptotic bound for the best ensemble is in fact better than previously known bounds.
5.
Considers trellis decoding of convolutional codes with selectable effort, as measured by decoder complexity. Decoding is described for single parent codes with a variety of complexities, with performance “near” that of the optimal fixed receiver complexity coding system. Effective free distance is examined. Criteria are proposed for ranking parent codes, and some codes found to be best according to the criteria are tabulated. Several codes with effective free distance better than the best code of comparable complexity were found. Asymptotic (high-SNR) performance analysis and error propagation are discussed. Simulation results are also provided.
6.
We present a method for soft-in/soft-out sequential decoding of recursive systematic convolutional codes. The proposed decoder, the twin-stack decoder, is an extension of the well-known ZJ stack decoder, and it uses two stacks. The use of the two stacks lends itself to the generation of soft outputs, and the decoder is easily incorporated into the iterative “turbo” configuration. Under thresholded decoding, it is observed that the decoder is capable of achieving near-maximum a posteriori bit-error rate performance at moderate to high signal-to-noise ratios (SNRs). Also, in the iterative (turbo) configuration, at moderate SNRs (above 2.0 dB), the performance of the proposed decoder is within 1.5 dB of the BCJR algorithm for a 16-state, R=1/3, recursive code, but this difference narrows progressively at higher SNRs. The complexity of the decoder asymptotically decreases (with SNR) as 1/(number of states), providing a good tradeoff between computational burden and performance. The proposed decoder is also within 1.0 dB of other well-known suboptimal soft-output decoding techniques.
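For reference, the classical ZJ stack decoder that the twin-stack decoder builds on can be sketched as a best-first search with a single priority queue; the second stack and the soft-output bookkeeping of the proposed decoder are not shown, and the metric and tree-extension functions are placeholders.

```python
import heapq
from itertools import count

# Minimal sketch of the single-stack ZJ decoder that the twin-stack decoder
# extends (the second stack and soft-output generation are not shown).

def stack_decode(received, extend, is_complete, max_visits=10_000):
    """extend(path, received) -> list of (metric_increment, child_path)
    is_complete(path)      -> True when the path reaches the end of the tree
    Returns the first complete path popped from the stack, i.e. in best-first order."""
    tie = count()                                    # tie-breaker so paths are never compared
    stack = [(0.0, next(tie), ())]                   # (negated cumulative metric, tie, path)
    for _ in range(max_visits):
        if not stack:
            break
        neg_metric, _, path = heapq.heappop(stack)   # heapq is a min-heap, so metrics are negated
        if is_complete(path):
            return path
        for inc, child in extend(path, received):
            heapq.heappush(stack, (neg_metric - inc, next(tie), child))
    return None                                      # effort limit reached (decoding erasure)
```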
7.
Limited search trellis decoding of convolutional codes
Anderson J.B. 《IEEE transactions on information theory / Professional Technical Group on Information Theory》1989,35(5):944-955
The least storage and node computation required by a breadth-first tree or trellis decoder that corrects t errors over the binary symmetric channel is calculated. Breadth-first decoders work with code paths of the same length, without backtracking. The Viterbi algorithm is an exhaustive trellis decoder of this type; other schemes look at a subset of the tree or trellis paths. For random tree codes, theorems about the asymptotic number of paths required and their depth are proved. For concrete convolutional codes, the worst case storage for t error sequences is measured. In both cases the optimal decoder storage has the same simple dependence on t. The M algorithm and algorithms proposed by G.J. Foschini (ibid., vol.IT-23, p.605-9, Sept. 1977) and by S.J. Simmons (Ph.D. diss., Queens Univ., Kingston, Ont., Canada) are optimal, or nearly so; they are all far more efficient than the Viterbi algorithm.
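The M algorithm cited above is the simplest breadth-first scheme of this family: at each depth all retained paths are extended and only the best M are kept. A minimal sketch, with the path-extension and metric functions left as placeholders:

```python
# Sketch of the breadth-first M algorithm: extend all retained paths by one
# trellis/tree depth, rank them by metric, keep the M best, discard the rest.

def m_algorithm(received, extend, depth, M=64):
    """extend(path, metric, received, d) -> list of (child_path, child_metric)."""
    paths = [((), 0.0)]                        # (path, cumulative metric), lower is better
    for d in range(depth):
        candidates = []
        for path, metric in paths:
            candidates.extend(extend(path, metric, received, d))
        candidates.sort(key=lambda pm: pm[1])  # rank by metric
        paths = candidates[:M]                 # breadth-first pruning to M survivors
    return paths[0][0] if paths else None      # best surviving path
```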
8.
Sequential decoding is an attractive technique for achieving the reliability of communication promised by the channel coding theorem. But because it utilizes the Fano metric, its performance is sensitive to channel parameter variations, and it cannot simultaneously minimize both decoding effort and probability of decoding error. Based on the distance properties of the codes, we have derived a new set of metrics which not only overcome these two drawbacks of the Fano metric but also significantly reduce the decoding effort required by sequential decoding.
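For reference, the per-bit Fano metric on a binary symmetric channel, whose dependence on the channel parameter p and the code rate R is the source of the sensitivity noted above, can be written as follows. This is the standard metric, shown only for comparison, not the new distance-based metrics proposed in the paper.

```python
from math import log2

# Standard per-bit Fano metric for a binary symmetric channel with crossover
# probability p and code rate R (bits per channel bit). This is the metric the
# paper proposes to replace, shown here only for reference.

def fano_bit_metric(received_bit, code_bit, p, R):
    if received_bit == code_bit:
        return log2(2.0 * (1.0 - p)) - R   # credit for an agreeing bit
    return log2(2.0 * p) - R               # penalty for a disagreeing bit
```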
9.
Low-complexity ML decoding for convolutional tail-biting codes
《Communications Letters, IEEE》2008,12(12):883-885
Recently, a two-phase maximum-likelihood (ML) decoding algorithm has been proposed for convolutional tail-biting codes [1]. The first phase applies the Viterbi algorithm to obtain the trellis information, and the second phase then employs the A* algorithm to find the ML solution. In this work, we improve the complexity of the A* algorithm by using a new evaluation function. Simulations show that the improved A* algorithm reduces the average decoding complexity of the second phase by more than a factor of five when Eb/N0 ≥ 4 dB.
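The two-phase structure can be sketched as follows: the Viterbi pass supplies per-state cost-to-go estimates that the second pass uses as the heuristic h in an A* search with evaluation function f = g + h. The improved evaluation function proposed in the letter is not reproduced; all interfaces below are placeholders.

```python
import heapq
from itertools import count

# Sketch of the second (A*) phase: a best-first search over the tail-biting
# trellis guided by f = g + h, where h is a cost-to-go estimate obtained from
# the first-phase Viterbi pass. Placeholder interfaces, not the letter's
# improved evaluation function.

def a_star_phase(start_nodes, expand, cost_to_go, is_goal):
    """expand(node)     -> list of (step_cost, child_node)
    cost_to_go(node) -> lower bound on the remaining path cost (from the Viterbi pass)
    is_goal(node)    -> True for a valid tail-biting terminal node."""
    tie = count()
    frontier = [(cost_to_go(n), next(tie), 0.0, n) for n in start_nodes]
    heapq.heapify(frontier)
    while frontier:
        _, _, g, node = heapq.heappop(frontier)
        if is_goal(node):
            return node, g                     # ML path endpoint and its cost
        for step_cost, child in expand(node):
            g_child = g + step_cost
            heapq.heappush(frontier, (g_child + cost_to_go(child), next(tie), g_child, child))
    return None, float("inf")
```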
10.
Joint source-channel decoding of variable-length codes for convolutional codes and turbo codes
Several recent publications have shown that joint source-channel decoding could be a powerful technique to take advantage of residual source redundancy for fixed- and variable-length source codes. This letter gives an in-depth analysis of a low-complexity method recently proposed by Guivarch et al., where the redundancy left by a Huffman encoder is used at a bit level in the channel decoder to improve its performance. Several simulation results are presented, showing for two first-order Markov sources of different sizes that using a priori knowledge of the source statistics yields a significant improvement, either with a Viterbi channel decoder or with a turbo decoder.
11.
This article presents techniques for improving the distribution of the number of stack entries for stack sequential decoding over a hard-quantized channel, with emphasis on high-rate codes. It is shown that, for a class of high-rate b/(b+1) codes, a table-based true high-rate approach can be easily implemented for obtaining a decoding advantage over the punctured approach. Modified algorithms, which significantly improve the distribution of the number of stack entries and decoding time, are proposed for rate 1/N codes and high-rate b/(b+1) codes.
12.
Iterative decoding of binary block and convolutional codes
Hagenauer J. Offer E. Papke L. 《IEEE transactions on information theory / Professional Technical Group on Information Theory》1996,42(2):429-445
Iterative decoding of two-dimensional systematic convolutional codes has been termed “turbo” (de)coding. Using log-likelihood algebra, we show that any decoder can be used which accepts soft inputs (including a priori values) and delivers soft outputs that can be split into three terms: the soft channel and a priori inputs, and the extrinsic value. The extrinsic value is used as an a priori value for the next iteration. Decoding algorithms in the log-likelihood domain are given not only for convolutional codes but also for any linear binary systematic block code. The iteration is controlled by a stop criterion derived from cross entropy, which results in a minimal number of iterations. Optimal and suboptimal decoders with reduced complexity are presented. Simulation results show that very simple component codes are sufficient: block codes are appropriate for high rates and convolutional codes for rates below 2/3. Any combination of block and convolutional component codes is possible. Several interleaving techniques are described. At a bit error rate (BER) of 10^-4 the performance is slightly above or around the bounds given by the cutoff rate for reasonably simple block/convolutional component codes, interleaver sizes below 1000, and three to six iterations.
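The extrinsic-information exchange described in the abstract can be sketched as follows. The constituent soft-in/soft-out decoders, the interleaver, and the exact L-value conventions are placeholders; only the split of the soft output into channel, a priori, and extrinsic terms, and the feedback of the extrinsic term, is illustrated.

```python
# Sketch of the iterative ("turbo") exchange of L-values: each soft-in/soft-out
# decoder's output is split into channel, a priori, and extrinsic terms, and only
# the extrinsic term is passed on as the a priori input of the other decoder.
# siso_1/siso_2, interleave/deinterleave and the L-value layout are placeholders.

def turbo_iterations(L_ch_1, L_ch_2, siso_1, siso_2, interleave, deinterleave, n_iter=6):
    """L_ch_x : dict with key "systematic" -> list of channel L-values seen by decoder x
    siso_x : function(L_ch, L_apriori) -> a posteriori L-values (soft output)."""
    n = len(L_ch_1["systematic"])
    L_apriori = [0.0] * n
    for _ in range(n_iter):
        L_post_1 = siso_1(L_ch_1, L_apriori)
        # extrinsic = soft output minus soft channel input minus a priori input
        L_extr_1 = [L_post_1[k] - L_ch_1["systematic"][k] - L_apriori[k] for k in range(n)]
        L_apriori_2 = interleave(L_extr_1)
        L_post_2 = siso_2(L_ch_2, L_apriori_2)
        L_extr_2 = [L_post_2[k] - L_ch_2["systematic"][k] - L_apriori_2[k] for k in range(n)]
        L_apriori = deinterleave(L_extr_2)     # becomes the a priori value of the next iteration
    return deinterleave(L_post_2)              # final a posteriori L-values for the information bits
```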
13.
Amat A.Gi. Montorsi G. Benedetto S. 《IEEE transactions on information theory / Professional Technical Group on Information Theory》2004,50(5):867-881
This correspondence deals with the design and decoding of high-rate convolutional codes. After proving that every (n,n-1) convolutional code can be reduced to a structure that concatenates a block encoder, associated with the parallel edges, with a convolutional encoder defining the trellis section, the results of an exhaustive search for the optimal (n,n-1) convolutional codes are presented through various tables of best high-rate codes. The search is also extended to find the "best" recursive systematic convolutional encoders to be used as component encoders of parallel concatenated "turbo" codes. A decoding algorithm working on the dual code is introduced (in both multiplicative and additive form) by showing that, with a suitable change of the representation of the soft information passed between constituent decoders in the iterative decoding process, the soft-input soft-output (SISO) modules of the decoder based on the dual code become equal to those used for the original code. A new technique to terminate the code trellis that significantly reduces the rate loss induced by the addition of terminating bits is described. Finally, an inverse puncturing technique applied to the highest rate "mother" code to yield a sequence of almost optimal codes with decreasing rates is proposed. Simulation results applied to the case of parallel concatenated codes show the significant advantages of the newly found codes in terms of performance and decoding complexity.
14.
The performance of iterative decoding and demodulation of serially concatenated convolutional codes and minimum shift keying (MSK) is studied. It is shown that by appropriately combining the trellises of MSK and the inner code, a high-performance coded modulation system can be achieved. Simulation results also confirm that recursive inner codes are essential for a serially concatenated system.
15.
Bidirectional suboptimal breadth-first decoding of convolutional codes is an attractive technique for slowly varying and quasi-static fading channels, as it confines decoding errors caused by loss of the correct path to regions of very heavy noise or interference. This paper compares the performance of such a decoding scheme to the Viterbi algorithm over wideband TDMA indoor radio links where equalization and space diversity are also used to combat dispersive fading and cochannel interference. On the basis of equal computational complexity and equal decoding delay, suboptimal, breadth-first, bidirectional decoding of a long constraint length convolutional code is shown to be superior to Viterbi decoding of a shorter constraint length code. Furthermore, this advantage increases as the outage criterion (in terms of bit error rate) becomes more stringent, which makes bidirectional decoding particularly attractive for data applications and makes channel coding a more attractive alternative to increasing the space diversity order at the receiver.
16.
《IEEE transactions on information theory / Professional Technical Group on Information Theory》1979,25(5):584-591
The long-standing conjecture is established that, for a discrete memoryless channel, there exists a linear convolutional code with infinite constraint length such that the ρth (ρ ≥ 1) moment of the number of F-hypotheses in the Fano sequential decoding algorithm is bounded, provided that the transmission rate R is less than E_0(ρ, r)/ρ, where r(x) is a distribution over the channel input alphabet. A new concept of independence for a finite set of message sequences plays an essential role in averaging a product of likelihood ratios over an ensemble of code sequences in a code tree. A simpler version of the method can be applied to the proof of the conjecture for general tree codes.
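Written out, and assuming the standard Gallager function for a discrete memoryless channel with transition probabilities P(y|x), the rate condition in the abstract reads:

```latex
% Rate condition from the abstract, assuming the standard Gallager function
% E_0(\rho, r) of a discrete memoryless channel with transition probabilities P(y|x):
R < \frac{E_0(\rho, r)}{\rho},
\qquad
E_0(\rho, r) = -\log \sum_{y} \Bigl[ \sum_{x} r(x)\, P(y \mid x)^{1/(1+\rho)} \Bigr]^{1+\rho},
\qquad \rho \ge 1 .
```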
17.
In this paper, an adaptive decoding algorithm for convolutional codes, which is a modification of the Viterbi algorithm (VA), is presented. For a given code, the proposed algorithm yields nearly the same error performance as the VA while requiring a substantially smaller average number of computations. Unlike most of the other suboptimum algorithms, this algorithm is self-synchronizing. If the transmitted path is discarded, the adaptive Viterbi algorithm (AVA) can recover the state corresponding to the transmitted path after a few trellis depths. Using computer simulations over hard and soft 3-bit quantized additive white Gaussian noise channels, it is shown that codes with a constraint length K up to 11 can be used to improve the bit-error performance over the VA with K=7 while maintaining a similar average number of computations. Although a small variability of the computational effort is present with our algorithm, this variability is exponentially distributed, leading to a modest size of the input buffer and, hence, a small probability of overflow.
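The abstract does not spell out the pruning rule; a commonly used adaptive-Viterbi criterion (an assumption here, not taken from the paper) keeps only those states whose path metric lies within a threshold T of the best metric at the current trellis depth, for example:

```python
# Threshold pruning commonly used in adaptive Viterbi decoders (assumed here,
# not taken from the paper): keep only states whose accumulated metric is
# within `threshold` of the best metric at the current depth.

def prune_states(path_metrics, threshold):
    """path_metrics : dict state -> accumulated metric (lower is better)."""
    best = min(path_metrics.values())
    return {s: m for s, m in path_metrics.items() if m <= best + threshold}
```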
18.
19.
《IEEE transactions on information theory / Professional Technical Group on Information Theory》1980,26(3):272-277
A new proof is presented for the existence of block codes whose error probability under maximum likelihood decoding is bounded asymptotically by the random coding bound universally over all discrete memoryless channels. On the basis of this result, the existence of convolutional codes with universally optimum performance is shown. Furthermore, the existence of block codes which attain the expurgated bound universally over all discrete memoryless channels is proved under maximum likelihood decoding.
20.
Marion Jeanne, Christine Guillemot, Thomas Guionnet, Florelle Pauchet 《Signal, Image and Video Processing》2007,1(1):77-87
This paper addresses the problem of error-resilient decoding of bitstreams produced by the CABAC (context-based adaptive binary arithmetic coding) algorithm used in the H.264 video coding standard. The paper describes a maximum a posteriori (MAP) estimation algorithm improving the CABAC decoding performance in the presence of transmission errors. Methods improving the re-synchronization and error-detection capabilities of the decoder are then described. A variant of the CABAC algorithm supporting error detection based on a forbidden interval is presented. The performance of the decoding algorithm is first assessed with theoretical sources and by considering different binarization codes. It is compared against that obtained with Exp-Golomb codes and with a transmission chain making use of an error-correcting code. The approach has been integrated in an H.264/MPEG-4 AVC video coder and decoder. The PSNR gains obtained are discussed.