Similar Literature
20 similar articles found (search time: 31 ms)
1.
Probabilistic algorithms are given for constructing good large-constraint-length trellis codes for use with sequential decoding that can achieve the channel cutoff rate bound at a bit error rate (BER) of 10⁻⁵–10⁻⁶. The algorithms are motivated by the random coding principle that an arbitrary selection of code symbols will produce a good code with high probability. One algorithm begins by choosing a relatively small set of codes randomly. The error performance of each of these codes is evaluated using sequential decoding, and the code with the best performance among the chosen set is retained. Another algorithm treats the code construction as a combinatorial optimization problem and uses simulated annealing to direct the code search. Trellis codes for 8-PSK and 16-QAM constellations with constraint lengths ν up to 20 are obtained. Simulation results with sequential decoding show that these codes reach the channel cutoff rate bound at a BER of 10⁻⁵–10⁻⁶ and achieve 5.0–6.35 dB real coding gains over uncoded systems with the same spectral efficiency, and up to 2.0 dB real coding gains over 64-state trellis codes using Viterbi decoding.
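The simulated-annealing code search described above can be illustrated with a generic annealing loop. This is a minimal sketch only: the cost function below is a hypothetical stand-in (Hamming distance of a tap vector from an arbitrary target), whereas the paper evaluates candidate codes by simulated sequential decoding of their BER.

```python
import math
import random

def simulated_annealing(cost, initial, neighbor, t0=1.0, alpha=0.95,
                        steps=500, seed=0):
    """Generic simulated annealing: accept a worse candidate with
    probability exp(-delta/T), cool T geometrically, and keep the
    best state seen so far."""
    rng = random.Random(seed)
    state, c = initial, cost(initial)
    best, best_c = state, c
    t = t0
    for _ in range(steps):
        cand = neighbor(state, rng)
        delta = cost(cand) - c
        if delta <= 0 or rng.random() < math.exp(-delta / t):
            state, c = cand, cost(cand)
            if c < best_c:
                best, best_c = state, c
        t *= alpha
    return best, best_c

# Hypothetical stand-in for the code-quality measurement: distance of a
# tap vector from an arbitrary target (a real search would estimate the
# BER of each candidate code by simulated sequential decoding).
TARGET = (1, 0, 1, 1, 0, 1, 0, 0)

def toy_cost(taps):
    return sum(a != b for a, b in zip(taps, TARGET))

def flip_one_tap(taps, rng):
    i = rng.randrange(len(taps))
    return taps[:i] + (1 - taps[i],) + taps[i + 1:]

best_taps, best_cost = simulated_annealing(toy_cost, (0,) * 8, flip_one_tap)
```

Because annealing occasionally accepts uphill moves, it can escape local minima that a pure greedy search over the randomly chosen candidate set would be stuck in.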

2.
This paper presents a two-stage turbo-coding scheme for Reed-Solomon (RS) codes through binary decomposition and self-concatenation. In this scheme, the binary image of an RS code over GF(2^m) is first decomposed into a set of binary component codes with relatively small trellis complexities. Then the RS code is formatted as a self-concatenated code with itself as the outer code and the binary component codes as the inner codes in a turbo-coding arrangement. In decoding, the inner codes are decoded with turbo decoding and the outer code is decoded with either an algebraic decoding algorithm or a reliability-based decoding algorithm. The outer and inner decoders interact during each decoding iteration. For RS codes of lengths up to 255, the proposed two-stage coding scheme is practically implementable and provides a significant coding gain over conventional algebraic and reliability-based decoding algorithms.

3.
Erasure-free sequential decoding of trellis codes
An erasure-free sequential decoding algorithm for trellis codes, called the buffer looking algorithm (BLA), is introduced. Several versions of the algorithm can be obtained by choosing certain parameters and selecting a resynchronization scheme. These can be categorized as block decoding or continuous decoding, depending on the resynchronization scheme. Block decoding is guaranteed to resynchronize at the beginning of each block, but suffers some rate loss when the block length is relatively short. The performance of a typical block decoding scheme is analyzed, and we show that significant coding gains over Viterbi decoding can be achieved with much less computational effort. A resynchronization scheme is proposed for continuous sequential decoding. It is shown by analysis and simulation that continuous sequential decoding using this scheme has a high probability of resynchronizing successfully. This new resynchronization scheme solves the rate loss problem resulting from block decoding. The channel cutoff rate, demodulator quantization, and the tail's influence on performance are also discussed. Although this paper considers only the decoding of trellis codes, the algorithm can also be applied to the decoding of convolutional codes.

4.
The coding scheme uses a set of n convolutional codes multiplexed into an inner code and a (n,n-1) single-parity-check code serving as the outer code. Each of the inner convolutional codes is decoded independently, with maximum-likelihood decoding being achieved using n parallel implementations of the Viterbi algorithm. The Viterbi decoding is followed by additional outer soft-decision single-parity-check decoding. Considering n=12 and the set of short constraint length K=3, rate 1/2 convolutional codes, it is shown that the performance of the concatenated scheme is comparable to the performance of the constraint length K=7, rate 1/2 convolutional code with standard soft-decision Viterbi decoding. Simulation results are presented for the K=3, rate 1/2 as well as for the punctured K=3, rate 2/3 and rate 3/4 inner convolutional codes. The performance of the proposed concatenated scheme using a set of K=7, rate 1/2 inner convolutional codes is given.
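The (n, n-1) single-parity-check outer code used above is simple enough to sketch. This is a minimal hard-decision encode/check pair for illustration only, not the paper's soft-decision SPC decoder:

```python
def spc_encode(bits):
    """(n, n-1) single-parity-check code: append one bit so the
    codeword has even overall parity."""
    return list(bits) + [sum(bits) % 2]

def spc_check(word):
    """A received hard-decision word passes iff its parity is even;
    any single bit error is detected, though not located."""
    return sum(word) % 2 == 0
```

In the concatenated scheme, the parity constraint is exploited with soft decisions across the n Viterbi-decoded streams rather than by this hard check.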

5.
We present a trellis-based maximum-likelihood soft-decision sequential decoding algorithm (MLSDA) for binary convolutional codes. Simulation results show that, for (2, 1, 6) and (2, 1, 16) codes antipodally transmitted over the AWGN channel, the average computational effort required by the algorithm is several orders of magnitude less than that of the Viterbi algorithm. Also shown via simulations upon the same system models is that, under moderate SNR, the algorithm is about four times faster than the conventional sequential decoding algorithm (i.e., stack algorithm with Fano metric) having comparable bit-error probability.
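The conventional stack (ZJ) algorithm with Fano metric that the MLSDA is compared against can be sketched for a toy case. The choices below are illustrative assumptions, not the paper's system model: a (7,5) rate-1/2, memory-2 code instead of the (2,1,6)/(2,1,16) codes, and a hard-decision BSC with crossover probability p instead of antipodal AWGN transmission.

```python
import heapq
import math

G = (0b111, 0b101)  # rate-1/2, memory-2 generators (7, 5) in octal

def conv_encode(bits):
    """Encode with the (7,5) convolutional code, zero-terminated."""
    state, out = 0, []
    for b in list(bits) + [0, 0]:                # two tail bits flush memory
        reg = (b << 2) | state
        out += [bin(reg & g).count("1") % 2 for g in G]
        state = reg >> 1
    return out

def fano_metric(rx_pair, tx_pair, p=0.1, rate=0.5):
    """Fano metric increment for a hard-decision BSC with crossover p."""
    m = 0.0
    for r, t in zip(rx_pair, tx_pair):
        m += (math.log2(2 * (1 - p)) if r == t else math.log2(2 * p)) - rate
    return m

def stack_decode(rx, n_info, p=0.1):
    """ZJ stack algorithm: repeatedly extend the partial path with the
    largest Fano metric until a full-length path is popped."""
    total = n_info + 2                           # info bits plus tail
    heap = [(0.0, 0, 0, ())]                     # (-metric, depth, state, path)
    while heap:
        neg_m, depth, state, path = heapq.heappop(heap)
        if depth == total:
            return list(path[:n_info])
        for b in ((0,) if depth >= n_info else (0, 1)):
            reg = (b << 2) | state
            tx = [bin(reg & g).count("1") % 2 for g in G]
            m = -neg_m + fano_metric(rx[2 * depth:2 * depth + 2], tx, p)
            heapq.heappush(heap, (-m, depth + 1, reg >> 1, path + (b,)))
    return []
```

The priority queue stands in for the "stack": unlike Viterbi decoding, the work done adapts to the noise, which is the source of the computational savings the abstract reports.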

6.
This paper proposes a concatenated coding scheme that uses a Reed-Solomon (RS) product code as the outer code and a convolutional code as the inner code, with an interleaving pattern generated by a congruential vector used to permute the RS code symbols between the inner and outer codes. Iterative decoding of this concatenated code is based on soft decoding algorithms for the component codes. Once the maximum number of iterations is reached, a method for correcting residual errors by computing the syndromes of the RS code is applied, further improving the bit error performance of the system. Simulation results show that in the AWGN channel, at a bit error rate of 10⁻⁵, the new system achieves a coding gain of about 0.4 dB over an iteratively decoded concatenated RS/convolutional code.

7.
Although sequential decoding of convolutional codes gives a very small decoding error probability, the overall reliability is limited by the probability P_G of deficient decoding, the term introduced by Jelinek to refer to decoding failures caused mainly by buffer overflow. The number of computational efforts in sequential decoding has the Pareto distribution, and it is this "heavy tailed" distribution that characterizes P_G. The heavy tailed distribution appears in many fields, and buffer overflow is a typical example of the behaviors in which the heavy tailed distribution plays an important role. In this paper, we give a new bound on a probability in the tail of the heavy tailed distribution and, using the bound, prove the long-standing conjecture on P_G, namely that P_G ≈ constant × 1/(σ^ρ N^(ρ-1)) for a large speed factor σ of the decoder and for a large receive buffer size N, whenever the coding rate R and ρ satisfy E(ρ) = ρR for 0 ≤ ρ ≤ 1.

8.
The loss in quantizing coded symbols in the additive white Gaussian noise (AWGN) channel with binary phase-shift keying (BPSK) or quadrature phase-shift keying (QPSK) modulation is discussed. A quantization scheme and branch metric calculation method are presented. For the uniformly quantized AWGN channel, cutoff rate is used to determine the step size and the smallest number of quantization bits needed for a given bit-signal-to-noise ratio (E_b/N_0) loss. A nine-level quantizer is presented, along with 3-b branch metrics for a rate-1/2 code, which causes an E_b/N_0 loss of only 0.14 dB. These results also apply to soft-decision decoding of block codes. A tight upper bound is derived for the range of path metrics in a Viterbi decoder. The calculations are verified by simulations of several convolutional codes, including the memory-14, rate-1/4 or -1/6 codes used by the big Viterbi decoders at JPL.
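A uniform quantizer of the kind analyzed above can be sketched as follows. The 3-bit, 8-level layout and the step size are illustrative defaults of this sketch, not the paper's nine-level design or its cutoff-rate-optimized step size:

```python
def quantize(y, bits=3, step=0.5):
    """Uniformly quantize a soft demodulator output into 2**bits
    integer levels centred on zero; values beyond the outermost
    thresholds saturate (clipping)."""
    levels = 2 ** bits
    q = int(round(y / step)) + levels // 2
    return max(0, min(levels - 1, q))
```

The quantized level can then index a small table of precomputed branch metrics, which is what keeps the per-branch work in a Viterbi or sequential decoder to a few integer additions.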

9.
Majority-logic-like decoding is an outer concatenated code decoding technique using the structure of a binary majority logic code. It is shown that it is easy to adapt such a technique to handle the case where the decoder is given an ordered list of two or more prospective candidates for each inner code symbol. Large reductions in failure probability can be achieved. Simulation results are shown for both block and convolutional codes. Punctured convolutional codes allow a convenient flexibility of rate while retaining high decoding power. For example, a (856, 500) terminated convolutional code with an average of 180 random first-choice symbol errors can correct all the errors in a simple manner about 97% of the time, with the aid of second-choice values. A (856, 500) maximum-distance block code could correct only up to 178 errors based on guaranteed correction capability and would be extremely complex.
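Majority-logic decoding rests on estimating each bit from a vote among check sums. As a toy illustration of the voting principle only (not the concatenated list-decoding scheme above, whose votes come from orthogonal parity checks), a bitwise majority vote over repeated hard-decision copies of a word:

```python
def majority_vote(copies):
    """Decide each bit position by majority across an odd number of
    received hard-decision copies of the same word."""
    n = len(copies)
    return [1 if 2 * sum(col) > n else 0 for col in zip(*copies)]
```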

10.
We present a method for soft-in/soft-out sequential decoding of recursive systematic convolutional codes. The proposed decoder, the twin-stack decoder, is an extension of the well-known ZJ stack decoder, and it uses two stacks. The use of the two stacks lends itself to the generation of soft outputs, and the decoder is easily incorporated into the iterative "turbo" configuration. Under thresholded decoding, it is observed that the decoder is capable of achieving near-maximum a posteriori bit-error rate performance at moderate to high signal-to-noise ratios (SNRs). Also, in the iterative (turbo) configuration, at moderate SNRs (above 2.0 dB), the performance of the proposed decoder is within 1.5 dB of the BCJR algorithm for a 16-state, R=1/3, recursive code, but this difference narrows progressively at higher SNRs. The complexity of the decoder asymptotically decreases (with SNR) as 1/(number of states), providing a good tradeoff between computational burden and performance. The proposed decoder is also within 1.0 dB of other well-known suboptimal soft-out decoding techniques.

11.
Universal decoding procedures for finite-state channels are discussed. Although the channel statistics are not known, universal decoding can achieve an error probability with an error exponent that, for large enough block length (or constraint length in case of convolutional codes), is equal to the random-coding error exponent associated with the optimal maximum-likelihood decoding procedure for the given channel. The same approach is applied to sequential decoding, yielding a universal sequential decoding procedure with a cutoff rate and an error exponent that are equal to those achieved by the classical sequential decoding procedure.

12.
Source-controlled channel decoding

13.
The concatenated coding system recommended by CCSDS (Consultative Committee for Space Data Systems) uses an outer (255,223) Reed-Solomon (RS) code based on 8-b symbols, followed by a block interleaver and an inner rate-1/2 convolutional code with memory 6. Viterbi decoding is assumed. Two new decoding procedures based on repeated decoding trials and exchange of information between the two decoders and the deinterleaver are proposed. In the first one, where the improvement is 0.3-0.4 dB, only the RS decoder performs repeated trials. In the second one, where the improvement is 0.5-0.6 dB, both decoders perform repeated decoding trials and decoding information is exchanged between them.
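The block interleaver sitting between the RS outer code and the convolutional inner code can be sketched generically; it spreads the bursty errors of the Viterbi decoder across many RS codewords. The row/column geometry below is arbitrary for illustration, not the CCSDS interleaving depth:

```python
def interleave(symbols, rows, cols):
    """Block interleaver: write symbols row by row into a rows x cols
    array, then read them out column by column."""
    assert len(symbols) == rows * cols
    return [symbols[r * cols + c] for c in range(cols) for r in range(rows)]

def deinterleave(symbols, rows, cols):
    """The inverse permutation is a block interleaver with the row and
    column roles swapped."""
    return interleave(symbols, cols, rows)
```

After deinterleaving, a burst of b consecutive channel symbols lands at most ceil(b/rows) deep in any single row, which is what lets the RS decoder's symbol-error-correcting power absorb Viterbi error bursts.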

14.
This paper presents several results involving Fano's sequential decoding algorithm for convolutional codes. An upper bound to the a-th moment of decoder computation is obtained for arbitrary decoder bias B and a ≤ 1. An upper bound on error probability with sequential decoding is derived for both systematic and nonsystematic convolutional codes. This error bound involves the exact value of the decoder bias B. It is shown that there is a trade-off between sequential decoder computation and error probability as the bias B is varied. It is also shown that for many values of B, sequential decoding of systematic convolutional codes gives an exponentially larger error probability than sequential decoding of nonsystematic convolutional codes when both codes are designed with exponentially equal optimum decoder error probabilities.

15.
This paper presents the best obtainable random coding and expurgated upper bounds on the probabilities of undetectable error, of t-order failure (advance to depth t into an incorrect subset), and of likelihood rise in the incorrect subset, applicable to sequential decoding when the metric bias G is arbitrary. Upper bounds on the Pareto exponent are also presented. The G-values optimizing each of the parameters of interest are determined, and are shown to lie in intervals that in general have nonzero widths. The G-optimal expurgated bound on undetectable error is shown to agree with that for maximum likelihood decoding of convolutional codes, and that on failure agrees with the block code expurgated bound. Included are curves evaluating the bounds for interesting choices of G and SNR for a binary-input quantized-output Gaussian additive noise channel.

16.
In a coded cooperation scheme, the relay must decode and re-encode data. This process needs to be completed rapidly. Therefore, a simple channel coding/decoding scheme that requires low computational loads is needed. Reed–Solomon (RS) codes are simple, forward error-correction codes with low decoding computational loads. This paper introduces a three-user RS coded cooperation scheme that aims to have simple encoding/decoding complexity as well as to increase diversity order. It also presents the mathematical derivations of outage probability and investigates the outage performance of the three-user RS coded cooperation scheme. The derived outage probability expressions prove that the three-user RS coded cooperation scheme can achieve full diversity. Numerical bit error rate comparisons show that the three-user RS coded cooperation scheme performs better than a two-user scheme under various inter-user and uplink channel conditions. Outage probability performance improves at approximately 5 dB for regions with low signal-to-noise ratio (SNR) and 10 dB for regions with high SNR under a slow-fading channel. The paper also presents the complete calculated numerical tables for outage probability terms (integral terms) that do not have closed-form solutions.
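The non-cooperative baseline against which such outage derivations are measured has a standard closed form: for a single Rayleigh-fading link with exponentially distributed SNR, P_out = P(log2(1 + γ) < R) = 1 - exp(-(2^R - 1)/γ̄). The sketch below evaluates only that textbook baseline, not the three-user cooperation terms derived in the paper:

```python
import math

def outage_probability(rate_bps_hz, avg_snr_db):
    """Outage probability of a single Rayleigh-fading link: the chance
    that the instantaneous capacity log2(1 + gamma) falls below the
    target rate, with gamma exponentially distributed."""
    snr_threshold = 2 ** rate_bps_hz - 1
    avg_snr = 10 ** (avg_snr_db / 10)
    return 1 - math.exp(-snr_threshold / avg_snr)
```

On a log-log plot this baseline falls off with slope 1 in SNR (diversity order one); the point of the full-diversity result above is that the three-user scheme's outage curve falls off with a steeper slope.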

17.
Low-density parity check codes over GF(q)
Gallager's (1962) low-density binary parity check codes have been shown to have near-Shannon limit performance when decoded using a probabilistic decoding algorithm. We report the empirical results of error-correction using the analogous codes over GF(q) for q>2, with binary symmetric channels and binary Gaussian channels. We find a significant improvement over the performance of the binary codes, including a rate-1/4 code with bit error probability < 10⁻⁵ at E_b/N_0 = 0.2 dB.

18.
A new analysis of the computational effort and the error probability of sequential decoding is presented, which is based entirely on the distance properties of a particular convolutional code and employs no random-coding arguments. An upper bound on the computational distribution P(C_t > N_t) for a specific time-invariant code is derived, which decreases exponentially with the column distance of the code. It is proved that rapid column-distance growth minimizes the decoding effort and therefore also the probability of decoding failure or erasure. In an analogous way, the undetected error probability of sequential decoding with a particular fixed code is proved to decrease exponentially with the free distance and to increase linearly with the number of minimum free-weight codewords. This analysis proves that code construction for sequential decoding should maximize column-distance growth and free distance in order to guarantee fast decoding, a minimum erasure probability, and a low undetected error probability.

19.
The performance of a relatively simple two-dimensional (2-D) product code is considered. The row code is a short constraint length convolutional code, and the column code is a high-rate block code. Both the rows and columns are decoded with soft-decision maximum likelihood decoding. The soft output Viterbi algorithm (SOVA) is used to decode the rows. In one case, the same decoder may be used for the rows and the columns. It is shown that, depending on the rate of the row code, reliable signaling is achieved within about 1.0 to 1.5 dB of the R0 limit. Results are given for a particular impulsive noise channel; it is seen that performance is robust over a wide range of channel conditions.

20.
In this letter, we present a novel product channel coding and decoding scheme for image transmission over noisy channels. Two convolutional codes with at least one recursive systematic convolutional code are employed to construct the product code. Received data are decoded alternately in two directions. A constrained Viterbi algorithm is proposed to exploit the detection results of cyclic redundancy check codes so that both reduction in error patterns and fast decoding speed are achieved. Experiments with image data coded by the algorithm of set partitioning in hierarchical trees exhibit results better than those currently reported in the literature.
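The cyclic redundancy check whose pass/fail results steer the constrained Viterbi algorithm can be sketched with a bitwise polynomial divider. The 4-bit polynomial x^4 + x + 1 below is a hypothetical choice for illustration, not the CRC used in the paper:

```python
def _crc_remainder(bits, poly):
    """Remainder of the bit polynomial modulo poly (bitwise division)."""
    deg = poly.bit_length() - 1
    reg = 0
    for b in bits:
        reg = (reg << 1) | b
        if reg >> deg:          # leading coefficient set: subtract poly
            reg ^= poly
    return reg

def crc_encode(bits, poly=0b10011):
    """Append deg(poly) check bits so the whole word divides poly."""
    deg = poly.bit_length() - 1
    r = _crc_remainder(list(bits) + [0] * deg, poly)
    return list(bits) + [(r >> (deg - 1 - i)) & 1 for i in range(deg)]

def crc_check(word, poly=0b10011):
    """A valid codeword leaves a zero remainder."""
    return _crc_remainder(word, poly) == 0
```

In the letter's setting, a failed check on a decoded row or column flags it for the constrained re-decoding pass, while a passed check lets the decoder commit early, which is where the speedup comes from.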
