20 similar documents retrieved (search time: 15 ms)
1.
Riedel S. 《IEEE Transactions on Information Theory》1998, 44(3): 1176-1187
A new symbol-by-symbol maximum a posteriori (MAP) decoding algorithm for high-rate convolutional codes using reciprocal dual convolutional codes is presented. The advantage of this approach is a reduction of the computational complexity, since the number of codewords to consider is decreased for codes of rate greater than 1/2. The discussed algorithms fulfil all requirements for iterative (“turbo”) decoding schemes. Simulation results are presented for high-rate parallel concatenated convolutional codes (“turbo” codes) over an AWGN channel or a perfectly interleaved Rayleigh fading channel. It is shown that iterative decoding of high-rate codes results in high-gain, moderate-complexity coding.
2.
L. E. Nazarov, I. V. Golovkin 《Journal of Communications Technology and Electronics》2007, 52(10): 1125-1132
Computational procedures for symbol-by-symbol reception of signal ensembles corresponding to binary high-rate convolutional codes and to turbo codes formed by these convolutional codes are described. It is shown that the developed procedures are based on an optimum symbol-by-symbol reception algorithm that uses the fast Walsh-Hadamard transform algorithm.
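The fast Walsh-Hadamard transform at the core of such a procedure is the usual butterfly recursion; the following is a minimal sketch (illustrative Python, not the authors' implementation; the name fwht and the example vector are assumptions):

# In-place fast Walsh-Hadamard transform (sketch): turns a length-2^m vector
# of soft symbol metrics into its correlations with all Walsh functions.
def fwht(x):
    n = len(x)
    assert n & (n - 1) == 0, "length must be a power of two"
    h = 1
    while h < n:
        for i in range(0, n, 2 * h):        # butterfly over blocks of size 2h
            for j in range(i, i + h):
                a, b = x[j], x[j + h]
                x[j], x[j + h] = a + b, a - b
        h *= 2
    return x

# Example: 8 soft metrics -> 8 Walsh correlations in O(n log n) operations.
print(fwht([1.0, -0.5, 0.3, 0.9, -1.2, 0.1, 0.4, -0.7]))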
3.
It is shown how an intermediate chord property in the Cooley-Tukey FFT algorithm over a finite field can be used for faster decoding of Bose-Chaudhuri-Hocquenghem codes in the spectral domain.
4.
Graell i Amat A., Montorsi G., Benedetto S. 《IEEE Transactions on Information Theory》2004, 50(5): 867-881
This correspondence deals with the design and decoding of high-rate convolutional codes. After proving that every (n,n-1) convolutional code can be reduced to a structure that concatenates a block encoder, associated with the parallel edges, with a convolutional encoder defining the trellis section, the results of an exhaustive search for optimal (n,n-1) convolutional codes are presented through various tables of best high-rate codes. The search is also extended to find the "best" recursive systematic convolutional encoders to be used as component encoders of parallel concatenated "turbo" codes. A decoding algorithm working on the dual code is introduced (in both multiplicative and additive form) by showing that, if the representation of the soft information passed between constituent decoders in the iterative decoding process is changed appropriately, the soft-input soft-output (SISO) modules of the dual-code decoder become equal to those used for the original code. A new technique to terminate the code trellis that significantly reduces the rate loss induced by the addition of terminating bits is described. Finally, an inverse puncturing technique, applied to the highest-rate "mother" code, is proposed to yield a sequence of almost optimal codes with decreasing rates. Simulation results for the case of parallel concatenated codes show the significant advantages of the newly found codes in terms of performance and decoding complexity.
5.
Fast decoding algorithm for RS codes
Suming Ju, Guangguo Bi 《Electronics Letters》1997, 33(17): 1452-1453
The singularity of the syndrome matrix is used to determine the error locations. After each error location has been ascertained, the syndrome values are recalculated by an iterative method and the order of the syndrome matrix is reduced by one. Using the iteratively updated syndrome values, the error values are then easily evaluated.
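As background (the standard Peterson-Gorenstein-Zierler criterion from textbook RS decoding, not a formula quoted from the letter), the singularity test referred to above amounts to finding the largest \nu \le t for which the Hankel matrix of syndromes is nonsingular:

S_\nu =
\begin{pmatrix}
S_1 & S_2 & \cdots & S_\nu \\
S_2 & S_3 & \cdots & S_{\nu+1} \\
\vdots & \vdots & & \vdots \\
S_\nu & S_{\nu+1} & \cdots & S_{2\nu-1}
\end{pmatrix},
\qquad
\det S_\nu \neq 0, \quad \det S_\mu = 0 \ \ \text{for } \nu < \mu \le t,

where t is the error-correcting capability of the code; \nu is then taken as the number of errors.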
6.
A decoding algorithm for finite-geometry LDPC codes
In this paper, we develop a new low-complexity algorithm to decode low-density parity-check (LDPC) codes. The developments are oriented specifically toward low-cost, yet effective, decoding of (high-rate) finite-geometry (FG) LDPC codes. The decoding procedure iteratively updates the hard-decision received vector in search of a valid codeword in the vector space. Only one bit is changed in each iteration, and the bit-selection criterion combines the number of failed checks and the reliability of the received bits. Prior knowledge of the signal amplitude and noise power is not required. An optional mechanism to avoid infinite loops in the search is also proposed. Our studies show that the algorithm achieves an appealing tradeoff between performance and complexity for FG-LDPC codes.
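A minimal sketch of a single-bit-flipping loop of the kind described above (Python; the names flip_decode and alpha and the exact way the failed-check count and the reliabilities are combined are illustrative assumptions, and the optional loop-avoidance mechanism is omitted):

import numpy as np

def flip_decode(H, y_hard, reliability, alpha=1.0, max_iter=50):
    # H: (m, n) parity-check matrix over {0,1}; y_hard: hard-decision vector;
    # reliability: per-bit magnitudes of the received samples.
    z = y_hard.copy()
    for _ in range(max_iter):
        syndrome = H.dot(z) % 2
        if not syndrome.any():                    # all checks satisfied: valid codeword
            return z
        failed = H.T.dot(syndrome)                # failed checks each bit participates in
        score = failed - alpha * reliability      # many failed checks + low reliability => flip
        z[np.argmax(score)] ^= 1                  # change exactly one bit per iteration
    return z                                      # may still fail; no loop detection here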
7.
Efficient algorithm for decoding layered space-time codes
Layered space-time codes have been designed to exploit the capacity advantage of multiple-antenna systems in Rayleigh fading environments. A new efficient decoding algorithm based on QR decomposition is presented; it requires only a fraction of the computational effort of the standard decoding algorithm, which repeatedly computes the pseudo-inverse of the channel matrix.
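A sketch of the QR-based successive (nulling-and-cancelling) detection idea (Python/NumPy; it assumes a flat-fading model y = Hs + n with BPSK symbols and ignores layer ordering, so it illustrates the principle rather than reproducing the algorithm of the letter):

import numpy as np

def qr_detect(H, y):
    # QR-decompose the channel once; with z = Q^H y the system becomes upper
    # triangular, so layers are detected by back substitution with slicing,
    # avoiding repeated pseudo-inverse computations.
    Q, R = np.linalg.qr(H)
    z = Q.conj().T.dot(y)
    n = H.shape[1]
    s_hat = np.zeros(n)
    for k in range(n - 1, -1, -1):                       # last layer first
        interference = R[k, k + 1:].dot(s_hat[k + 1:])   # cancel already-detected layers
        s_hat[k] = np.sign(np.real((z[k] - interference) / R[k, k]))  # BPSK slicer
    return s_hat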
8.
Hao Wang, Hongwen Yang, Dacheng Yang 《IEEE Communications Letters》2006, 10(3): 186-188
In this letter, a new method for decoding turbo-like codes is proposed to simplify the hardware implementation of the Log-MAP algorithm. In our method, the multivariable Jacobian logarithm of the Log-MAP algorithm is built by concatenating recursive one-dimensional (1D) Jacobian logarithm units. Two new approximations of the Log-MAP algorithm based on these 1D units are then presented; they have good approximation accuracy and are simple to implement in hardware. We further suggest a novel decoding scheme whose complexity is near that of Max-Log-MAP while its performance is close to that of the Log-MAP algorithm.
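The recursive decomposition of the multivariable Jacobian logarithm into 1D units can be illustrated as follows (Python sketch; max_star and max_star_n are illustrative names, and the letter's two low-complexity approximations of the correction term are not reproduced here):

import math
from functools import reduce

def max_star(a, b):
    # 1D Jacobian logarithm: ln(e^a + e^b) = max(a, b) + correction(|a - b|).
    return max(a, b) + math.log1p(math.exp(-abs(a - b)))

def max_star_n(values):
    # Multivariable case ln(sum_i e^{x_i}) obtained by chaining the 1D unit.
    return reduce(max_star, values)

# Sanity check against the direct computation.
xs = [0.3, -1.2, 2.5, 0.0]
print(max_star_n(xs), math.log(sum(math.exp(x) for x in xs)))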
9.
To increase the decoding speed of quantum stabilizer codes, a probabilistic decoding algorithm based on the check matrix is proposed. The decoding error probability is reduced by choosing the operator with the smallest quantum weight as the error operator, and the decoding time is shortened by constructing the quantum standard array in advance. Compared with the existing quantum maximum-likelihood decoding algorithm, the proposed algorithm decodes degenerate and non-degenerate codes in a unified way, which improves the decoding reliability for degenerate codes. In addition, the algorithm does not need to find a basis of the vector space corresponding to the error operators beforehand, so its complexity is lower.
10.
《IEEE Transactions on Communications》1993, 41(7): 1036-1038
A technique for reducing the number of inversions in the time-domain decoding algorithm based on an algebraic decoder (Blahut's decoder) is introduced. It is proved that the modified algorithm is equivalent to the original one. The modified algorithm can be used in the universal Reed-Solomon decoder to decrease complexity.
11.
This paper considers the use of sequence maximum a posteriori (MAP) decoding of trellis codes. A MAP receiver can exploit any "residual redundancy" that may exist in the channel-encoded signal in the form of memory and/or a nonuniform distribution, thereby providing enhanced performance over very noisy channels relative to maximum likelihood (ML) decoding. The paper begins with a first-order two-state Markov model for the channel encoder input. A variety of systems with different source parameters, modulation schemes, and encoder complexities are simulated. Sequence MAP decoding is shown to substantially improve performance under very noisy channel conditions for systems with low-to-moderate redundancy, with the relative gain increasing as the rate increases. As a result, coding schemes with multidimensional constellations are shown to have higher MAP gains than comparable schemes with two-dimensional (2-D) constellations. The second part of the paper considers trellis encoding of the code-excited linear predictive (CELP) speech coder's line spectral parameters (LSPs) with four-dimensional (4-D) QPSK modulation. Two source LSP models are used: one assumes only intraframe correlation of LSPs, while the second models both intraframe and interframe correlation. MAP decoding gains (over ML decoding) of as much as 4 dB are achieved. A comparison between the conventionally designed codes and an I-Q QPSK scheme also shows that the I-Q scheme achieves better performance even though the first (simpler) LSP model is used.
12.
Tight bounds for LDPC and LDGM codes under MAP decoding
Montanari A. 《IEEE Transactions on Information Theory》2005, 51(9): 3221-3246
A new method for analyzing low-density parity-check (LDPC) codes and low-density generator-matrix (LDGM) codes under bit maximum a posteriori probability (MAP) decoding is introduced. The method is based on a rigorous approach to spin glasses developed by Francesco Guerra. It allows one to construct lower bounds on the entropy of the transmitted message conditioned on the received one. Based on heuristic statistical-mechanics calculations, we conjecture such bounds to be tight. The result holds for standard irregular ensembles when used over binary-input output-symmetric (BIOS) channels. The method is first developed for Tanner-graph ensembles with Poisson left-degree distribution. It is then generalized to "multi-Poisson" graphs and, by a completion procedure, to arbitrary degree distributions.
13.
An investigation of the turbo decoder is presented. It was carried out to determine whether the decoded bit sequence constitutes an (at least local) maximum of the likelihood among possible codewords in the composite code trellis. The answer was obtained experimentally using a modified iterative turbo decoder. Results show that the turbo decoder does not necessarily realise a maximum-likelihood sequence estimator for the composite code. Moreover, finding a closer codeword in the Euclidean-distance sense only degrades the bit error rate of the system.
14.
Starting from the basic theory and algorithms of Turbo codes, and guided by the practical requirements of wireless transmission and the basic principle of the Taylor series, an efficient Taylor-Log-MAP decoding algorithm for Turbo codes is proposed. The algorithm expands the K operation of the basic Log-MAP algorithm as a Taylor series and truncates the expansion according to the requirements of the actual channel, thereby achieving optimum decoding of Turbo codes. Compared with the conventional log-domain maximum a posteriori decoding algorithm, it essentially retains the excellent decoding performance while avoiding the complex logarithmic operations and reducing the amount of computation. Simulation results show that, compared with the performance of existing RS codes, using Turbo codes provides an SNR gain of 5 dB.
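The abstract does not spell out the expansion; as an illustration only (a reconstruction on my part, written in terms of the usual Log-MAP correction term rather than the paper's K operation), a truncated Taylor-type expansion of the correction function would read

\ln\bigl(1 + e^{-|x|}\bigr) = \sum_{k=1}^{\infty} \frac{(-1)^{k+1}}{k}\, e^{-k|x|} \approx e^{-|x|} - \tfrac{1}{2} e^{-2|x|},

so that

\max{}^{*}(a,b) = \max(a,b) + \ln\bigl(1 + e^{-|a-b|}\bigr) \approx \max(a,b) + e^{-|a-b|} - \tfrac{1}{2} e^{-2|a-b|},

with the number of retained terms chosen to match the accuracy required by the channel.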
15.
Starting from the observation that punctured convolutional codes have relatively low decoding complexity, a low-complexity decoding method for ordinary high-rate convolutional codes is proposed. Using the polynomial generator matrix representation, the equivalent polynomial generator matrix of a punctured convolutional code is derived, and rules for computing this equivalent matrix are given. Based on an analysis of the equivalence and the differences between a punctured convolutional code and an ordinary convolutional code of the same rate, a puncturing equivalent of a high-rate convolutional code is introduced, together with a method for computing the corresponding mother code and puncturing matrix. Decoding is then carried out on the puncturing-equivalent structure formed by the mother code and the puncturing matrix, so that a high-rate convolutional code can be decoded with a complexity comparable to that of the mother code. Simulation results show that the performance loss of the puncturing-equivalent decoding method relative to direct decoding is very small.
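A sketch of the puncturing-matrix mechanics that such an equivalence is built on (Python; the rate-1/2 mother-code convention, the rate-3/4 pattern P and the names puncture/depuncture are illustrative assumptions, not the constructions of the paper):

import numpy as np

# One row of P per mother-code output stream; a 1 means the coded bit is
# transmitted, a 0 means it is deleted. This pattern turns a rate-1/2
# mother code into a rate-3/4 code (4 bits kept out of 6 per period of 3).
P = np.array([[1, 1, 0],
              [1, 0, 1]])

def puncture(coded, P):
    # coded: (2, N) array of mother-code output bits, column t = time t.
    rows, period = P.shape
    out = [coded[r, t]
           for t in range(coded.shape[1])
           for r in range(rows)
           if P[r, t % period]]
    return np.array(out)

def depuncture(received_soft, P, n_cols):
    # Re-insert erasures (soft value 0.0) at the deleted positions so the
    # mother-code decoder can run unchanged -- the basis of decoding the
    # high-rate code at roughly the mother code's complexity.
    rows, period = P.shape
    soft = np.zeros((rows, n_cols))
    it = iter(received_soft)
    for t in range(n_cols):
        for r in range(rows):
            if P[r, t % period]:
                soft[r, t] = next(it)
    return soft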
16.
Two step SOVA-based decoding algorithm for tailbiting codes
《IEEE Communications Letters》2009, 13(7): 510-512
In this work we propose a novel decoding algorithm for tailbiting convolutional codes and evaluate its performance over different channels. The proposed method consists of a fixed two-step Viterbi decoding of the received data. In the first step, an estimate of the most likely state is obtained from a SOVA decoding. The second step is a conventional Viterbi decoding that uses the state estimated in the previous step as the initial and final state of the trellis. Simulation results show a performance close to that of maximum-likelihood decoding.
17.
An improved decoding algorithm for finite-geometry LDPC codes
In this letter, an improved bit-flipping decoding algorithm for high-rate finite-geometry low-density parity-check (FG-LDPC) codes is proposed. Both improvement in performance and reduction in decoding delay are observed by flipping multiple bits in each iteration. Our studies show that the proposed algorithm achieves an appealing tradeoff between performance and complexity for FG-LDPC codes.
18.
The circular Viterbi decoding algorithm and the BCJR decoding algorithm for tail-biting convolutional codes are introduced, together with two refinements of the circular Viterbi algorithm, the wrap-around Viterbi decoding algorithm and the bidirectional Viterbi algorithm; finally, the performance of these decoding algorithms is compared by simulation.
19.
The maximum a posteriori probability (MAP) algorithm is a trellis-based MAP decoding algorithm. It is the heart of turbo (or iterative) decoding, which achieves an error performance near the Shannon limit. Unfortunately, the implementation of this algorithm requires large computation and storage. Furthermore, its forward and backward recursions result in a long decoding delay. For practical applications, this decoding algorithm must be simplified and its decoding complexity and delay must be reduced. In this paper, the MAP algorithm and its variations, such as the Log-MAP and Max-Log-MAP algorithms, are first applied to sectionalized trellises for linear block codes and carried out as two-stage decodings. Using the structural properties of properly sectionalized trellises, the decoding complexity and delay of the MAP algorithms can be reduced. Computation-wise optimum sectionalizations of a trellis for MAP algorithms are investigated. Also presented in this paper are bidirectional and parallel MAP decodings.
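For orientation, the forward and backward recursions that the sectionalization is meant to simplify are, in their textbook (unsectionalized) form,

\alpha_k(s) = \sum_{s'} \alpha_{k-1}(s')\,\gamma_k(s',s),
\qquad
\beta_{k-1}(s') = \sum_{s} \gamma_k(s',s)\,\beta_k(s),

with the a posteriori log-likelihood ratio of information bit u_k given by

L(u_k) = \ln \frac{\sum_{(s',s):\,u_k = 1} \alpha_{k-1}(s')\,\gamma_k(s',s)\,\beta_k(s)}
              {\sum_{(s',s):\,u_k = 0} \alpha_{k-1}(s')\,\gamma_k(s',s)\,\beta_k(s)}.

In the log domain the sums become max* operations (Log-MAP) or plain maxima (Max-Log-MAP), which is where the complexity and delay savings of a good sectionalization enter.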
20.
The weight spectra of high-rate punctured convolutional codes are evaluated under the hypothesis of a low-rate structure. This interpretation yields results slightly different from those obtained when weight spectra are evaluated assuming a true high-rate structure for punctured codes. The search for long-memory punctured codes is extended by providing new punctured codes of rates 4/5, 5/6, 6/7, and 7/8 with memories ranging from 9 to 19.