Similar Documents
 20 similar documents found (search time: 78 ms)
1.
Generalized minimum-distance (GMD) decoding is a standard soft-decoding method for block codes. We derive an efficient general GMD decoding scheme for linear block codes in the framework of error-correcting pairs. Special attention is paid to Reed-Solomon (RS) codes and one-point algebraic-geometry (AG) codes. For RS codes of length n and minimum Hamming distance d, the GMD decoding complexity turns out to be of order O(nd), where the complexity is counted as the number of multiplications in the field of concern. For AG codes the GMD decoding complexity depends strongly on the curve under consideration. It is shown that we can find all relevant error-erasure-locating functions with complexity O(o₁nd), where o₁ is the size of the first nongap in the function space associated with the code. A full GMD decoding procedure for a one-point AG code can be performed with complexity O(dn²).
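The trial structure of GMD decoding described in this abstract can be sketched compactly. The fragment below is a minimal illustration of the erasure-trial loop under simplifying assumptions; `ee_decode` stands for a hypothetical errors-and-erasures decoder supplied by the caller and is not the paper's error-correcting-pairs algorithm.

```python
def gmd_decode(hard_symbols, reliabilities, d, ee_decode):
    """Minimal GMD trial loop: erase the 0, 2, 4, ..., d-1 least reliable
    positions in turn and keep the first candidate codeword whose error/erasure
    pattern satisfies the errors-and-erasures condition 2e + s < d."""
    n = len(hard_symbols)
    # Positions sorted from least reliable to most reliable.
    order = sorted(range(n), key=lambda i: reliabilities[i])
    for s in range(0, d, 2):                       # number of erasures in this trial
        erasures = set(order[:s])
        candidate = ee_decode(hard_symbols, erasures)   # assumed external decoder
        if candidate is None:
            continue
        e = sum(1 for i in range(n)
                if i not in erasures and candidate[i] != hard_symbols[i])
        if 2 * e + s < d:                          # acceptance criterion
            return candidate
    return None
```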

2.
This article presents techniques for improving the distribution of the number of stack entries for stack sequential decoding over a hard-quantized channel, with emphasis on high-rate codes. It is shown that, for a class of high-rate b/(b+1) codes, a table-based true high-rate approach can be easily implemented to obtain a decoding advantage over the punctured approach. Modified algorithms, which significantly improve the distribution of the number of stack entries and the decoding time, are proposed for rate-1/N codes and high-rate b/(b+1) codes.

3.
A Parallel Decoding Scheme for Turbo Codes and the Corresponding Parallel-Structure Interleaver   (Cited by: 1; self-citations: 0, others: 1)
The high decoding delay introduced by the recursive computations of MAP-based Turbo decoding limits the application of Turbo codes to high-rate data transmission. To address this problem, this paper presents a parallel decoding method that reduces the decoding delay. Implementing the parallel scheme requires a suitable interleaver to avoid data conflicts when the two decoders read and write extrinsic information. After analyzing the existence of collision-free interleaving patterns of arbitrary size, the paper gives a method for designing S-random interleavers of arbitrary size that suit the parallel processing scheme. Simulations verify the bit-error performance of the parallel decoding scheme.
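For illustration, the sketch below builds a classical S-random interleaver by greedy selection with restarts. It shows only the standard spread constraint and omits the additional collision-free (parallel-access) constraint the paper imposes; the function name and restart limit are assumptions for this example.

```python
import random

def s_random_interleaver(n, S, max_attempts=1000):
    """Draw a length-n permutation in which each new index differs by more
    than S from the previous S selected indices (the classical S-random rule)."""
    for _ in range(max_attempts):
        pool = list(range(n))
        random.shuffle(pool)
        perm = []
        feasible = True
        while pool:
            for k, cand in enumerate(pool):
                if all(abs(cand - p) > S for p in perm[-S:]):
                    perm.append(pool.pop(k))       # candidate satisfies the spread
                    break
            else:
                feasible = False                   # dead end: restart with a new shuffle
                break
        if feasible:
            return perm
    raise RuntimeError("no S-random permutation found; try a smaller S")
```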

4.
The decoding error probability of codes is studied as a function of their block length. It is shown that the existence of codes with a polynomially small decoding error probability implies the existence of codes with an exponentially small decoding error probability. Specifically, it is assumed that there exists a family of codes of length N and rate R = (1−ε)C (C is the capacity of a binary-symmetric channel) whose decoding error probability decreases inverse polynomially in N. It is shown that if the decoding error probability decreases sufficiently fast, but still only inverse polynomially fast in N, then there exists another such family of codes whose decoding error probability decreases exponentially fast in N. Moreover, if the decoding time complexity of the assumed family of codes is polynomial in N and 1/ε, then the decoding time complexity of the presented family is linear in N and polynomial in 1/ε. These codes are compared to the recently presented codes of Barg and Zémor, "Error Exponents of Expander Codes" (IEEE Transactions on Information Theory, 2002) and "Concatenated Codes: Serial and Parallel" (IEEE Transactions on Information Theory, 2005). It is shown that the latter families cannot be tuned to have exponentially decaying (in N) error probability and, at the same time, decoding time complexity linear in N and polynomial in 1/ε.

5.
Multistage decoding of multilevel block M-ary phase-shift keying (M-PSK) modulation codes for the additive white Gaussian noise (AWGN) channel is investigated. Several types of multistage decoding, including a suboptimum soft-decision decoding scheme, are devised and analyzed. Upper bounds on the probability of an incorrect decoding of a code are derived for the proposed multistage decoding schemes. Error probabilities of some specific multilevel block 8-PSK modulation codes are evaluated and simulated. The computation and simulation results for these codes show that with multistage decoding, significant coding gains can be achieved with a large reduction in decoding complexity. In one example, it is shown that the difference in performance between the proposed suboptimum multistage soft-decision decoding and the single-stage optimum decoding is small, only a fraction of a dB loss in SNR at a block error probability of 10⁻⁶.

6.
Low-density parity-check (LDPC) codes and convolutional Turbo codes are two of the most powerful error correcting codes that are widely used in modern communication systems. In a multi-mode baseband receiver, both LDPC and Turbo decoders may be required. However, the different decoding approaches for LDPC and Turbo codes usually lead to different hardware architectures. In this paper we propose a unified message passing algorithm for LDPC and Turbo codes and introduce a flexible soft-input soft-output (SISO) module to handle LDPC/Turbo decoding. We employ the trellis-based maximum a posteriori (MAP) algorithm as a bridge between LDPC and Turbo codes decoding. We view the LDPC code as a concatenation of n super-codes, where each super-code has a simpler trellis structure so that the MAP algorithm can be easily applied to it. We propose a flexible functional unit (FFU) for MAP processing of LDPC and Turbo codes with a low hardware overhead (about 15% area and timing overhead). Based on the FFU, we propose an area-efficient flexible SISO decoder architecture to support LDPC/Turbo codes decoding. Multiple such SISO modules can be embedded into a parallel decoder for higher decoding throughput. As a case study, a flexible LDPC/Turbo decoder has been synthesized on a TSMC 90 nm CMOS technology with a core area of 3.2 mm². The decoder can support IEEE 802.16e LDPC codes, IEEE 802.11n LDPC codes, and 3GPP LTE Turbo codes. Running at 500 MHz clock frequency, the decoder can sustain up to 600 Mbps LDPC decoding or 450 Mbps Turbo decoding.

7.
Group codes generated by finite reflection groups   (Cited by: 1; self-citations: 0, others: 1)
Slepian-type group codes generated by finite Coxeter groups are considered. The resulting class of group codes is a generalization of the well-known permutation modulation codes of Slepian (1965). It is shown that a restricted initial-point problem for these codes has a canonical solution that can easily be computed. This allows one to enumerate all optimal group codes in this restricted sense and essentially solves the initial-point problem for all finite reflection groups. Formulas for the cardinality and the minimum distance of such codes are given. The new optimal group codes from exceptional reflection groups that are obtained achieve high rates and have excellent distance properties. The decoding regions for maximum-likelihood (ML) decoding are explicitly characterized and an efficient ML-decoding algorithm is presented. This algorithm relies on an extension of Slepian's decoding of permutation modulation and has similarly low complexity.
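As a concrete special case of the decoder family described above, ML decoding of Slepian's Variant-I permutation modulation (the symmetric-group case) reduces to sorting. The sketch below illustrates only that classical case, not the paper's extension to general reflection groups; the function name is illustrative.

```python
import numpy as np

def pm_ml_decode(received, initial_point):
    """ML decoding of Variant-I permutation modulation: the best codeword places
    the k-th largest entry of the initial point at the position of the k-th
    largest received sample, so a single sort suffices."""
    r = np.asarray(received, dtype=float)
    x0 = np.sort(np.asarray(initial_point, dtype=float))   # ascending components
    codeword = np.empty_like(r)
    codeword[np.argsort(r)] = x0     # match the sort order of r with that of x0
    return codeword

# Example: initial point (-1, -1, 1, 1) and a noisy received vector.
print(pm_ml_decode([0.3, -0.9, 1.2, -0.1], [-1, -1, 1, 1]))
```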

8.
New array codes for multiple phased burst correction   (Cited by: 6; self-citations: 0, others: 6)
An optimal family of array codes over GF(q) for correcting multiple phased burst errors and erasures, where each phased burst corresponds to an erroneous or erased column in a code array, is introduced. For erasures, these array codes have an efficient decoding algorithm that avoids multiplications (or divisions) over extension fields, replacing these operations with cyclic shifts of vectors over GF(q). The erasure decoding algorithm can easily be adapted to handle single column errors as well. The codes are characterized geometrically by means of parity constraints along certain diagonal lines in each code array, thus generalizing a previously known construction for the special case of two erasures. Algebraically, they can be interpreted as Reed-Solomon codes. When q is primitive in GF(p), the resulting codes become (conventional) Reed-Solomon codes of length p over GF(q^(p-1)), in which case the new erasure decoding technique can be incorporated into the Berlekamp-Massey algorithm, yielding a faster way to compute the values of any prescribed number of errors.
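A rough illustration of the diagonal parity constraints mentioned above, for the binary case and a single slope. The array layout, slope convention, and field choice are simplifying assumptions for this sketch, not the exact construction from the paper.

```python
import numpy as np

def diagonal_syndromes(array, slope=1):
    """Compute binary parity along the diagonals i + slope*j (mod rows) of a
    code array; an all-zero result means every diagonal constraint is satisfied."""
    rows, cols = array.shape
    synd = np.zeros(rows, dtype=int)
    for i in range(rows):
        for j in range(cols):
            synd[(i + slope * j) % rows] ^= int(array[i, j]) & 1
    return synd

# A 3x4 binary array; nonzero syndrome entries flag violated diagonals.
A = np.array([[1, 0, 1, 0],
              [0, 1, 1, 1],
              [1, 1, 0, 0]])
print(diagonal_syndromes(A))
```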

9.
The set of all even subgraphs of a connected graph G on p vertices with q edges forms a binary linear code C = C_E(G) with parameters [q, q-p+1, g], where g is the girth of G. Such codes were studied systematically by Bredeson and Hakimi (1967) and Hakimi and Bredeson (1968), who were concerned with the problems of augmenting C to a larger [q, k, g]-code and of efficiently decoding such augmented graphical codes. We give a new approach to these problems by requiring the augmented codes to be graphical. On one hand, we present two construction methods which turn out to contain the methods proposed by Hakimi and Bredeson as special cases. As we show, this not only gives a better understanding of their construction, it also results in augmenting codes of larger dimension. We look at the case of 1-error-correcting graphical codes in some detail. In particular, we show how to obtain the extended Hamming codes as “purely” graphical codes by our approach. On the other hand, we follow a suggestion of Ntafos and Hakimi (1981) and use techniques from combinatorial optimization to give decoding procedures for graphical codes which turn out to be considerably more efficient than the approach via majority logic decoding proposed by Bredeson and Hakimi. We also consider the decoding problem for the even graphical code based on the complete graph K₂ₙ in more detail: we discuss an efficient hardware implementation of an encoding/decoding scheme for these codes and show that things may be arranged in such a way that one can also correct all adjacent double errors. Finally, we discuss nonlinear graphical codes.

10.
It is shown that Reed-Solomon (RS) codes can be decoded by using a fast Fourier transform (FFT) algorithm over finite fields GF(F_n), where F_n is a Fermat prime, together with continued fractions. This new transform decoding method is simpler than the standard method for RS codes. In software, this new decoding algorithm can be faster than the standard decoding method for RS codes.
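The transform ingredient is easy to sketch: below is a naive number-theoretic (finite-field Fourier) transform over GF(257), the Fermat prime F_3, with a round-trip check. The continued-fraction step of the decoding method is not shown, and the transform length here is an arbitrary example.

```python
P = 257                      # Fermat prime F_3 = 2^8 + 1
G = 3                        # a primitive root modulo 257

def ntt(x, invert=False):
    """Naive length-N transform over GF(P); N must divide P - 1."""
    n = len(x)
    assert (P - 1) % n == 0
    w = pow(G, (P - 1) // n, P)                  # primitive n-th root of unity
    if invert:
        w = pow(w, P - 2, P)                     # use the inverse root
    out = [sum(x[j] * pow(w, i * j, P) for j in range(n)) % P for i in range(n)]
    if invert:
        n_inv = pow(n, P - 2, P)
        out = [(v * n_inv) % P for v in out]
    return out

data = [5, 0, 12, 7, 0, 0, 3, 1]
assert ntt(ntt(data), invert=True) == data       # forward/inverse round trip
```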

11.
This paper presents a two-stage turbo-coding scheme for Reed-Solomon (RS) codes through binary decomposition and self-concatenation. In this scheme, the binary image of an RS code over GF(2^m) is first decomposed into a set of binary component codes with relatively small trellis complexities. Then the RS code is formatted as a self-concatenated code with itself as the outer code and the binary component codes as the inner codes in a turbo-coding arrangement. In decoding, the inner codes are decoded with turbo decoding and the outer code is decoded with either an algebraic decoding algorithm or a reliability-based decoding algorithm. The outer and inner decoders interact during each decoding iteration. For RS codes of lengths up to 255, the proposed two-stage coding scheme is practically implementable and provides a significant coding gain over conventional algebraic and reliability-based decoding algorithms.

12.
The performance of punctured low-density parity-check (LDPC) codes under maximum-likelihood (ML) decoding is studied in this correspondence via deriving and analyzing their average weight distributions (AWDs) and the corresponding asymptotic growth rate of the AWDs. In particular, it is proved that capacity-achieving codes of any rate and for any memoryless binary-input output-symmetric (MBIOS) channel under ML decoding can be constructed by puncturing some original LDPC code with small enough rate. Moreover, it is shown that the gap to capacity of all the punctured codes can be the same as that of the original code with a small enough rate. Conditions under which puncturing results in no rate loss with asymptotically high probability are also given in the process. These results show high potential for puncturing to be used in designing capacity-achieving codes, and in rate-compatible coding under any MBIOS channel.

13.
As an error-correcting code, polar codes have good encoding and decoding performance and have been adopted as the standard coding scheme for the 5G short-code control channel. At short block lengths, however, their performance is not outstanding. As a new type of concatenated polar code, the concatenation of parity-check codes with polar codes improves finite-length performance, but its decoding algorithm has rather high complexity. To address this problem, this paper proposes a partial successive cancellation list decoding algorithm for parity-check-concatenated polar codes (PC-PSCL), which ...

14.
FPGA Implementation of an 800 Mbps Quasi-Cyclic LDPC Code Decoder   (Cited by: 1; self-citations: 0, others: 1)
张仲明  许拔  杨军  张尔扬 《信号处理》2010,26(2):255-261
This paper proposes a low-complexity, highly parallel decoder architecture suited to quasi-cyclic low-density parity-check (QC-LDPC) codes. QC-LDPC codes are usually not well suited to the design of efficient, highly parallel, high-throughput decoders. By exploiting the structure of the parity-check matrix of a QC-LDPC code, we transform it into a block quasi-cyclic form, so that the row and column operations of the decoding algorithm can be processed in parallel. Using this architecture, we implement a decoder for the (8176,7154) finite-geometry LDPC code on a Xilinx Virtex-5 LX330 FPGA; with 15 iterations, its decoding throughput reaches 800 Mbps.
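A small sketch of the quasi-cyclic structure such decoders exploit: a base matrix of cyclic-shift values is expanded into the full binary parity-check matrix, so row and column processing can be organized block by block. The base matrix and expansion factor below are hypothetical and much smaller than the (8176,7154) code; the shift-direction convention is also an assumption.

```python
import numpy as np

def expand_qc_matrix(base, z):
    """Expand a quasi-cyclic base matrix: entry -1 -> z-by-z zero block,
    entry s >= 0 -> z-by-z identity cyclically shifted by s columns."""
    rows, cols = base.shape
    H = np.zeros((rows * z, cols * z), dtype=np.uint8)
    for r in range(rows):
        for c in range(cols):
            s = base[r, c]
            if s >= 0:
                block = np.roll(np.eye(z, dtype=np.uint8), s, axis=1)
                H[r*z:(r+1)*z, c*z:(c+1)*z] = block
    return H

# Hypothetical 2x4 base matrix with expansion factor 4.
base = np.array([[0, 2, -1, 1],
                 [3, -1, 0, 2]])
H = expand_qc_matrix(base, 4)
print(H.shape)   # (8, 16)
```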

15.
Rateless coding has recently been the focus of much practical as well as theoretical research. In this paper, rateless codes are shown to find a natural application in channels where the channel law varies unpredictably. Such unpredictability means that, to ensure reliable communication, block codes are limited by worst-case channel variations. However, the dynamic decoding nature of rateless codes allows them to adapt opportunistically to channel variations. If the channel state selector is not malicious, but also not predictable, decoding can occur earlier, producing a rate of communication that can be much higher than the worst case. The application of rateless or “fountain” codes to the binary erasure channel (BEC) can be understood as an application of these ideas. Further, this sort of decoding can be usefully understood as an incremental form of erasure decoding. The use of ideas from erasure decoding results in a significant increase in reliability.
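The "incremental form of erasure decoding" mentioned above can be illustrated with a simple peeling decoder over a binary erasure channel. The sketch assumes each received symbol is the XOR of a known subset of source bits, which is a generic fountain-code model rather than this paper's specific scheme; the function name is illustrative.

```python
def peel_decode(num_src, equations):
    """Incremental (peeling) erasure decoding: each equation is
    (indices_of_source_bits, xor_of_those_bits). Resolve degree-1 equations
    repeatedly until all sources are recovered or no progress is possible."""
    known = {}
    eqs = [[set(idx), val] for idx, val in equations]
    progress = True
    while progress:
        progress = False
        for eq in eqs:
            idx, val = eq
            for i in [i for i in idx if i in known]:
                idx.discard(i)                 # substitute already-known sources
                val ^= known[i]
            eq[1] = val
            if len(idx) == 1:                  # degree-1 equation reveals a source bit
                (i,) = idx
                if i not in known:
                    known[i] = val
                    progress = True
                idx.clear()
    return known if len(known) == num_src else None

# Three received symbols covering {0}, {0,1}, {1,2} recover x = (1, 0, 1).
print(peel_decode(3, [([0], 1), ([0, 1], 1), ([1, 2], 1)]))
```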

16.
Symbol-by-symbol maximum a posteriori (MAP) decoding algorithms for nonbinary block and convolutional codes over an extension field GF(p^a) are presented. Equivalent MAP decoding rules employing the dual code are given which are computationally more efficient for high-rate codes. It is shown that these algorithms meet all requirements needed for iterative decoding, as the output of the decoder can be split into three independent estimates: soft channel value, a priori term, and extrinsic value. The discussed algorithms are then applied to a parallel concatenated coding scheme with nonbinary component codes in conjunction with orthogonal signaling.

17.
Accumulate-Repeat-Accumulate Codes   (Cited by: 1; self-citations: 0, others: 1)
In this paper, we propose an innovative channel coding scheme called accumulate-repeat-accumulate (ARA) codes. This class of codes can be viewed as serial turbo-like codes or as a subclass of low-density parity-check (LDPC) codes, and they have a projected graph or protograph representation; this allows for high-speed iterative decoding implementation using belief propagation. An ARA code can be viewed as a precoded repeat-accumulate (RA) code with puncturing or as a precoded irregular repeat-accumulate (IRA) code, where simply an accumulator is chosen as the precoder. The performance improvement due to the precoder is called the precoding gain. Using density evolution on their associated protographs, we find some rate-1/2 ARA codes with a maximum variable node degree of 5 for which a minimum bit SNR as low as 0.08 dB from the channel capacity threshold is achieved as the block size goes to infinity. Such a low threshold cannot be achieved by RA, IRA, or unstructured irregular LDPC codes with the same constraint on the maximum variable node degree. Furthermore, by puncturing the inner accumulator, we can construct families of higher-rate ARA codes with thresholds that stay uniformly close to their respective channel capacity thresholds. Iterative decoding simulation results are provided and compared with turbo codes. In addition to the iterative decoding analysis, we analyzed the performance of ARA codes with maximum-likelihood (ML) decoding. By obtaining the weight distribution of these codes and using the existing tightest bounds, we have shown that the ML SNR threshold of ARA codes also approaches that of random codes very closely. These codes have better interleaving gain than turbo codes.

18.
A new symbol-by-symbol maximum a posteriori (MAP) decoding algorithm for high-rate convolutional codes using reciprocal dual convolutional codes is presented. The advantage of this approach is a reduction of the computational complexity, since the number of codewords to consider is decreased for codes of rate greater than 1/2. The discussed algorithms fulfil all requirements for iterative (“turbo”) decoding schemes. Simulation results are presented for high-rate parallel concatenated convolutional codes (“turbo” codes) using an AWGN channel or a perfectly interleaved Rayleigh fading channel. It is shown that iterative decoding of high-rate codes results in high-gain, moderate-complexity coding.

19.
In this correspondence, the bit-error probability P_b for maximum-likelihood decoding of binary linear block codes is investigated. The contribution P_b(j) of each information bit j to P_b is considered and an upper bound on P_b(j) is derived. For randomly generated codes, it is shown that the conventional high-SNR approximation P_b ≈ (d_H/N)·P_s, where P_s represents the block error probability, holds for systematic encoding only. Systematic encoding also provides the minimum P_b when the inverse mapping corresponding to the generator matrix of the code is used to retrieve the information sequence. The bit-error performances corresponding to other generator matrix forms are also evaluated. Although derived for codes with a randomly generated generator matrix, these results are shown to provide good approximations for codes used in practice. Finally, for soft-decision decoding methods which require a generator matrix with a particular structure, such as trellis decoding, multistage decoding, or algebraic-based soft-decision decoding, equivalent schemes that reduce the bit-error probability are discussed. Although the gains achieved at practical bit-error rates are only a fraction of a decibel, they remain meaningful as they are of the same order as the error performance differences between optimum and suboptimum decoding. Most importantly, these gains are free, as they are achieved with no or little additional circuitry that is transparent to the conventional implementation.
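A quick numerical check of the high-SNR approximation P_b ≈ (d_H/N)·P_s quoted above, using made-up numbers (none of these values are taken from the correspondence):

```python
# Hypothetical code parameters: minimum distance d_H, block length N,
# and an assumed block error probability P_s.
d_H, N = 12, 64
P_s = 1e-5
P_b = (d_H / N) * P_s
print(f"approximate bit-error probability: {P_b:.2e}")   # about 1.9e-06
```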

20.
Although Low-Density Parity-Check (LDPC) codes perform admirably for large block sizes — being mostly resilient to low levels of channel SNR and errors in channel equalization — real-time operation and low computational effort require small and medium-sized codes, which tend to be affected by these two factors. For these small-to-medium codes, a method for designing efficient regular codes is presented and a new technique for reducing the dependency on correct channel equalization is proposed, without much change in the inner workings or architecture of existing LDPC decoders. This goal is achieved by an improved intrinsic Log-Likelihood Ratio (LLR) estimator in the LDPC decoder — the ILE-Decoder — which only uses LDPC decoder-side information gathered during standard LDPC decoding. This information is used to improve the channel parameter estimation, thus improving the reliability of the code correction while reducing the number of iterations required for a successful decoding. Methods for fast encoding and decoding of LDPC codes are presented, highlighting the importance of assuring low encoding/decoding latency while maintaining high throughput. The assumptions and rules that govern the estimation process via subcarrier corrected-bit accounting are presented, and the Bayesian inference estimation process is detailed. This scheme is suitable for application to multicarrier communications, such as OFDM. Simulation results in a PLC-like environment that confirm the good performance of the proposed LDPC coder/decoder are presented.
