Similar Documents
20 similar documents found.
1.
For high-rate k/n convolutional codes (k/n > 0.5), a trellis-based implementation of a posteriori probability (APP) decoders is less complex on the dual code trellis, owing to its branch complexity (2^(n-k)) being lower than that of the code trellis (2^k). The log scheme used for APP decoders is not attractive for practical implementation owing to heavy quantisation requirements. As an alternative, an arc hyperbolic tangent (AHT) scheme for implementing the dual-APP decoder is presented. The trellis-based implementation of this AHT dual-APP decoder is discussed, and some fundamental differences between primal APP and dual APP decoders that affect a quantised implementation are reported.
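A minimal illustration (not the formulation of the letter above) of why arc hyperbolic tangent arithmetic appears in dual-code APP decoding: the sketch below computes the extrinsic LLR of one bit of a single parity-check code, the simplest code whose dual-based APP decoder uses the well-known tanh/atanh combining rule. The function name and the clipping constant are illustrative assumptions.

```python
import math

def spc_extrinsic_llr(llrs, target):
    """Extrinsic LLR of bit `target` in a single parity-check codeword.

    Dual-code (parity-check based) APP decoding combines soft values
    multiplicatively in the tanh domain and returns to the LLR domain
    with the arc hyperbolic tangent (atanh).
    """
    prod = 1.0
    for j, llr in enumerate(llrs):
        if j != target:
            prod *= math.tanh(llr / 2.0)
    # Clip so atanh stays finite under fixed-point/quantised arithmetic.
    prod = max(min(prod, 1.0 - 1e-12), -1.0 + 1e-12)
    return 2.0 * math.atanh(prod)

# Example: channel LLRs of a (4,3) single parity-check codeword.
print(spc_extrinsic_llr([1.2, -0.4, 2.3, 0.8], target=0))
```

The wide dynamic range of the atanh output hints at why quantised implementations of such decoders are delicate, which is the kind of issue the abstract refers to.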

2.
The trellis of a finite Abelian group code is locally (i.e., trellis section by trellis section) related to the trellis of the corresponding dual group code, which allows one to express the basic operations of the a posteriori probability (APP) decoding algorithm (defined on a single trellis section of the primal trellis) in terms of the corresponding dual trellis section. Using this local approach, any algorithm employing the same type of operations as the APP algorithm can, thus, be dualized, even if the global dual code does not exist (e.g., nongroup codes represented by a group trellis). Given this, the complexity advantage of the dual approach for high-rate codes can be generalized to a broader class of APP decoding algorithms, including suboptimum algorithms approximating the true APP, which may be more attractive in practical applications due to their reduced complexity. Moreover, the local approach opens the way for mixed approaches where the operations of the APP algorithm are not exclusively performed on the primal or dual trellis. This is inevitable if the code does not possess a trellis consisting solely of group trellis sections, as is the case, e.g., for certain terminated group or ring codes. The complexity reduction offered by applying dualization is evaluated. As examples, we give a dual implementation of a suboptimum APP decoding algorithm for tailbiting convolutional codes, as well as dual implementations of APP algorithms of the sliding-window type. Moreover, we evaluate their performance for decoding usual tailbiting codes or convolutional codes, respectively, as well as their performance as component decoders in iteratively decoded parallel concatenated schemes.

3.
A symbol-by-symbol maximum a posteriori (MAP) decoding algorithm for high-rate convolutional codes applying reciprocal dual convolutional codes is presented. The advantage of this approach is a reduction of the computational complexity since the number of codewords to consider is decreased. All requirements for iterative decoding schemes are fulfilled. Since tail-biting convolutional codes are equivalent to quasi-cyclic block codes, the decoding algorithm for truncated or terminated convolutional codes is modified to obtain a soft-in/soft-out decoder for high-rate quasi-cyclic block codes, which also uses the dual code for complexity reasons. Additionally, quasi-cyclic block codes are investigated as component codes for parallel concatenation. Simulation results obtained by iterative decoding are compared with union bounds for maximum likelihood decoding. The results of a search for high-rate quasi-cyclic block codes are given in the appendix.

4.
This correspondence deals with the design and decoding of high-rate convolutional codes. After proving that every (n,n-1) convolutional code can be reduced to a structure that concatenates a block encoder associated with the parallel edges with a convolutional encoder defining the trellis section, the results of an exhaustive search for the optimal (n,n-1) convolutional codes are presented through various tables of best high-rate codes. The search is also extended to find the "best" recursive systematic convolutional encoders to be used as component encoders of parallel concatenated "turbo" codes. A decoding algorithm working on the dual code is introduced (in both multiplicative and additive form), by showing that, with a suitable change in the representation of the soft information passed between constituent decoders in the iterative decoding process, the soft-input soft-output (SISO) modules of the decoder based on the dual code become equal to those used for the original code. A new technique to terminate the code trellis that significantly reduces the rate loss induced by the addition of terminating bits is described. Finally, an inverse puncturing technique applied to the highest rate "mother" code to yield a sequence of almost optimal codes with decreasing rates is proposed. Simulation results applied to the case of parallel concatenated codes show the significant advantages of the newly found codes in terms of performance and decoding complexity.

5.
Symbol-by-symbol maximum a posteriori (MAP) decoding algorithms for nonbinary block and convolutional codes over an extension field GF(p^a) are presented. Equivalent MAP decoding rules employing the dual code are given, which are computationally more efficient for high-rate codes. It is shown that these algorithms meet all requirements needed for iterative decoding as the output of the decoder can be split into three independent estimates: soft channel value, a priori term and extrinsic value. The discussed algorithms are then applied to a parallel concatenated coding scheme with nonbinary component codes in conjunction with orthogonal signaling.

6.
This letter considers high-rate block turbo codes (BTC) obtained by concatenation of two single-error-correcting Reed-Solomon (RS) constituent codes. Simulation results show that these codes perform within 1 dB of the theoretical limit for binary transmission over additive white Gaussian noise with a low-complexity decoder. A comparison with Bose-Chaudhuri-Hocquenghem (BCH) BTCs of similar code rate reveals that RS BTCs have interesting advantages in terms of memory size and decoder complexity for very-high-data-rate decoding architectures.

7.
To further reduce the decoding complexity of low-density parity-check (LDPC) codes, a log-domain iterative a posteriori probability log-likelihood ratio (APP LLR) algorithm is presented, based on the classical belief propagation (BP) decoding algorithm. Performance simulations and analysis of the probability-domain sum-product algorithm (SPA) and the log-domain iterative APP LLR algorithm show that the iterative APP LLR algorithm trades a small performance loss for a substantial reduction in complexity. Using the iterative APP LLR algorithm together with VHF-band channel models for different terrain conditions, the performance of an LDPC encoding and decoding system is then simulated. Both the theoretical analysis and the simulation results show that LDPC codes decoded with the iterative APP LLR algorithm are simple to implement, perform well, and are promising for engineering applications.
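The specific iterative APP LLR recursion referred to above is not reproduced here; as a sketch of the kind of complexity trade-off involved in LDPC decoding, the following compares the exact log-domain sum-product check-node update with the widely used min-sum simplification, which replaces the tanh/atanh products by a sign product and a minimum magnitude. Function names are illustrative.

```python
import math

def check_node_exact(llrs_in):
    """Exact log-domain sum-product check-node update (tanh rule)."""
    out = []
    for i in range(len(llrs_in)):
        prod = 1.0
        for j, l in enumerate(llrs_in):
            if j != i:
                prod *= math.tanh(l / 2.0)
        prod = max(min(prod, 1 - 1e-12), -1 + 1e-12)
        out.append(2.0 * math.atanh(prod))
    return out

def check_node_min_sum(llrs_in):
    """Min-sum approximation: sign product and minimum magnitude only."""
    out = []
    for i in range(len(llrs_in)):
        sign, mag = 1.0, float("inf")
        for j, l in enumerate(llrs_in):
            if j != i:
                sign *= 1.0 if l >= 0 else -1.0
                mag = min(mag, abs(l))
        out.append(sign * mag)
    return out

llrs = [0.9, -1.7, 2.2, -0.3]
print(check_node_exact(llrs))
print(check_node_min_sum(llrs))
```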

8.
A new symbol-by-symbol maximum a posteriori (MAP) decoding algorithm for high-rate convolutional codes using reciprocal dual convolutional codes is presented. The advantage of this approach is a reduction of the computational complexity since the number of codewords to consider is decreased for codes of rate greater than 1/2. The discussed algorithms fulfil all requirements for iterative (“turbo”) decoding schemes. Simulation results are presented for high-rate parallel concatenated convolutional codes (“turbo” codes) using an AWGN channel or a perfectly interleaved Rayleigh fading channel. It is shown that iterative decoding of high-rate codes results in high-gain, moderate-complexity coding.

9.
We present a bandwidth-efficient channel coding scheme that has an overall structure similar to binary turbo codes, but employs trellis-coded modulation (TCM) codes (including multidimensional codes) as component codes. The combination of turbo codes with powerful bandwidth-efficient component codes leads to a straightforward encoder structure, and allows iterative decoding in analogy to the binary turbo decoder. However, certain special conditions may need to be met at the encoder, and the iterative decoder needs to be adapted to the decoding of the component TCM codes. The scheme has been investigated for 8-PSK, 16-QAM, and 64-QAM modulation schemes with varying overall bandwidth efficiencies. A simple code selection based on the minimal distance of the punctured component code has also been carried out. The interset distances of the partitioning tree can be used to fix the number of coded and uncoded bits. We derive the symbol-by-symbol MAP component decoder operating in the log domain, and apply methods of reducing decoder complexity. Simulation results are presented and compare the scheme with traditional TCM as well as turbo codes with Gray mapping. The results show that the novel scheme is very powerful, yet of modest complexity since simple component codes are used.
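As a sketch of the log-domain MAP arithmetic mentioned above (not the paper's decoder itself), the following shows the Jacobian logarithm ("max-star") operation on which log-MAP recursions are built, together with the max-log simplification commonly used to reduce decoder complexity.

```python
import math

def max_star(a, b):
    """Jacobian logarithm: log(exp(a) + exp(b)) computed stably.

    This is the core operation of a log-domain symbol-by-symbol MAP
    (log-MAP) decoder; dropping the correction term yields the lower
    complexity max-log-MAP approximation.
    """
    return max(a, b) + math.log1p(math.exp(-abs(a - b)))

def max_star_approx(a, b):
    """Max-log approximation: correction term omitted (or table look-up)."""
    return max(a, b)

print(max_star(1.0, 1.5), max_star_approx(1.0, 1.5))
print(math.log(math.exp(1.0) + math.exp(1.5)))  # reference value
```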

10.
Use of the Viterbi decoder to decode the (63, 57) Hamming code is considered. Implementation and performance of systematic and nonsystematic codes are addressed. It is shown that a Viterbi decoder for the constraint length seven, rate-½ convolutional code can be used to decode both systematic and nonsystematic (63, 57) Hamming codes, but an additional step is needed to complete the decoding of the systematic code. Bounds and simulation results for postdecoding bit-error probability are given and it is shown that the systematic code performs 0.4 dB better than the nonsystematic code. A heuristic explanation is provided.

11.
A study of reduced complexity concatenated coding schemes, for commercial digital satellite systems with low-cost earth terminals, is reported. The study explored trade-offs between coding gain, overall rate and decoder complexity, and compared concatenated schemes with single codes. It concentrated on short block and constraint length inner codes, with soft decision decoding, concatenated with a range of Reed-Solomon outer codes. The dimension of the inner code was matched to the outer code symbol size, and appropriate interleaving between the inner and outer codes was used. Very useful coding gains were achieved with relatively high-rate, low-complexity schemes. For example, concatenating the soft decision decoded (9,8) single parity check inner code with the CCSDS recommended standard Reed-Solomon outer code gives a coding gain of 4.8 dB at a bit error probability of 10^-5, with an overall rate of 0.78.
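The study's decoder is not detailed in the abstract; as an illustration of why a soft-decision decoded (9,8) single parity-check inner code is so inexpensive, the sketch below implements Wagner decoding, which is maximum-likelihood for a single parity-check code on a memoryless channel. The interface (LLR inputs, even overall parity) is an assumption made for the example.

```python
def wagner_decode(llrs):
    """Soft-decision (Wagner) decoding of a single parity-check codeword.

    `llrs` are channel LLRs, one per code bit (even overall parity assumed).
    Hard-decide every bit; if the parity check fails, flip the bit with the
    smallest |LLR|. This is ML decoding for an SPC code on a memoryless channel.
    """
    bits = [0 if l >= 0 else 1 for l in llrs]
    if sum(bits) % 2 != 0:  # parity violated
        weakest = min(range(len(llrs)), key=lambda i: abs(llrs[i]))
        bits[weakest] ^= 1
    return bits

# (9,8) example: nine noisy LLRs, even-parity codeword assumed.
print(wagner_decode([2.1, -1.8, 0.2, 3.0, -2.5, 1.1, -0.9, 2.7, 1.4]))
```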

12.
范雷  王琳  肖旻 《电子工程师》2006,32(8):21-24
Low-density parity-check (LDPC) codes are excellent linear block codes and are among the error-correcting codes that currently come closest to the Shannon limit. Compared with Turbo codes, LDPC codes achieve higher decoding speed and better bit-error-rate performance, and are therefore regarded as candidate error-correcting codes for next-generation communication systems and magnetic storage systems. LDPC decoding algorithms suited to hardware implementation are briefly introduced, and, based on soft-decision decoding rules, a decoder for an irregular LDPC code of rate 1/2 and frame length 504 bits is implemented in the Verilog hardware description language on a Xilinx Virtex2 6000 FPGA.

13.
We introduce new techniques for quantization over noisy channels with intersymbol interference. We focus on the decoding problem, and present a decoder structure that allows the decoding to be based on soft minimum mean-square error estimates of the transmitted bits. The new bit-estimate-based decoder provides a structured lower-complexity approximation of optimal decoding for general codebooks, and for so-called linear mapping codebooks, it is shown that its implementation becomes particularly simple. We investigate decoding based on optimal bit estimates, and on suboptimal estimates of lower computational complexity. We also consider encoder optimization and combined source-channel code design. Numerical simulations demonstrate that bit-estimate-based decoding is able to outperform a two-stage decision-based approach implemented using Viterbi sequence detection plus table look-up source decoding. The simulations also show that decoding based on suboptimal bit estimates performs well, at a considerably lowered complexity.
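A minimal sketch of the quantity the decoder above is built on, assuming the bit posteriors are available as LLRs: the MMSE estimate of an antipodal (+1/-1) bit is tanh(L/2). The function name is illustrative and this is not the paper's decoder structure.

```python
import math

def soft_bit_estimate(llr):
    """MMSE estimate of an antipodal (+1/-1) bit given its posterior LLR.

    E[b | r] = P(b=+1|r) - P(b=-1|r) = tanh(llr / 2); the magnitude acts
    as a reliability that a hard-decision (table look-up) decoder discards.
    """
    return math.tanh(llr / 2.0)

for llr in (0.2, 1.0, 4.0):
    print(llr, soft_bit_estimate(llr))
```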

14.
A Bidirectional Efficient Algorithm for Searching code Trees (BEAST) is proposed for efficient soft-output decoding of block codes and concatenated block codes. BEAST operates on trees corresponding to the minimal trellis of a block code and finds a list of the most probable codewords. The complexity of the BEAST search is significantly lower than the complexity of trellis-based algorithms, such as the Viterbi algorithm and its list generalizations. The outputs of BEAST, a list of best codewords and their metrics, are used to obtain approximate a posteriori probabilities (APPs) of the transmitted symbols, yielding a soft-input soft-output (SISO) symbol decoder referred to as the BEAST-APP decoder. This decoder is employed as a component decoder in iterative schemes for decoding of product and incomplete product codes. Its performance and convergence behavior are investigated using extrinsic information transfer (EXIT) charts and compared to existing decoding schemes. It is shown that the BEAST-APP decoder achieves performances close to the Bahl–Cocke–Jelinek–Raviv (BCJR) decoder with a substantially lower computational complexity.
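BEAST itself is not sketched here; the following only illustrates the final step the abstract describes, turning a list of candidate codewords and their metrics into approximate per-bit soft outputs by a max-log rule over the list. The names and the metric convention (negative log-likelihood, smaller is better) are assumptions for the example.

```python
def list_to_bit_llrs(codewords, metrics):
    """Approximate per-bit LLRs from a list of candidate codewords.

    `codewords` are equal-length 0/1 lists, `metrics` their negative
    log-likelihoods (smaller = more probable). For each bit position the
    LLR is approximated by the metric difference between the best
    codeword with a 0 and the best codeword with a 1 in that position
    (max-log approximation over the list).
    """
    n = len(codewords[0])
    llrs = []
    for pos in range(n):
        best0 = min((m for c, m in zip(codewords, metrics) if c[pos] == 0),
                    default=float("inf"))
        best1 = min((m for c, m in zip(codewords, metrics) if c[pos] == 1),
                    default=float("inf"))
        llrs.append(best1 - best0)  # > 0 favours bit value 0
    return llrs

cands = [[0, 0, 0, 0], [1, 1, 0, 0], [0, 1, 1, 0]]
print(list_to_bit_llrs(cands, metrics=[0.7, 1.9, 2.4]))
```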

15.
We present a novel symbol-based soft-input a posteriori probability (APP) decoder for packetized variable-length encoded source indexes transmitted over wireless channels where the residual redundancy after source encoding is exploited for error protection. In combination with a mean-square or maximum APP estimation of the reconstructed source data, the whole decoding process is close to optimal. Furthermore, solutions for the proposed APP decoder with reduced complexity are discussed and compared to the near-optimal solution. When, in addition, channel codes are employed for protecting the variable-length encoded data, an iterative source-channel decoder can be obtained in the same way as for serially concatenated codes, where the proposed APP source decoder then represents one of the two constituent decoders. The simulation results show that this iterative decoding technique leads to substantial error protection for variable-length encoded correlated source signals, especially when they are transmitted over highly corrupted channels.

16.
This paper investigates trellis structures of linear block codes for the integrated circuit (IC) implementation of Viterbi decoders capable of achieving high decoding speed while satisfying a constraint on the structural complexity of the trellis in terms of the maximum number of states at any particular depth. Only uniform sectionalizations of the code trellis diagram are considered. An upper bound on the number of parallel and structurally identical (or isomorphic) subtrellises in a proper trellis for a code, without exceeding the maximum state complexity of the minimal trellis of the code, is first derived. Parallel structures of trellises with various section lengths for binary BCH and Reed-Muller (RM) codes of lengths 32 and 64 are analyzed. Next, the complexity of the IC implementation of a Viterbi decoder based on an L-section trellis diagram for a code is investigated. A structural property of a Viterbi decoder called add-compare-select (ACS)-connectivity, which is related to state connectivity, is introduced. This parameter affects the complexity of wire-routing (interconnections within the IC). The effect of five parameters, namely (1) effective computational complexity, (2) complexity of the ACS-circuit, (3) traceback complexity, (4) ACS-connectivity, and (5) branch complexity of a trellis diagram, on the very large scale integration (VLSI) complexity of a Viterbi decoder is investigated. It is shown that an IC implementation of a Viterbi decoder based on a nonminimal trellis requires less area and is capable of operation at higher speed than one based on the minimal trellis when the commonly used ACS-array architecture is considered.

17.
A maximum a posteriori (MAP) probability decoder of a block code minimizes the probability of error for each transmitted symbol separately. The standard way of implementing MAP decoding of a linear code is the Bahl-Cocke-Jelinek-Raviv (BCJR) algorithm, which is based on a trellis representation of the code. The complexity of the BCJR algorithm for the first-order Reed-Muller (RM-1) codes and Hamming codes is proportional to n^2, where n is the code's length. In this correspondence, we present new MAP decoding algorithms for binary and nonbinary RM-1 and Hamming codes. The proposed algorithms have complexities proportional to q^2 n log_q n, where q is the alphabet size. In particular, for the binary codes this yields complexity of order n log n.
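The proposed MAP algorithms are not reproduced here; as a reminder of where an n log n complexity for RM-1 codes comes from, the sketch below gives the fast Hadamard (Walsh) transform that underlies correlation decoding of first-order Reed-Muller codes. It is a generic FHT, not the correspondence's algorithm, and the example values are illustrative.

```python
def fast_hadamard_transform(values):
    """Fast Hadamard (Walsh) transform of a length-2^m sequence.

    Correlation decoding of a first-order Reed-Muller code RM(1, m)
    correlates the received soft values with all affine codeword
    patterns at once; the butterfly structure gives the n log n
    operation count (n = 2^m) quoted for RM-1 decoding.
    """
    vals = list(values)
    n = len(vals)
    assert n and n & (n - 1) == 0, "length must be a power of two"
    h = 1
    while h < n:
        for start in range(0, n, 2 * h):
            for i in range(start, start + h):
                a, b = vals[i], vals[i + h]
                vals[i], vals[i + h] = a + b, a - b
        h *= 2
    return vals

# Soft values for a length-8 RM(1,3) word; the largest |entry| of the
# transform identifies the most likely information pattern.
print(fast_hadamard_transform([0.9, -1.1, 1.2, -0.8, 1.0, -0.7, 0.95, -1.3]))
```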

18.
Turbo product codes (TPC) are high-rate codes widely used in band-limited communication systems, but most TPC decoders suffer from complex structure, high resource consumption, and long processing delay. To address this, a decoder with an interleaved parallel pipelined processing structure is proposed. Algorithmic optimizations, such as suitably ordering the test sequences during decoding and replacing the minimum-Euclidean-distance computation with a correlation operation, simplify the implementation complexity of the decoder; field-programmable gate array (FPGA) resource consumption is reduced by 35% compared with a conventional design, and the decoding speed is increased. The decoder is implemented in hardware on a Xilinx XC5VSX95T FPGA and achieves a decoding speed of 80 Mbit/s; the decoding throughput can be further increased by adding more sub-decoders.
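A small sketch of the algebraic point mentioned above, replacing the minimum-Euclidean-distance computation with a correlation operation: for BPSK-mapped candidate codewords, the two metrics induce the same ranking, since ||r - x||^2 = ||r||^2 + n - 2<r, x>. The candidate list and the mapping convention (0 -> +1, 1 -> -1) are assumptions for the example.

```python
def euclidean_sq(received, codeword_bits):
    """Squared Euclidean distance to the BPSK image (0 -> +1, 1 -> -1)."""
    return sum((r - (1.0 - 2.0 * b)) ** 2 for r, b in zip(received, codeword_bits))

def correlation(received, codeword_bits):
    """Correlation with the BPSK image; larger is better."""
    return sum(r * (1.0 - 2.0 * b) for r, b in zip(received, codeword_bits))

received = [0.8, -1.2, 0.3, 1.5]
candidates = [[0, 1, 0, 0], [0, 1, 1, 0], [1, 1, 0, 0]]

# Both metrics select the same candidate codeword.
by_distance = min(candidates, key=lambda c: euclidean_sq(received, c))
by_correlation = max(candidates, key=lambda c: correlation(received, c))
print(by_distance, by_correlation)
```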

19.
Concatenated coding schemes consist of the combination of two or more simple constituent encoders and interleavers. The parallel concatenation known as “turbo code” has been shown to yield remarkable coding gains close to theoretical limits, yet admitting a relatively simple iterative decoding technique. The recently proposed serial concatenation of interleaved codes may offer superior performance to that of turbo codes. In both coding schemes, the core of the iterative decoding structure is a soft-input soft-output (SISO) a posteriori probability (APP) module. In this letter, we describe the SISO APP module that updates the APPs of the input and output bits of a code, and show how to embed it into an iterative decoder for a new hybrid concatenation of three codes, to fully exploit the benefits of the proposed SISO APP module.

20.
Iterative detection techniques are not limited to traditional concatenated coding systems; they can also be used to solve many detection/decoding problems in modern digital communications. With the appearance of Turbo codes, iterative decoding algorithms have been studied in depth and several simplified decoding algorithms have been proposed. Bit-interleaved coded modulation with iterative decoding (BICM-ID) is a highly efficient data transmission scheme. Bit interleaving and iterative decoding are the key factors behind the excellent performance of BICM-ID systems, and the choice of decoding algorithm affects not only receiver performance but also determines system complexity. This paper studies the influence of the iterative decoding algorithm on BICM-ID system performance and analyses the computational complexity of various decoding algorithms. Simulation results show that the log-APP algorithm offers good performance but also high complexity; simplified decoding algorithms reduce decoder complexity at the cost of some performance loss, and this loss decreases as channel conditions improve.
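As a sketch of the complexity/performance trade-off described above, the following compares an exact log-APP bit demapper with its max-log simplification for a small real-valued constellation. A priori feedback from the decoder, which BICM-ID uses in later iterations, is omitted for brevity; the constellation, labelling, and function names are assumptions for the example.

```python
import math

def bit_llrs(received, constellation, labels, noise_var, exact=True):
    """Bit LLRs for one received (real-valued) symbol.

    `constellation` lists the symbol amplitudes and `labels` the bit tuple
    mapped onto each of them. exact=True gives the log-APP demapper
    (log-sum-exp over symbols); exact=False gives the max-log simplification.
    """
    def combine(terms):
        if exact:
            m = max(terms)
            return m + math.log(sum(math.exp(t - m) for t in terms))
        return max(terms)

    # Gaussian channel metric for each constellation point.
    metric = [-(received - s) ** 2 / (2.0 * noise_var) for s in constellation]
    llrs = []
    for k in range(len(labels[0])):
        num = [m for m, lab in zip(metric, labels) if lab[k] == 0]
        den = [m for m, lab in zip(metric, labels) if lab[k] == 1]
        llrs.append(combine(num) - combine(den))
    return llrs

# Illustrative 4-PAM constellation with a Gray labelling.
const = [-3.0, -1.0, 1.0, 3.0]
labs = [(0, 0), (0, 1), (1, 1), (1, 0)]
print(bit_llrs(0.4, const, labs, noise_var=1.0, exact=True))
print(bit_llrs(0.4, const, labs, noise_var=1.0, exact=False))
```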
