Similar Literature
 20 similar documents found (search time: 31 ms)
1.
SISO decoding for block codes can be carried out based on a trellis representation of the code. However, the complexity entailed by such decoding is most often prohibitive and thus prevents practical implementation. This paper examines a new decoding scheme based on the soft-output Viterbi algorithm (SOVA) applied to a sectionalized trellis for linear block codes. The computational complexities of the new SOVA decoder and of the conventional SOVA decoder, based on a bit-level trellis, are theoretically analyzed and derived for different linear block codes. These results are used to obtain optimum sectionalizations of a trellis for SOVA. For comparison, the optimum sectionalizations for the maximum a posteriori (MAP) and maximum logarithm MAP (Max-Log-MAP) algorithms, and their corresponding computational complexities, are included. The results confirm that the new SOVA decoder is the most computationally efficient SISO decoder in comparison with the MAP and Max-Log-MAP algorithms. Simulation results for the bit error rate (BER) performance, assuming binary phase-shift keying (BPSK) and an additive white Gaussian noise (AWGN) channel, demonstrate that the performance of the new decoding scheme is not degraded. The BER performance of iterative SOVA decoding of serially concatenated block codes shows no difference in the quality of the soft outputs of the new decoding scheme and of the conventional SOVA.
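Several of the abstracts in this list rely on bit-level trellises of linear block codes. As a point of reference, here is a minimal sketch (an illustration only, not the sectionalized-trellis SOVA decoder described above) that builds the forward syndrome ("Wolf") trellis on which SISO decoders such as SOVA, MAP and Max-Log-MAP operate; the (7,4) Hamming parity-check matrix is an assumed toy example.

```python
import numpy as np

# Parity-check matrix of the (7,4) Hamming code -- an assumed toy example.
H = np.array([[1, 0, 1, 1, 1, 0, 0],
              [1, 1, 0, 1, 0, 1, 0],
              [1, 1, 1, 0, 0, 0, 1]], dtype=int)

def wolf_trellis(H):
    """Forward expansion of the syndrome (Wolf) trellis of a binary block code.

    States at depth i are the partial syndromes of the first i code bits;
    codewords are exactly the length-n paths that end in the all-zero state.
    (A minimal trellis would additionally prune states that cannot reach zero.)
    """
    m, n = H.shape
    sections, states = [], {(0,) * m}
    for i in range(n):
        edges, next_states = [], set()
        for s in states:
            for bit in (0, 1):
                nxt = tuple((np.array(s) + bit * H[:, i]) % 2)
                edges.append((s, bit, nxt))
                next_states.add(nxt)
        sections.append(edges)
        states = next_states
    return sections

sections = wolf_trellis(H)
print([len(sec) for sec in sections])   # branches per bit-level section (<= 2**(m+1))
```

Sectionalization, as studied in the paper above, merges consecutive bit-level sections of such a trellis into multi-bit sections, trading branch complexity against the number of decoding stages.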

2.
Martin, I.; Honary, B. Electronics Letters, 2000, 36(3): 217-218
A novel code combining system based on Reed-Muller codes is presented. Because of their simple structure, RM codes are simple to decode using a trellis-based soft maximum likelihood decoder (SMLD). The decoder exploits the modular structure of the RM code to construct a set of nested trellises which minimise the complexity of the decoder by re-using the results of previous decoding attempts. A protocol utilising this technique to produce an efficient code-combining ARQ scheme is also introduced.

3.
It is demonstrated how modulation schemes based on QPSK can be directly incorporated into QAM-based systems. It is argued that this leads directly to an easily implementable structure that is efficient in both bandwidth and data reliability. It is contended that the correct solution to the concatenated coding problem for HDTV transmission is simply to extend the modulation codes developed for QPSK-to-QAM modulation. In nonconcatenated situations, a trellis code based on a rate-2/3 binary code is usually best. However, this is not the case for higher error rates at the output of the trellis decoder (e.g., when a symbol-error-correcting decoder follows as a concatenated code). The reason follows from an analysis of the effect of the number of nearest neighbors on the error rate. A four-way partition of QAM is a natural extension of QPSK modulation; it is a simple matter to incorporate any good QPSK code into a trellis coding scheme for QAM modulation. A concatenated coding scheme based on QPSK trellis codes and symbol-error-correcting coding is proposed. An example is presented to show the advantages of this approach.

4.
A Bidirectional Efficient Algorithm for Searching code Trees (BEAST) is proposed for efficient soft-output decoding of block codes and concatenated block codes. BEAST operates on trees corresponding to the minimal trellis of a block code and finds a list of the most probable codewords. The complexity of the BEAST search is significantly lower than the complexity of trellis-based algorithms, such as the Viterbi algorithm and its list generalizations. The outputs of BEAST, a list of best codewords and their metrics, are used to obtain approximate a posteriori probabilities (APPs) of the transmitted symbols, yielding a soft-input soft-output (SISO) symbol decoder referred to as the BEAST-APP decoder. This decoder is employed as a component decoder in iterative schemes for decoding of product and incomplete product codes. Its performance and convergence behavior are investigated using extrinsic information transfer (EXIT) charts and compared to existing decoding schemes. It is shown that the BEAST-APP decoder achieves performance close to the Bahl–Cocke–Jelinek–Raviv (BCJR) decoder with a substantially lower computational complexity.

5.
Trellis source codes consist of a finite-state machine decoder and a trellis search algorithm, such as the Viterbi algorithm, as the encoder. The encoder experiments with a local copy of the decoder and determines the best channel path map in the sense that it will yield the smallest average distortion between the source sequence and the reproduction sequence given the codebook. In this paper we present a coding system and a design algorithm for predictive trellis coding. Results obtained via simulation are compared for trellis and predictive trellis codes designed for first-order autoregressive sources with Gaussian and Laplacian innovations and for sampled speech. On a random source which models speech, simulation results of the predictive and nonpredictive trellis codes designed by the generalized Lloyd algorithm and those obtained by other researchers are compared. Issues related to computational complexity, the effects of initial codebook selection, training sequence segmentation, search length, channel errors, and algorithm convergence are addressed.

6.
Limited search trellis decoding of convolutional codes
The least storage and node computation required by a breadth-first tree or trellis decoder that corrects t errors over the binary symmetric channel is calculated. Breadth-first decoders work with code paths of the same length, without backtracking. The Viterbi algorithm is an exhaustive trellis decoder of this type; other schemes look at a subset of the tree or trellis paths. For random tree codes, theorems about the asymptotic number of paths required and their depth are proved. For concrete convolutional codes, the worst-case storage for t-error sequences is measured. In both cases the optimal decoder storage has the same simple dependence on t. The M algorithm and the algorithms proposed by G.J. Foschini (ibid., vol. IT-23, p. 605-9, Sept. 1977) and by S.J. Simmons (Ph.D. diss., Queen's Univ., Kingston, Ont., Canada) are optimal, or nearly so; they are all far more efficient than the Viterbi algorithm.
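The following sketch illustrates the breadth-first, limited-search idea (the M algorithm) mentioned above: at every trellis depth only the M best-metric paths are retained, so storage depends on M rather than on the number of encoder states. The rate-1/2, memory-2 code with octal generators (7, 5) and the hard-decision Hamming metric are assumptions made for the example; this is not the paper's storage analysis.

```python
# Assumed toy code: rate-1/2, memory-2 convolutional code with generators 7, 5 (octal).
G = (0o7, 0o5)
MEM = 2

def encode_step(state, bit):
    """One encoder step: return (pair of output bits, next state)."""
    reg = (bit << MEM) | state                       # shift-register contents
    out = tuple(bin(reg & g).count("1") % 2 for g in G)
    return out, reg >> 1

def m_algorithm(received, M=4):
    """Breadth-first decoding that keeps only the M best paths (Hamming metric)."""
    paths = [(0, 0, [])]                             # (metric, state, decoded bits)
    for r in received:                               # r = pair of hard-decision bits
        extended = []
        for metric, state, bits in paths:
            for b in (0, 1):
                out, nxt = encode_step(state, b)
                d = sum(o != x for o, x in zip(out, r))
                extended.append((metric + d, nxt, bits + [b]))
        extended.sort(key=lambda p: p[0])            # compare all extensions ...
        paths = extended[:M]                         # ... and keep the M survivors
    return paths[0][2]

# Example: an all-zero transmission with a single flipped channel bit.
print(m_algorithm([(0, 0), (1, 0), (0, 0), (0, 0)], M=4))
```

A small M already comes close to exhaustive Viterbi decoding at a fraction of the storage, which is the trade-off the paper quantifies.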

7.
A trellis code is homogeneous if the number of branches emanating from each node (or state) in the trellis diagram is constant. For example, convolutional codes are linear homogeneous trellis codes. A trellis code is nonhomogeneous if the number of branches emanating from each node in the trellis diagram is not the same. The two-user binary adder channel is a multiple-access channel with two binary inputs, x1 and x2, and one ternary output, y = x1 + x2, where the addition is done in the real number field. The adder channel is synchronous if both encoders and the decoder maintain block (frame) synchronism. It is quasi-synchronous if the encoders do not start their blocks at the same time, but the decoder knows the position of each block. The difference between the starting times of the blocks is called the slippage. The channel is asynchronous if no block synchronism exists among the encoders and the decoder. Some uniquely decodable code pairs (C1, C2) are presented that can be used to transmit information reliably over the quasi-synchronous binary adder channel with two users. One of the codes is a nonhomogeneous trellis code, the other is a common block code. Our code rates are better than those of the Deaett-Wolf codes and are close to or equal to the asymptotic rates of Kasami et al. A method for calculating the rates of nonhomogeneous trellis codes is described. An algorithm for finding more uniquely decodable code pairs for the quasi-synchronous binary adder channel is formulated.
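As a small illustration of the unique-decodability requirement for the two-user binary adder channel discussed above, here is a brute-force check on an assumed classical toy pair (not the trellis/block pair constructed in the paper):

```python
from itertools import product

def uniquely_decodable(C1, C2):
    """True iff every real-sum output y = c1 + c2 identifies the pair (c1, c2)."""
    seen = {}
    for c1, c2 in product(C1, C2):
        y = tuple(a + b for a, b in zip(c1, c2))     # ternary channel output
        if y in seen and seen[y] != (c1, c2):
            return False                             # two input pairs collide
        seen[y] = (c1, c2)
    return True

# Classical length-2 block-code pair for the synchronous adder channel.
C1 = [(0, 0), (1, 1)]
C2 = [(0, 0), (0, 1), (1, 0)]
print(uniquely_decodable(C1, C2))   # True; sum rate = log2(2*3)/2 ~ 1.29 bits/use
```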

8.
Almost all known probabilistic decoding algorithms for convolutional codes perform decoding without prior knowledge of the error locations. Here, we introduce a novel maximum-likelihood decoding algorithm for a new class of convolutional codes, named state transparent convolutional (STC) codes, whose properties make error detection and error locating possible prior to error correction. Hence, their decoding algorithm, termed here the STC decoder, allows an error-correcting algorithm to be applied only to the erroneous portions of the received sequence, referred to here as the error spans (ESPs). We further prove that the proposed decoder, which locates the ESPs and applies the Viterbi algorithm (VA) only to these portions, always yields a decoded path in the trellis identical to the one generated by the Viterbi decoder (VD). Because the STC decoder applies the VA only to the ESPs, the percentage of single-stage (per-codeword) trellis decoding performed by the STC decoder is considerably less than that of the VD, which is applied to the entire received sequence; this reduction is overwhelming for fading channels, where the erroneous codewords are mostly clustered. Furthermore, by applying the VA only to the ESPs, the resulting algorithm can be viewed as a new formulation of the VD for STC codes that, analogous to block decoding algorithms, provides predecoding error detection and error locating capabilities while performing less single-stage trellis decoding.

9.
李光球. 《电讯技术》, 1998, 38(1): 33-38
This paper describes the basic principles of turbo codes and the structure of the turbo decoder. A combined turbo coded modulation scheme is introduced and compared with trellis coded modulation.

10.
Results on efficient systolic-array realizations of Viterbi decoding of convolutional codes are presented. Various properties of convolutional codes are discussed. A technique called strongly connected trellis decoding is introduced to increase the efficient utilization of all the systolic array processors. Issues dealing with composite branch metric generation, survivor updating, overall system architecture, throughput rate, and computational overhead ratio are also investigated. The scheme is applicable to both hard and soft decoding of any rate b/n convolutional code. It is shown that as the length of the code becomes large, the systolic Viterbi decoder maintains a regular and general interconnection structure as well as a moderate throughput-rate gain over the sequential Viterbi decoder.
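The add-compare-select (ACS) recursion that each array processor carries out can be modelled in a few lines (a software illustration only, not the systolic architecture of the paper; the 4-state predecessor table and the placeholder branch metric below are assumptions made for the example):

```python
import math

def acs_stage(path_metrics, branch_metric, predecessors):
    """One Viterbi trellis stage: add, compare, select.

    path_metrics : dict state -> accumulated path metric
    branch_metric: function (prev_state, input_bit) -> metric of that branch
    predecessors : dict next_state -> list of (prev_state, input_bit) pairs
    """
    new_metrics, survivors = {}, {}
    for nxt, preds in predecessors.items():
        best = (math.inf, None)
        for prev, bit in preds:
            cand = path_metrics.get(prev, math.inf) + branch_metric(prev, bit)  # add
            if cand < best[0]:                                                  # compare
                best = (cand, (prev, bit))                                      # select
        new_metrics[nxt], survivors[nxt] = best
    return new_metrics, survivors

# Hypothetical 4-state (memory-2) code: next state s' = (bit << 1) | (s >> 1),
# so every next state has exactly two predecessors.
preds = {s2: [(s, b) for s in range(4) for b in (0, 1)
              if ((b << 1) | (s >> 1)) == s2] for s2 in range(4)}
metrics = {0: 0.0, 1: math.inf, 2: math.inf, 3: math.inf}   # start in state 0
dummy_bm = lambda prev, bit: float(bit)                     # placeholder metric
print(acs_stage(metrics, dummy_bm, preds))
```

The returned survivor table is the "survivor updating" bookkeeping mentioned in the abstract; in array implementations one such ACS unit is typically assigned to each state or group of states.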

11.
This paper investigates trellis structures of linear block codes for the integrated circuit (IC) implementation of Viterbi decoders capable of achieving high decoding speed while satisfying a constraint on the structural complexity of the trellis in terms of the maximum number of states at any particular depth. Only uniform sectionalizations of the code trellis diagram are considered. An upper bound on the number of parallel and structurally identical (or isomorphic) subtrellises in a proper trellis for a code, without exceeding the maximum state complexity of the minimal trellis of the code, is first derived. Parallel structures of trellises with various section lengths for binary BCH and Reed-Muller (RM) codes of lengths 32 and 64 are analyzed. Next, the complexity of the IC implementation of a Viterbi decoder based on an L-section trellis diagram for a code is investigated. A structural property of a Viterbi decoder called add-compare-select (ACS) connectivity, which is related to state connectivity, is introduced. This parameter affects the complexity of wire-routing (interconnections within the IC). The effect of five parameters, namely (1) effective computational complexity, (2) complexity of the ACS circuit, (3) traceback complexity, (4) ACS connectivity, and (5) branch complexity of a trellis diagram, on the very large scale integration (VLSI) complexity of a Viterbi decoder is investigated. It is shown that an IC implementation of a Viterbi decoder based on a nonminimal trellis requires less area and is capable of operation at higher speed than one based on the minimal trellis when the commonly used ACS-array architecture is considered.

12.
New algorithms for the design of trellis encoding data compression systems are described. The main algorithm uses a training sequence of actual data from a source to improve an initial trellis decoder. An additional algorithm extends the constraint length of a given decoder. Combined, these algorithms allow the automatic design of a trellis encoding system for a particular source. The algorithms' effectiveness for random sources is demonstrated through performance comparisons with other source coding systems and with theoretical bounds. The algorithms are applied to the practical problem of designing trellis and hybrid codes for medium-to-low-rate speech compression.

13.
Concatenated decoding with a reduced-search BCJR algorithm
We apply two reduced-computation variants of the BCJR algorithm to the decoding of serial and parallel concatenated convolutional codes. One computes its recursions at only M states per trellis stage; the other computes only at states with values above a threshold. The threshold scheme is much more efficient and greatly reduces the computation of the BCJR algorithm. By computing only when the channel demands it, the threshold scheme reduces the turbo decoder computation to one to four nodes per trellis stage after the second iteration.
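A schematic sketch of the threshold idea (assumed notation and a toy two-state example, not the exact variant in the paper): the forward (alpha) recursion propagates probability only from states whose normalised metric exceeds a threshold, so most trellis stages touch only a few states.

```python
import numpy as np

def forward_pass_thresholded(gamma, n_states, threshold=1e-3):
    """gamma[t] is a dict (state, next_state) -> branch probability at stage t."""
    alpha = np.zeros(n_states)
    alpha[0] = 1.0                                    # trellis starts in state 0
    alphas = [alpha.copy()]
    for stage in gamma:
        new = np.zeros(n_states)
        active = set(np.flatnonzero(alpha > threshold * alpha.max()))
        for (a, b), g in stage.items():
            if a in active:                           # skip negligible states
                new[b] += alpha[a] * g
        if new.sum() > 0:
            new /= new.sum()                          # normalise against underflow
        alphas.append(new.copy())
        alpha = new
    return alphas

# Toy two-state trellis with hypothetical branch probabilities, repeated 4 stages.
gamma = [{(0, 0): 0.8, (0, 1): 0.2, (1, 0): 0.3, (1, 1): 0.7}] * 4
for a in forward_pass_thresholded(gamma, n_states=2, threshold=0.05):
    print(np.round(a, 3))
```

The backward (beta) recursion would be restricted in the same way; the paper's observation is that after a couple of turbo iterations only a handful of states per stage remain above the threshold.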

14.
In this paper, a novel trellis source encoding scheme based on punctured ring convolutional codes is presented. Joint source and channel coding (JSCC) using trellis-coded continuous phase modulation (CPM) with punctured convolutional codes over rings is investigated. The channels considered are the additive white Gaussian noise (AWGN) channel and the Rayleigh fading channel. Optimal soft decoding for the proposed JSCC scheme is studied. The soft decoder is based on the a posteriori probability (APP) algorithm for trellis-coded CPM with punctured ring convolutional codes. It is shown that these systems with soft decoding outperform the same systems with hard decoding, especially when the systems operate at low to medium signal-to-noise ratio (SNR). Furthermore, adaptive JSCC approaches based on the proposed source coding scheme are investigated. Compared with JSCC schemes with fixed source coding rates, the proposed adaptive approaches achieve much better performance in the high-SNR region. The novelties of this work are the development of a trellis source encoding method based on punctured ring convolutional codes, the use of a soft decoder, the APP algorithm for the combined systems, and the adaptive approaches to the JSCC problem.
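Puncturing itself is straightforward to picture (a generic sketch; the periodic pattern below and the zero-LLR erasure convention are assumptions, not the ring-code patterns used in the paper): the mother code's output is thinned by a periodic pattern, and the decoder reinserts neutral values at the removed positions.

```python
def puncture(coded, pattern):
    """Transmit only the positions where the periodic pattern is 1."""
    return [c for i, c in enumerate(coded) if pattern[i % len(pattern)]]

def depuncture(received, pattern, n_coded, erasure=0.0):
    """Reinsert neutral values (e.g. zero LLRs) at the punctured positions."""
    it = iter(received)
    return [next(it) if pattern[i % len(pattern)] else erasure
            for i in range(n_coded)]

coded = [1, 0, 1, 1, 0, 0]                    # mother-code output bits
pattern = [1, 1, 0, 1]                        # drop the third of every four positions
tx = puncture(coded, pattern)                 # 5 of the 6 coded bits are sent
rx_llrs = [2.1, -1.3, 0.7, -0.4, -0.9]        # hypothetical soft values for tx
print(depuncture(rx_llrs, pattern, len(coded)))   # zero LLR at the punctured slot
```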

15.
Protecting short data frames by turbo coding is a challenging task because of the small interleaver size and the need for transmission efficiency. In this letter, turbo-decoding-metrics-aided short cyclic redundancy check (CRC) codes are applied to novel tailbiting encoded trellis codes with a twofold purpose: to stop the iterative decoding process for low-power design and to reduce the fractional coding-rate loss. Significant coding gains can be achieved by actually increasing the transmission rate with a negligible increase in power consumption. Performance improvement is demonstrated over additive white Gaussian noise channels. The savings are up to 21.4% for the transmission throughput and 21.5% for the energy consumption of the turbo decoder when a frame size of 49 is used.
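The CRC-based stopping rule can be sketched as follows (a generic illustration with assumed names: decode_iteration stands for one turbo iteration and is hypothetical, a 32-bit CRC replaces the letter's short metrics-aided CRC, and the frame length is assumed to be a multiple of 8 bits):

```python
import zlib

def crc_ok(bits, crc_value):
    """Check a 32-bit CRC appended by the transmitter (frame length % 8 == 0)."""
    data = bytes(int("".join(map(str, bits[i:i + 8])), 2)
                 for i in range(0, len(bits), 8))
    return zlib.crc32(data) == crc_value

def turbo_decode_with_stopping(llrs, decode_iteration, crc_value, max_iter=8):
    """Run turbo iterations, stopping as soon as the CRC check passes.

    decode_iteration(llrs) -> (updated llrs, hard-decision bits) is a placeholder
    for one iteration of an actual turbo decoder.
    """
    for it in range(1, max_iter + 1):
        llrs, hard = decode_iteration(llrs)
        if crc_ok(hard, crc_value):          # early stop saves iterations and energy
            return hard, it
    return hard, max_iter
```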

16.
For high-rate k/n convolutional codes (k/n > 0.5), a trellis-based implementation of a posteriori probability (APP) decoders is less complex on the dual-code trellis, owing to its branch complexity (2^(n-k)) being lower than that of the code trellis (2^k). The log scheme used for APP decoders is not attractive for practical implementation owing to heavy quantisation requirements. As an alternative, an arc hyperbolic tangent (AHT) scheme for implementing the dual-APP decoder is presented. The trellis-based implementation of this AHT dual-APP decoder is discussed, and some fundamental differences between primal APP and dual APP decoders that affect a quantised implementation are reported.
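To see why an arc hyperbolic tangent representation is natural here, recall the standard "tanh rule" for the LLR of a parity constraint, which is the elementary operation when working on the dual-code trellis. The sketch below is a generic soft-bit-domain illustration, not the letter's quantised AHT metric scheme.

```python
import math

def parity_llr(llrs):
    """LLR of a single parity check: L = 2 * atanh( prod_i tanh(L_i / 2) )."""
    sign, log_mag = 1.0, 0.0
    for L in llrs:
        t = math.tanh(L / 2.0)
        sign *= math.copysign(1.0, t)
        log_mag += math.log(abs(t) + 1e-300)           # work with log|tanh| to avoid underflow
    prod = sign * math.exp(log_mag)
    prod = max(min(prod, 1.0 - 1e-12), -1.0 + 1e-12)   # keep atanh finite
    return 2.0 * math.atanh(prod)

print(parity_llr([2.0, -1.5, 3.0]))   # extrinsic LLR of a 3-bit parity check
```

Representing each metric by the logarithm of |tanh(L/2)| together with its sign turns the product above into an addition, which is, loosely, the kind of simplification an AHT-domain representation targets; the exact mapping and its quantisation are what the letter analyses.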

17.
In this paper, we present a novel packetized bit-level decoding algorithm for variable-length encoded Markov sources, which calculates reliability information for the decoded bits in the form of a posteriori probabilities (APPs). An interesting feature of the proposed approach is that symbol-based source statistics in the form of the transition probabilities of the Markov source are exploited as a priori information on a bit-level trellis. This method is especially well-suited for long input blocks, since in contrast to other symbol-based APP decoding approaches, the number of trellis states does not depend on the packet length. When additionally the variable-length encoded source data is protected by channel codes, an iterative source-channel decoding scheme can be obtained in the same way as for serially concatenated codes. Furthermore, based on an analysis of the iterative decoder via extrinsic information transfer charts, it can be shown that by using reversible variable-length codes with a free distance of two, in combination with rate-1 channel codes and residual source redundancy, a reliable transmission is possible even for highly corrupted channels. This justifies a new source-channel encoding technique where explicit redundancy for error protection is only added in the source encoder.

18.
This paper deals with a posteriori probability (APP) decoding of high-rate convolutional codes using the dual code's trellis. After deriving the dual APP (DAPP) algorithm from the APP relation, its trellis-based implementation is addressed. The challenge involved in practical implementation of a DAPP decoder is then highlighted. Metric representation schemes similar to the log domain used for log-APP decoding are shown to be unattractive for DAPP decoding due to quantization requirements. After explaining the nature of the DAPP metrics, an arc hyperbolic tangent (AHT) scheme is proposed and its equivalent arithmetic operations are derived. By using an efficient approximation, an addition is carried out directly in the AHT domain. Efficient techniques for normalization and extrinsic log-likelihood ratio (LLR) calculation are presented which reduce implementation complexity significantly. Simulation results with different high-rate codes show that the AHT-DAPP decoder performs similarly to a log-APP decoder and, at the same time, performs better than a decoder for a punctured code. A fully fixed-point model of an AHT-DAPP decoder is shown to perform close to an optimum decoder. The decoding complexities of the log-APP and AHT-DAPP decoders are listed and compared for several rate-k/(k+1) codes. It is shown that an AHT-DAPP decoder becomes less complex from a code rate of 7/8. When compared against a max-log-APP decoder, the AHT-DAPP decoder is less complex at code rates of 9/10 and above.

19.
Random linear network coding is an efficient technique for disseminating information in networks, but it is highly susceptible to errors. Kötter-Kschischang (KK) codes and Mahdavifar-Vardy (MV) codes are two important families of subspace codes that provide error control in noncoherent random linear network coding. List decoding has been used to decode MV codes beyond half distance. Existing hardware implementations of the rank metric decoder for KK codes suffer from limited throughput, long latency and high area complexity. The interpolation-based list decoding algorithm for MV codes still has high computational complexity, and its feasibility for hardware implementation has not been investigated. In this paper we propose efficient decoder architectures for both KK and MV codes and present their hardware implementations. Two serial architectures are proposed for KK and MV codes, respectively. An unfolded decoder architecture, which offers high throughput, is also proposed for KK codes. The synthesis results show that the proposed architectures for KK codes are much more efficient than existing rank metric decoder architectures, and demonstrate that the proposed decoder architecture for MV codes is affordable.

20.
Multilevel codes show better performance than trellis codes on Rayleigh fading channels at comparable decoder complexity and bandwidth. However, they suffer from performance degradation due to error propagation in the multistage decoder. The authors, with a view to minimising the error propagation, compare three multilevel coded 16-QAM schemes: four-level codes, I/Q-separated two-level codes, and I/Q-separated two-level codes with a new set partitioning.
