Similar Documents
20 similar documents were retrieved.
1.
In this paper, we present a novel packetized bit-level decoding algorithm for variable-length encoded Markov sources, which calculates reliability information for the decoded bits in the form of a posteriori probabilities (APPs). An interesting feature of the proposed approach is that symbol-based source statistics in the form of the transition probabilities of the Markov source are exploited as a priori information on a bit-level trellis. This method is especially well-suited for long input blocks, since in contrast to other symbol-based APP decoding approaches, the number of trellis states does not depend on the packet length. When the variable-length encoded source data is additionally protected by channel codes, an iterative source-channel decoding scheme can be obtained in the same way as for serially concatenated codes. Furthermore, based on an analysis of the iterative decoder via extrinsic information transfer charts, it can be shown that by using reversible variable-length codes with a free distance of two, in combination with rate-1 channel codes and residual source redundancy, a reliable transmission is possible even for highly corrupted channels. This justifies a new source-channel encoding technique where explicit redundancy for error protection is only added in the source encoder.
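As a generic illustration of the APP computation on a bit-level trellis (a standard BCJR-style factorization, not necessarily this paper's exact notation), the source statistics enter through the branch metric:

```latex
P(b_k = b \mid \mathbf{y}) \;\propto \sum_{(s',s)\,:\,b_k(s',s)=b} \alpha_{k-1}(s')\,\gamma_k(s',s)\,\beta_k(s),
\qquad
\gamma_k(s',s) = p\big(y_k \mid b_k(s',s)\big)\, P(s \mid s'),
```

with the usual forward and backward recursions \alpha_k(s)=\sum_{s'} \alpha_{k-1}(s')\,\gamma_k(s',s) and \beta_{k-1}(s')=\sum_{s} \gamma_k(s',s)\,\beta_k(s); here P(s \mid s') carries the a priori information derived from the Markov-source transition probabilities.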

2.
In this letter, we present an improved index-based a posteriori probability (APP) decoding approach for the error-resilient transmission of packetized variable-length encoded Markov sources. The proposed algorithm is based on a novel two-dimensional (2D) state representation which leads to a three-dimensional trellis with unique state transitions. APP decoding on this trellis is realized by employing a 2D version of the BCJR algorithm in which all available source statistics can be fully exploited in the source decoder. When channel codes are additionally used, the proposed approach yields improved error-correction performance compared to a one-dimensional state representation.

3.
Joint source-channel decoding based on residual source redundancy is an effective paradigm for error-resilient data compression. While previous work only considered fixed-rate systems, the extension of these techniques to variable-length encoded data was independently proposed by the authors and by Demir and Sayood (see Proc. Data Comp. Conf., Snowbird, UT, p.139-48, 1998). We describe and compare the performance of a computationally complex exact maximum a posteriori (MAP) decoder, an efficient approximation of it, an alternative approximate decoder, and an improved version of the latter. Moreover, we evaluate several source and channel coding configurations. The results show that our approximate MAP technique outperforms other approximate methods and provides substantial error protection to variable-length encoded data.

4.
We propose an optimal joint source-channel maximum a posteriori probability decoder for variable-length encoded sources transmitted over a wireless channel, modeled as an additive-Markov channel. The state space introduced by the authors in a previous paper is used to handle the unique challenges posed by variable-length codes. Simulations demonstrate that this decoder performs substantially better than the standard Huffman decoder for a simple test source and is robust to inaccuracies in channel statistics estimates. The proposed algorithm also compares favorably to a standard forward error correction-based system.

5.
This paper proposes an optimal maximum a posteriori probability decoder for variable-length encoded sources over binary symmetric channels (BSCs) that uses a novel state space to deal with the problem of variable-length source codes in the decoder. This sequential, finite-delay, joint source-channel decoder delivers substantial improvements over the conventional decoder and also over a system that uses a standard forward error correcting code operating at the same overall bit rate. This decoder is also robust to inaccuracies in the estimation of channel statistics.
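To make the idea concrete, here is a deliberately tiny brute-force sketch (my own illustration, not the authors' sequential finite-delay decoder): every symbol sequence whose encoding has the received bit length is scored by the source prior times the BSC likelihood, and the most probable one wins. The code table, symbol probabilities, and crossover probability are assumed example values.

```python
import itertools
import math

# Assumed example code table and source probabilities (not from the paper)
CODE = {"a": "0", "b": "10", "c": "11"}
PROB = {"a": 0.5, "b": 0.3, "c": 0.2}

def bsc_loglik(sent: str, received: str, p: float) -> float:
    """log P(received | sent) for a binary symmetric channel with crossover probability p."""
    flips = sum(s != r for s, r in zip(sent, received))
    return flips * math.log(p) + (len(sent) - flips) * math.log(1.0 - p)

def map_decode(received: str, max_symbols: int, p: float = 0.05):
    """Return the symbol sequence maximizing P(symbols) * P(received | encoding)."""
    best, best_score = None, -math.inf
    for n in range(1, max_symbols + 1):
        for seq in itertools.product(CODE, repeat=n):
            bits = "".join(CODE[s] for s in seq)
            if len(bits) != len(received):
                continue  # only consider candidates of the received bit length
            score = sum(math.log(PROB[s]) for s in seq) + bsc_loglik(bits, received, p)
            if score > best_score:
                best, best_score = seq, score
    return best

if __name__ == "__main__":
    # "abca" encodes to "010110"; flip one bit and decode
    print(map_decode("011110", max_symbols=6))
```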

6.
Distributed Joint Source-Channel Coding of Video Using Raptor Codes
Extending recent works on distributed source coding, this paper considers distributed source-channel coding and targets the important application of scalable video transmission over wireless networks. The idea is to use a single channel code for both video compression (via Slepian-Wolf coding) and packet loss protection. First, we provide a theoretical code design framework for distributed joint source-channel coding over erasure channels and then apply it to the targeted video application. The resulting video coder is based on a cross-layer design where video compression and protection are performed jointly. We choose Raptor codes - the best approximation to a digital fountain - and address in detail both encoder and decoder designs. Using the received packets together with a correlated video available at the decoder as side information, we devise a new iterative soft-decision decoder for joint Raptor decoding. Simulation results show that, compared to one separate design using Slepian-Wolf compression plus erasure protection and another based on FGS coding plus erasure protection, the proposed joint design provides better video quality at the same number of transmitted packets. Our work is the first to capitalize on the latest advances in distributed source coding and near-capacity channel coding for robust video transmission over erasure channels.

7.
李建平  梁庆林 《电讯技术》2004,44(6):119-121
By adjusting the weighting coefficient applied to the received systematic values during iterative decoding, this paper proposes a weighted iterative decoding algorithm for Turbo codes. The algorithm changes the relative weight, within the soft output of the Turbo decoder after each iteration, of the received systematic information and of its extrinsic estimate, so that the Turbo code retains excellent error-correction performance at both low and high signal-to-noise ratios. Simulation results show that the weighted iterative decoding algorithm not only speeds up the convergence of Turbo decoding but also further lowers the error floor, so that the bit error rate of the Turbo code is further improved at both high and low SNR.
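A minimal sketch of the weighting idea (my own illustration; the function name, the AWGN/BPSK channel-LLR form, and the way the weight enters are assumptions rather than the paper's exact algorithm):

```python
import numpy as np

def weighted_soft_output(y_sys, L_apriori, L_extrinsic, snr_linear, weight):
    """Soft output of one constituent decoder with a weighted systematic term.

    y_sys       : received systematic samples (BPSK over AWGN assumed)
    L_apriori   : a priori LLRs coming from the other constituent decoder
    L_extrinsic : extrinsic LLRs computed by this decoder
    weight      : scaling applied to the systematic channel LLRs; varying it over
                  the iterations changes the balance between the received
                  systematic information and its extrinsic estimate
    """
    L_channel = 4.0 * snr_linear * np.asarray(y_sys)   # standard AWGN channel LLR
    return weight * L_channel + np.asarray(L_apriori) + np.asarray(L_extrinsic)
```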

8.
In this paper, a novel trellis source encoding scheme based on punctured ring convolutional codes is presented. Joint source and channel coding (JSCC) using trellis coded continuous phase modulation (CPM) with punctured convolutional codes over rings is investigated. The channels considered are the additive white Gaussian noise (AWGN) channel and the Rayleigh fading channel. Optimal soft decoding for the proposed JSCC scheme is studied. The soft decoder is based on the a posteriori probability (APP) algorithm for trellis coded CPM with punctured ring convolutional codes. It is shown that these systems with soft decoding outperform the same systems with hard decoding, especially when the systems operate at low to medium signal-to-noise ratio (SNR). Furthermore, adaptive JSCC approaches based on the proposed source coding scheme are investigated. Compared with JSCC schemes with fixed source coding rates, the proposed adaptive approaches can achieve much better performance in the high SNR region. The novelties of this work are the development of a trellis source encoding method based on punctured ring convolutional codes, the use of a soft decoder, the APP algorithm for the combined systems, and the adaptive approaches to the JSCC problem.

9.
Several recent publications have shown that joint source-channel decoding could be a powerful technique to take advantage of residual source redundancy for fixed- and variable-length source codes. This letter gives an in-depth analysis of a low-complexity method recently proposed by Guivarch et al., where the redundancy left by a Huffman encoder is used at a bit level in the channel decoder to improve its performance. Several simulation results are presented, showing for two first-order Markov sources of different sizes that using a priori knowledge of the source statistics yields a significant improvement, either with a Viterbi channel decoder or with a turbo decoder.
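One common way to exploit the residual redundancy of a Huffman code at the bit level (a generic formulation, not necessarily the exact one analyzed in this letter) is to turn the probabilities of the Huffman-tree nodes into a priori LLRs for the channel decoder:

```latex
L_a(b_k) \;=\; \log\frac{P(b_k = 0 \mid b_1,\dots,b_{k-1})}{P(b_k = 1 \mid b_1,\dots,b_{k-1})},
\qquad
P(b_k = b \mid \text{node } v) \;=\; \frac{\sum_{x \in \mathcal{L}(\mathrm{child}_b(v))} P(x)}{\sum_{x \in \mathcal{L}(v)} P(x)},
```

where \mathcal{L}(v) denotes the set of source symbols (leaves) reachable from node v of the Huffman tree, and v is the node reached after the already decoded bits of the current codeword.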

10.
Turbo codes are a practical solution for achieving large coding gains. We present a new turbo coding scheme where the component codes are convolutional codes (CCs) over the ring of integers modulo M, with M being the alphabet size of the source encoder. The a priori knowledge of the source statistics is used during the iterative decoding procedure for improved decoder performance. As an example application, we examine differential pulse code modulation (DPCM) encoded image transmission.
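A minimal sketch of a feedforward convolutional encoder over the ring of integers modulo M (illustrative generator taps, not those of the paper):

```python
def ring_conv_encode(symbols, taps, M):
    """Feedforward convolutional encoding over Z_M (rate 1/len(taps)).

    symbols : input sequence with elements in {0, ..., M-1}
    taps    : generator tap vectors, e.g. [(1, 2), (1, 1)] (assumed example taps)
    M       : ring size, matched to the alphabet of the source encoder
    """
    memory = [0] * (max(len(t) for t in taps) - 1)
    out = []
    for u in symbols:
        state = [u] + memory                     # current input followed by past inputs
        out.append(tuple(sum(g * s for g, s in zip(t, state)) % M for t in taps))
        memory = state[:len(memory)]             # shift-register update
    return out

# Example: M = 4 (e.g. a 2-bit quantizer alphabet), two memory-1 generators
print(ring_conv_encode([3, 1, 2, 0], taps=[(1, 2), (1, 1)], M=4))
```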

11.
We propose a joint source-channel decoding approach for multidimensional correlated source signals. A Markov random field (MRF) source model is used which, as an example, captures the residual spatial correlations in an image signal after source encoding. Furthermore, the MRF parameters are selected via an analysis based on extrinsic information transfer charts. Due to the link between MRFs and the Gibbs distribution, the resulting soft-input soft-output (SISO) source decoder can be implemented with very low complexity. We prove that the inclusion of a high-rate block code after the quantization stage allows the MRF-based decoder to yield the maximum average extrinsic information. When channel codes are used for additional error protection, the MRF-based SISO source decoder can be used as the outer constituent decoder in an iterative source-channel decoding scheme. Considering an example of a simple image transmission system, we show that iterative decoding can be successfully employed for recovering the image data, especially when the channel is heavily corrupted.
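The link mentioned in the abstract is the Hammersley-Clifford equivalence between MRFs and Gibbs distributions; in a generic pairwise form (shown only to illustrate why the SISO source decoder can work on small local neighborhoods):

```latex
P(\mathbf{x}) \;=\; \frac{1}{Z}\,\exp\!\Big(-\!\!\sum_{(i,j)\in\mathcal{C}} V_{ij}(x_i, x_j)\Big),
\qquad
P(x_i \mid \mathbf{x}_{\setminus i}) \;=\; P\big(x_i \mid x_j,\ j \in \mathcal{N}(i)\big),
```

so the probability of a pixel value given all other pixels reduces to a function of its local neighborhood \mathcal{N}(i), which keeps the soft-output computation cheap.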

12.
A method for estimating the performance of low-density parity-check (LDPC) codes decoded by hard-decision iterative decoding algorithms on binary symmetric channels (BSCs) is proposed. Based on the enumeration of the smallest weight error patterns that cannot be all corrected by the decoder, this method estimates both the frame error rate (FER) and the bit error rate (BER) of a given LDPC code with very good precision for all crossover probabilities of practical interest. Through a number of examples, we show that the proposed method can be effectively applied to both regular and irregular LDPC codes and to a variety of hard-decision iterative decoding algorithms. Compared with the conventional Monte Carlo simulation, the proposed method has a much smaller computational complexity, particularly for lower error rates.
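Estimators of this kind typically take the following generic form on a BSC with crossover probability p (a standard union-style expression, shown here only to illustrate the idea, not the paper's exact estimator):

```latex
\mathrm{FER}(p) \;\approx\; \sum_{w \ge w_{\min}} N_w\, p^{w} (1-p)^{\,n-w},
\qquad
\mathrm{BER}(p) \;\approx\; \frac{1}{n}\sum_{w \ge w_{\min}} N_w\, \bar{e}_w\, p^{w} (1-p)^{\,n-w},
```

where n is the block length, N_w the number of weight-w channel error patterns that the iterative decoder fails to correct, and \bar{e}_w the average number of residual bit errors such patterns cause; in practice the sums are truncated at the smallest enumerated weights.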

13.
We propose an augmented belief propagation (BP) decoder for low-density parity-check (LDPC) codes which can be utilized on memoryless or intersymbol interference channels. The proposed method is a heuristic algorithm that eliminates a large number of pseudocodewords that can cause nonconvergence in the BP decoder. The augmented decoder is a multistage iterative decoder, where, at each stage, the original channel messages on select symbol nodes are replaced by saturated messages. The key element of the proposed method is the symbol selection process, which is based on the appropriately defined subgraphs of the code graph and/or the reliability of the information received from the channel. We demonstrate by examples that this decoder can be implemented to achieve substantial gains (compared to the standard locally-operating BP decoder) for short LDPC codes decoded on both memoryless and intersymbol interference Gaussian channels. Using the Margulis code example, we also show that the augmented decoder reduces the error floors. Finally, we discuss types of BP decoding errors and relate them to the augmented BP decoder.
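A compact sketch of the augmentation loop on top of a plain min-sum BP decoder (my own illustration: symbol selection here uses only channel reliability, whereas the paper also uses subgraphs of the code graph; H and llr are assumed to be a small dense 0/1 NumPy parity-check matrix and a float LLR array):

```python
import numpy as np

def min_sum_bp(H, llr, max_iter=50):
    """Plain min-sum BP decoding; H is a small dense 0/1 parity-check matrix."""
    m, n = H.shape
    c2v = np.zeros((m, n))                                 # check-to-variable messages
    hard = (llr < 0).astype(int)
    for _ in range(max_iter):
        total = llr + c2v.sum(axis=0)
        v2c = np.where(H == 1, total[None, :] - c2v, 0.0)  # exclude own incoming message
        for i in range(m):
            idx = np.flatnonzero(H[i])
            vals = v2c[i, idx]
            signs = np.where(vals < 0, -1.0, 1.0)
            mags = np.abs(vals)
            order = np.argsort(mags)
            min1 = mags[order[0]]
            min2 = mags[order[1]] if len(idx) > 1 else min1
            sign_prod = np.prod(signs)
            for k, j in enumerate(idx):
                c2v[i, j] = sign_prod * signs[k] * (min2 if k == order[0] else min1)
        hard = ((llr + c2v.sum(axis=0)) < 0).astype(int)
        if not np.any((H @ hard) % 2):                     # all parity checks satisfied
            return hard, True
    return hard, False

def augmented_bp(H, llr, depth=2, sat=25.0):
    """If plain BP fails, saturate the least-reliable channel LLR to +/-sat and
    retry both sign hypotheses, recursing over up to `depth` symbols."""
    word, ok = min_sum_bp(H, llr)
    if ok or depth == 0:
        return word, ok
    j = int(np.argmin(np.abs(llr)))                        # simple reliability-based selection
    for sign in (1.0, -1.0):
        trial = np.array(llr, dtype=float)
        trial[j] = sign * sat
        cand, ok = augmented_bp(H, trial, depth - 1, sat)
        if ok:
            return cand, True
    return word, False
```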

14.
Rateless Codes With Unequal Error Protection Property
In this correspondence, a generalization of rateless codes is proposed. The proposed codes provide unequal error protection (UEP). The asymptotic properties of these codes under iterative decoding are investigated. Moreover, upper and lower bounds on the maximum-likelihood (ML) decoding error probabilities of finite-length LT and Raptor codes are derived for both equal and unequal error protection schemes. Our analysis is further verified with simulations. Simulation results indicate that the proposed codes provide the desired UEP, and we note that the UEP property does not impose a considerable drawback on the overall performance of the codes. Moreover, we discuss how the proposed codes can provide unequal recovery time (URT): given a target bit error rate, different parts of the information bits can be decoded after receiving different amounts of encoded bits, so the information bits can be recovered in a progressive manner. This URT property may be used for sequential data recovery in video/audio streaming.
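A toy sketch of how unequal error protection can be built into LT-style encoding (my own illustration; the degree distribution, weights, and parameter names are assumptions, not the paper's constructions): more-important input symbols are sampled with higher probability when forming each output symbol.

```python
import random

def uep_lt_encode(data, n_output, important_fraction=0.25, bias=3.0,
                  degrees=(1, 2, 3, 4), seed=0):
    """LT-style encoding where the first `important_fraction` of the input symbols
    is sampled `bias` times more often when forming each output symbol.

    data : list of input symbols (ints); returns (neighbor_indices, XOR value) pairs.
    """
    rng = random.Random(seed)
    k = len(data)
    n_imp = max(1, int(important_fraction * k))
    weights = [bias] * n_imp + [1.0] * (k - n_imp)   # non-uniform selection weights
    out = []
    for _ in range(n_output):
        d = rng.choice(degrees)                       # toy degree distribution
        neighbors = set()
        while len(neighbors) < min(d, k):
            neighbors.add(rng.choices(range(k), weights=weights, k=1)[0])
        value = 0
        for j in neighbors:
            value ^= data[j]                          # XOR of the chosen input symbols
        out.append((sorted(neighbors), value))
    return out

# Example: 16 input symbols, 24 encoded symbols; the first 4 inputs are "important"
print(uep_lt_encode(list(range(16)), n_output=24)[:3])
```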

15.
The system under study is a coded asynchronous DS-CDMA system with orthogonal modulation in time-varying Rayleigh fading multipath channels. Information bits are convolutionally encoded, block interleaved, and mapped to M-ary orthogonal Walsh codes, where the last step is essentially a process of block coding. This paper aims at tackling the problem of jointly and iteratively decoding this serially concatenated inner block code and outer convolutional code while estimating frequency-selective fading channels in multiuser environments. The logarithmic maximum a posteriori probability (Log-MAP) criterion is used to derive the iterative decoding schemes. In our system, the soft output from the inner block decoder is used as a priori information for the outer decoder. The soft output from the outer convolutional decoder is used for two purposes. First, it may be fed back to the inner decoder as extrinsic information for the systematic bits of the Walsh codeword. Second, it is utilized for channel estimation and multiuser detection (MUD). We also show that the inner decoding can be accomplished without extrinsic information and, in some cases, e.g., when the system is heavily loaded, yields better performance than decoding with unprocessed extrinsic information. This implies the need for correcting the extrinsic information obtained from the outer decoder. Different schemes are examined and compared numerically, and it is shown that iterative decoding with properly corrected extrinsic information, or with non-extrinsic/extrinsic adaptation, enables the system to operate reliably in the presence of severe multiuser interference, especially when the inner decoding is assisted by decision-directed channel estimation and interference cancellation techniques.

16.
Concatenated coding schemes consist of the combination of two or more simple constituent encoders and interleavers. The parallel concatenation known as the "turbo code" has been shown to yield remarkable coding gains close to theoretical limits, while admitting a relatively simple iterative decoding technique. The recently proposed serial concatenation of interleaved codes may offer performance superior to that of turbo codes. In both coding schemes, the core of the iterative decoding structure is a soft-input soft-output (SISO) a posteriori probability (APP) module. In this letter, we describe the SISO APP module that updates the APPs corresponding to the input and output bits of a code, and show how to embed it into an iterative decoder for a new hybrid concatenation of three codes, fully exploiting the benefits of the proposed SISO APP module.

17.
18.
Almost all the probabilistic decoding algorithms known for convolutional codes perform decoding without prior knowledge of the error locations. Here, we introduce a novel maximum-likelihood decoding algorithm for a new class of convolutional codes, the state transparent convolutional (STC) codes, whose properties make error detection and error locating possible prior to error correction. Hence, their decoding algorithm, termed here the STC decoder, allows an error-correcting algorithm to be applied only to the erroneous portions of the received sequence, referred to here as error spans (ESPs). We further prove that the proposed decoder, which locates the ESPs and applies the Viterbi algorithm (VA) only to these portions, always yields a decoded path in the trellis identical to the one generated by the Viterbi decoder (VD). Because the STC decoder applies the VA only to the ESPs, the fraction of single-stage (per codeword) trellis decoding it performs is considerably smaller than for the VD, which operates on the entire received sequence; this reduction is especially pronounced on fading channels, where the erroneous codewords are mostly clustered. Furthermore, by applying the VA only to the ESPs, the resulting algorithm can be viewed as a new formulation of the VD for STC codes that, analogous to block decoding algorithms, provides pre-decoding error detection and error locating capabilities while performing less single-stage trellis decoding.

19.
We consider the iterative decoding of generalized low-density (GLD) parity-check codes where, rather than employ an optimal subcode decoder, a Chase (1972) algorithm decoder more commonly associated with "turbo product codes" is used. GLD codes are low-density graph codes in which the constraint nodes are other than single parity-checks. For extended Hamming-based GLD codes, we use bit error rates derived by simulation to demonstrate this new strategy to be successful at higher code rates. For long block lengths, good performance close to capacity is possible with decoding costs reduced further since the Chase decoder employed is an efficient implementation.
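A sketch of a Chase-2 style wrapper around a hard-decision constituent decoder (my own illustration; `hard_decoder` stands in for the extended-Hamming subcode decoder and is an assumed callback, not an API from the paper):

```python
import itertools
import numpy as np

def chase2_decode(llr, hard_decoder, t=2):
    """Chase-2 style soft decoding around a hard-decision subcode decoder.

    llr          : channel LLRs for one constituent codeword (positive favours bit 0)
    hard_decoder : callback mapping a hard 0/1 vector to a valid codeword, or None
    t            : number of least-reliable positions to perturb
    """
    llr = np.asarray(llr, dtype=float)
    hard = (llr < 0).astype(int)
    weak = np.argsort(np.abs(llr))[:t]                  # least reliable positions
    best, best_metric = None, -np.inf
    for flips in itertools.product((0, 1), repeat=t):   # all 2^t test patterns
        trial = hard.copy()
        trial[weak] ^= np.array(flips)
        cand = hard_decoder(trial)
        if cand is None:
            continue
        # correlation metric with the 0 -> +1, 1 -> -1 mapping
        metric = float(np.sum((1 - 2 * np.asarray(cand)) * llr))
        if metric > best_metric:
            best, best_metric = cand, metric
    return best
```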

20.
For the serial concatenation of RS and LDPC codes, a joint iterative decoding method based on adaptive belief propagation (ABP) is proposed. During decoding, the soft information produced by the LDPC belief-propagation decoder is used as the input of the RS ABP decoder; after a number of decoding iterations, the soft information produced by the RS decoder is in turn fed back as the input of the LDPC decoder. Through repeated exchanges of information between the soft-input soft-output RS decoder and the LDPC decoder, the decoding performance improves considerably. For LDPC codes of moderate length, this concatenated scheme effectively mitigates the influence of short cycles and eliminates the error floor. Simulation results show that, on the AWGN channel, this ABP-based joint iterative decoding scheme for concatenated RS and LDPC codes achieves a gain of about 0.8 dB.
