Similar Literature
20 similar documents found (search time: 15 ms)
1.
In this paper, we consider the problem of decoding a predictively encoded signal over a noisy channel when there is residual redundancy (captured by a γ-order Markov model) in the sequence of transmitted data. Our objective is to minimize the mean-squared error (MSE) in the reconstruction of the original signal (the input to the predictive source coder). The problem is formulated and solved through minimum mean-squared error (MMSE) decoding of a sequence of samples over a memoryless noisy channel. Related previous works include several maximum a posteriori (MAP) and MMSE-based decoders. The MAP-based approaches are suboptimal when the performance criterion is the MSE. On the other hand, the previously known MMSE-based approaches are suboptimal because they are designed to efficiently reconstruct the received data samples (the prediction residues) rather than the original signal. The proposed scheme models the source-coder-produced symbols and their redundancy with a trellis structure. Methods are presented to optimize the solutions in terms of complexity. Numerical results and comparisons are provided, which demonstrate the effectiveness of the proposed techniques.
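MMSE decoding, as used throughout the abstract above, amounts to computing the posterior mean of the reconstruction. The following is a minimal sketch, not the paper's trellis decoder: the toy 2-bit quantizer, the skewed prior, and the channel parameters are invented for illustration, assuming hard-decision reception over a binary symmetric channel (BSC).

```python
import numpy as np

def mmse_reconstruct(prior, codewords, values, received, eps):
    """Posterior-mean (MMSE) reconstruction of one quantized sample
    sent as a binary codeword over a BSC with crossover prob eps.

    prior     -- prior probability of each quantizer index
                 (this is where residual redundancy enters)
    codewords -- (K, n) binary codeword per index
    values    -- (K,) reconstruction level per index
    received  -- (n,) received hard-decision bits
    """
    n = received.size
    # Memoryless-channel likelihood of the received bits per candidate.
    dists = (codewords != received).sum(axis=1)
    lik = eps**dists * (1.0 - eps)**(n - dists)
    post = prior * lik
    post /= post.sum()
    # MMSE estimate = posterior mean of the reconstruction levels.
    return float(post @ values), post

# Toy 2-bit quantizer with a nonuniform (redundant) index prior.
codewords = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
values = np.array([-1.5, -0.5, 0.5, 1.5])
prior = np.array([0.1, 0.4, 0.4, 0.1])
est, post = mmse_reconstruct(prior, codewords, values,
                             np.array([0, 1]), eps=0.1)
```

With a uniform prior the channel likelihoods alone would drive the estimate; the skewed prior is what lets the decoder exploit residual redundancy.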

2.
Soft-decision-feedback MAP decoders are developed for joint source/channel decoding (JSCD) that uses the residual redundancy in two-dimensional sources. The source redundancy is described by a second-order Markov model which is made available to the receiver for row-by-row decoding, wherein the output for one row is used to aid the decoding of the next row. Performance can be improved by generalizing so as to increase the vertical depth of the decoder. This is called sheet decoding, and it entails generalizing trellis decoding of one-dimensional data to trellis decoding of two-dimensional (2-D) data. The proposed soft-decision-feedback sheet decoder is based on the Bahl algorithm, and it is compared to a hard-decision-feedback sheet decoder based on the Viterbi algorithm. The method is applied to 3-bit DPCM picture transmission over a binary symmetric channel, and it is found that the soft-decision-feedback decoder with vertical depth V performs approximately as well as the hard-decision-feedback decoder with vertical depth V+1. Because the computational requirement of the decoders depends exponentially on the vertical depth, the soft-decision-feedback decoder offers a significant reduction in complexity. For standard monochrome Lena, at a channel bit error rate of 0.05, the V=1 and V=2 soft-decision-feedback decoder JSCD gains in RSNR are 5.0 and 6.3 dB, respectively.

3.
We present a novel symbol-based soft-input a posteriori probability (APP) decoder for packetized variable-length encoded source indexes transmitted over wireless channels, where the residual redundancy after source encoding is exploited for error protection. In combination with a mean-square or maximum APP estimation of the reconstructed source data, the whole decoding process is close to optimal. Furthermore, solutions for the proposed APP decoder with reduced complexity are discussed and compared to the near-optimal solution. When, in addition, channel codes are employed for protecting the variable-length encoded data, an iterative source-channel decoder can be obtained in the same way as for serially concatenated codes, where the proposed APP source decoder then represents one of the two constituent decoders. The simulation results show that this iterative decoding technique leads to substantial error protection for variable-length encoded correlated source signals, especially when they are transmitted over highly corrupted channels.

4.
An approach to optimal soft decoding for vector quantization (VQ) over a code-division multiple-access (CDMA) channel is presented. The decoder of the system is soft in the sense that the unquantized outputs of the matched filters are utilized directly for decoding (no decisions are taken), and optimal according to the minimum mean-squared error (MMSE) criterion. The derived decoder utilizes a priori source information and knowledge of the channel characteristics to combat channel noise and multiuser interference in an optimal fashion. Hadamard transform representations for the user VQs are employed in the derivation and for the implementation of the decoder. The advantages of this approach are emphasized. Suboptimal versions of the optimal decoder are also considered. Simulations show the soft decoders to outperform decoding based on maximum-likelihood (ML) multiuser detection. Furthermore, the suboptimal versions are demonstrated to perform close to the optimal, at a significantly lower complexity in the number of users. The introduced decoders are, moreover, shown to exhibit near-far resistance. Simulations also demonstrate that combined source-channel encoding, with joint source-channel and multiuser decoding, can significantly outperform a tandem source-channel coding scheme employing multiuser detection plus table-lookup source decoding.

5.
In this paper, a doubly-iterative linear receiver, equipped with a soft-information-aided frequency domain minimum mean-squared error (MMSE) equalizer, is proposed for the combined equalization and decoding of coded continuous phase modulation (CPM) signals over long multipath fading channels. In the proposed receiver architecture, the front-end frequency domain equalizer (FDE) is followed by the soft-input, soft-output (SISO) CPM demodulator and channel decoder modules. The receiver employs double turbo processing by performing back-end demodulation/decoding iterations per each equalization iteration to improve the a priori information for the front-end FDE. As shown by the computational complexity analysis and simulations, this process provides not only a significant reduction in the overall computational complexity, but also a performance improvement over the previously proposed iterative and noniterative MMSE receivers.

6.
Joint source-channel coding is an effective approach for the design of bandwidth-efficient and error-resilient communication systems with manageable complexity. An interesting research direction within this framework is the design of source decoders that exploit the residual redundancy for effective signal reconstruction at the receiver. Such source decoders are expected to replace the traditionally heuristic error concealment units that are elements of most multimedia communication systems. In this paper, we consider the reconstruction of signals encoded with a multistage vector quantizer (MSVQ) and transmitted over a noisy communications channel. The MSVQ maintains a moderate complexity and, due to its successive refinement feature, is a suitable choice for the design of layered (progressive) source codes. An approximate minimum mean-squared error source decoder for MSVQ is presented, and its application to the reconstruction of the linear predictive coefficient (LPC) parameters in the mixed excitation linear prediction (MELP) speech codec is analyzed. MELP is a low-rate standard speech codec suitable for bandwidth-limited communications and wireless applications. Numerical results demonstrate the effectiveness of the proposed schemes.

7.
Improved decoding algorithms for binary LDPC codes mainly aim either to improve hard-decision performance or to reduce the computational complexity of soft-decision decoding. This paper applies a Gauss-Markov random field (MRF) model to estimate the source parameters and to apply a log-likelihood ratio correction to the bit sequence received at the channel decoder, incorporating the residual redundancy of the source during decoding to strengthen the decoder's error-correction capability. The source-estimation correction coefficient is adaptive and is controlled by the bit error rate. With unchanged computational complexity, the MRF-based LDPC decoding algorithm effectively improves decoding performance and lowers the bit error rate.
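The LLR-correction step described above might be sketched as follows. This is a toy illustration: the linear blend of channel and source LLRs and the BER-driven weight (including the `ber_ref` threshold) are illustrative assumptions, not the paper's exact adaptation rule.

```python
import numpy as np

def corrected_llrs(llr_channel, llr_source, ber_est, ber_ref=0.05):
    """Blend channel LLRs with a source-redundancy LLR term before
    LDPC decoding. The weight grows with the estimated bit error
    rate, so the source prior matters more on noisier channels.
    (Linear blend and ber_ref are illustrative assumptions.)"""
    w = min(1.0, ber_est / ber_ref)  # adaptive correction coefficient
    return llr_channel + w * llr_source

llr_ch = np.array([2.0, -0.3, 0.1])   # from the channel
llr_src = np.array([0.5, 0.5, -1.0])  # from the MRF source model
out = corrected_llrs(llr_ch, llr_src, ber_est=0.025)
```

The corrected LLRs would then be fed to a standard belief-propagation LDPC decoder unchanged, which is why no modification of the decoder architecture is needed.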

8.
In many applications, an uncompressed source stream is systematically encoded by a channel code (which ignores the source redundancy) for transmission over a discrete memoryless channel. The decoder knows the channel and the code but does not know the source statistics. This paper proposes several universal channel decoders that take advantage of the source redundancy without requiring prior knowledge of its statistics.

9.
Minimum mean square error (MMSE) decoding in a large-scale sensor network which employs distributed quantization is considered. Given that the computational complexity of the optimal decoder is exponential in the network size, we present a framework based on Bayesian networks for designing a near-optimal decoder whose complexity is only linear in network size (hence scalable). In this approach, a complexity-constrained factor graph, which approximately represents the prior joint distribution of the sensor outputs, is obtained by constructing an equivalent Bayesian network using the maximum likelihood (ML) criterion. The decoder executes the sum-product algorithm on the simplified factor graph. Our simulation results have shown that the scalable decoders constructed using the proposed approach perform close to optimal, with both Gaussian and non-Gaussian sensor data.

10.
In this paper, a doubly iterative receiver is proposed for joint turbo equalization, demodulation, and decoding of coded binary continuous-phase modulation (CPM) in multipath fading channels. The proposed receiver consists of three soft-input soft-output (SISO) blocks: a front-end soft-information-aided minimum mean square error (MMSE) equalizer followed by a CPM demodulator and a back-end channel decoder. The MMSE equalizer, combined with an a priori soft-interference canceler (SIC) and an a posteriori probability mapper, forms a SISO processor suitable for iterative processing that considers discrete-time CPM symbols which belong to a finite alphabet. The SISO CPM demodulator and the SISO channel decoder are both implemented by the a posteriori probability algorithm. The proposed doubly iterative receiver has a central demodulator coupled with both the front-end equalizer and the back-end channel decoder. A few back-end demodulation/decoding iterations are performed for each equalization iteration so as to improve the a priori information for the equalizer. As presented in the extrinsic information transfer (EXIT) chart analysis and simulation results for different multipath fading channels, this provides not only faster convergence to low bit error rates, but also lower computational complexity.

11.
In previous work on source coding over noisy channels it was recognized that when the source has memory, there is typically "residual redundancy" between the discrete symbols produced by the encoder, which can be capitalized upon by the decoder to improve the overall quantizer performance. Sayood and Borkenhagen (1991) and Phamdo and Farvardin (see IEEE Trans. Inform. Theory, vol. 40, p. 186-93, 1994) proposed "detectors" at the decoder which optimize suitable criteria in order to estimate the sequence of transmitted symbols. Phamdo and Farvardin also proposed an instantaneous approximate minimum mean-squared error (IAMMSE) decoder. These methods provide a performance advantage over conventional systems, but the maximum a posteriori (MAP) structure is suboptimal, while the IAMMSE decoder makes limited use of the redundancy. Alternatively, combining aspects of both approaches, we propose a sequence-based approximate MMSE (SAMMSE) decoder. For a Markovian sequence of encoder-produced symbols and a discrete memoryless channel, we approximate the expected distortion at the decoder under the constraint of fixed decoder complexity. For this simplified cost, the optimal decoder computes expected values based on a discrete hidden Markov model, using the well-known forward/backward (F/B) algorithm. Performance gains for this scheme are demonstrated over previous techniques in quantizing Gauss-Markov sources over a range of noisy channel conditions. Moreover, a constrained-delay version is also suggested.
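The forward/backward (F/B) computation referred to above can be sketched generically as follows. This is a minimal normalized F/B pass over a first-order Markov chain observed through a memoryless channel; the two-state source and the likelihood values are invented for illustration and are not the paper's Gauss-Markov setup.

```python
import numpy as np

def forward_backward(pi, A, lik):
    """Posterior symbol probabilities for a first-order Markov source
    observed through a memoryless channel.

    pi  -- (S,) initial state distribution
    A   -- (S, S) transitions, A[i, j] = P(x_{t+1}=j | x_t=i)
    lik -- (T, S) channel likelihoods P(y_t | x_t=s)
    """
    T, S = lik.shape
    alpha = np.zeros((T, S))
    beta = np.zeros((T, S))
    alpha[0] = pi * lik[0]
    alpha[0] /= alpha[0].sum()
    for t in range(1, T):                      # forward pass
        alpha[t] = (alpha[t - 1] @ A) * lik[t]
        alpha[t] /= alpha[t].sum()             # normalize for stability
    beta[-1] = 1.0
    for t in range(T - 2, -1, -1):             # backward pass
        beta[t] = A @ (lik[t + 1] * beta[t + 1])
        beta[t] /= beta[t].sum()
    post = alpha * beta
    return post / post.sum(axis=1, keepdims=True)

# Persistent two-state source; the middle observation is uninformative,
# so the posterior there is filled in by the Markov prior.
pi = np.array([0.5, 0.5])
A = np.array([[0.95, 0.05], [0.05, 0.95]])
lik = np.array([[0.9, 0.1], [0.5, 0.5], [0.9, 0.1]])
post = forward_backward(pi, A, lik)
```

An MMSE decoder of the kind described above would then take the posterior-weighted mean of the codebook reconstruction values at each time step.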

12.
Finite-state vector quantization (FSVQ) over a noisy channel is studied. A major drawback of a finite-state decoder is its inability to track the encoder in the presence of channel noise. In order to overcome this problem, we propose a nontracking decoder which directly estimates the code vectors used by a finite-state encoder. The design of channel-matched finite-state vector quantizers for noisy channels, using an iterative scheme resembling the generalized Lloyd algorithm, is also investigated. Simulation results based on encoding a Gauss-Markov source over a memoryless Gaussian channel show that the proposed decoder exhibits graceful degradation of performance with increasing channel noise, as compared with a finite-state decoder. Also, the channel-matched finite-state vector quantizers are shown to outperform channel-optimized vector quantizers having the same vector dimension and rate. However, the nontracking decoder used in the channel-matched finite-state quantizer has a higher computational complexity, compared with a channel-optimized vector-quantizer decoder. Thus, if they are allowed to have the same overall complexity (encoding and decoding), the channel-optimized vector quantizer can use a longer encoding delay and achieve similar or better performance. Finally, an example of using the channel-matched finite-state quantizer as a backward-adaptive quantizer for nonstationary signals is also presented.

13.
Although Low-Density Parity-Check (LDPC) codes perform admirably for large block sizes, being mostly resilient to low channel SNR and to errors in channel equalization, real-time operation and low computational effort require small and medium-sized codes, which tend to be affected by these two factors. For these small to medium codes, a method for designing efficient regular codes is presented, and a new technique is proposed for reducing the dependency on correct channel equalization without much change to the inner workings or architecture of existing LDPC decoders. This goal is achieved by an improved intrinsic Log-Likelihood Ratio (LLR) estimator in the LDPC decoder (the ILE-Decoder), which uses only decoder-side information gathered during standard LDPC decoding. This information is used to improve the channel parameter estimates, improving the reliability of the code correction while reducing the number of iterations required for successful decoding. Methods for fast encoding and decoding of LDPC codes are presented, highlighting the importance of ensuring low encoding/decoding latency while maintaining high throughput. The assumptions and rules that govern the estimation process via subcarrier corrected-bit accounting are presented, and the Bayesian inference estimation process is detailed. This scheme is suitable for multicarrier communications, such as OFDM. Simulation results in a PLC-like environment confirm the good performance of the proposed LDPC coder/decoder.

14.
A parallel MAP algorithm for low latency turbo decoding
To reduce the computational decoding delay of turbo codes, we propose a parallel algorithm for maximum a posteriori (MAP) decoders. We divide a whole noisy codeword into sub-blocks and use multiple processors to perform sub-block MAP decoding in parallel. Unlike the previously proposed approach with sub-block overlapping, we utilize the forward and backward variables computed in the previous iteration to provide boundary distributions for each sub-block MAP decoder. Our scheme achieves asymptotically optimal performance in the sense that the BER is the same as that of the regular turbo decoder.
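The boundary-reuse idea can be illustrated on the forward recursion alone. This is a simplified sketch: the actual decoder also propagates backward variables and interleaves this with turbo iterations, and all parameters below are invented for the demo. Each sub-block starts from the previous iteration's forward variable at its boundary, so the blocks can be processed independently (and hence in parallel); after enough iterations the sub-block results match the sequential recursion exactly.

```python
import numpy as np

def forward_full(pi, A, lik):
    """Standard sequential normalized forward recursion."""
    T, S = lik.shape
    alpha = np.zeros((T, S))
    alpha[0] = pi * lik[0]
    alpha[0] /= alpha[0].sum()
    for t in range(1, T):
        alpha[t] = (alpha[t - 1] @ A) * lik[t]
        alpha[t] /= alpha[t].sum()
    return alpha

def forward_subblocks(pi, A, lik, n_blocks=2, n_iter=3):
    """Sub-block forward recursion. Each block could run on its own
    processor; its boundary distribution comes from the previous
    iteration's result (uniform on the first iteration)."""
    T, S = lik.shape
    blocks = np.array_split(np.arange(T), n_blocks)
    alpha = np.full((T, S), 1.0 / S)   # uniform initial boundaries
    for _ in range(n_iter):
        new = alpha.copy()
        for blk in blocks:  # blocks are independent given old alpha
            t0 = blk[0]
            prev = pi if t0 == 0 else alpha[t0 - 1] @ A
            new[t0] = prev * lik[t0]
            new[t0] /= new[t0].sum()
            for t in blk[1:]:
                new[t] = (new[t - 1] @ A) * lik[t]
                new[t] /= new[t].sum()
        alpha = new
    return alpha

pi = np.array([0.5, 0.5])
A = np.array([[0.9, 0.1], [0.2, 0.8]])
lik = np.array([[0.7, 0.3], [0.4, 0.6], [0.8, 0.2],
                [0.3, 0.7], [0.6, 0.4], [0.5, 0.5]])
```

With two blocks, the first iteration makes block 0 exact, and the second iteration propagates the now-exact boundary into block 1, so the parallel result converges to the sequential one.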

15.
This paper proposes a method that combines an RDPCM joint source-channel coding system exploiting residual redundancy with minimum mean-squared error estimation. First, the SOVA algorithm is modified for the joint coding system so that bit likelihoods exploiting the residual redundancy are obtained at the receiver. These posterior values are then used to perform a minimum mean-squared error reconstruction of the output symbols of the predictive source encoder before source decoding, reducing the distortion introduced by obtaining symbol values through hard decisions. Simulation results show that the algorithm achieves a gain of up to about 2 dB at low signal-to-noise ratios.

16.
We propose new decoders for decoding convolutional codes over finite-state channels. These decoders are sequential and utilize the information about the channel state sequence contained in the channel output sequence. The performance of these decoders is evaluated by simulation and compared to the performance of memoryless decoders with and without interleaving. Our results show that the performance of these decoders is good whenever the channel statistics are such that the joint estimate of the channel state sequence and the channel input sequence is good, as, for example, when the channel is bursty. In these cases, using even a partial-search decoder such as the Fano decoder over the appropriate trellis is nearly optimal. However, when the information between the output sequence and the sequence of channel states and inputs diminishes, the memoryless decoder with interleaving outperforms even the optimal decoder which knows the channel state.

17.
This paper studies an application of turbo codes to compressed image/video transmission and presents an approach to improving error control performance through joint channel and source decoding (JCSD). The proposed approach to JCSD includes error-free source information feedback, error-detected source information feedback, and the use of channel soft values (CSV) for source signal postprocessing. These feedback schemes are based on a modification of the extrinsic information passed between the constituent maximum a posteriori probability (MAP) decoders in a turbo decoder. The modification is made according to the source information obtained from the source signal processor. The CSVs are considered as reliability information on the hard decisions and are further used for error recovery in the reconstructed signals. Applications of this joint decoding technique to different visual source coding schemes, such as spatial vector quantization, JPEG coding, and MPEG coding, are examined. Experimental results show that up to 0.6 dB of channel SNR reduction can be achieved by the joint decoder without increasing computational cost for various channel coding rates.

18.
Joint source-channel techniques based on Variable-Length Coding (VLC) have been widely used. One of the best-known VLC decoders is the optimal Maximum A Posteriori (MAP) decoder based on directed graph search and soft-input theory. Due to the high complexity of directed graph search, many reduced-complexity methods have been proposed. In this paper, we propose two error-restricted algorithms for fast MAP decoding of VLC and compare them with three existing methods. Simulation results show that our methods outperform existing methods in terms of decoding complexity, with nearly the same Symbol Error Rate (SER) performance as optimal decoding. With larger codeword sets, the complexity advantage of our methods is even more pronounced.

19.
Utilization of redundancy left in a channel coded sequence can improve channel decoding performance. Stronger improvement can usually be achieved with nonsystematic encoding. However, nonsystematic codes recently proposed for this problem are not robust to the statistical parameters governing a sequence and thus should not be used without prior knowledge of these parameters. In this work, decoders of nonsystematic quick-look-in turbo codes are adapted to extract and exploit redundancy left in coded data to improve channel decoding performance. Methods, based on universal compression and denoising, for extracting the governing statistical parameters for various source models are integrated into the channel decoder by also taking advantage of the code structure. Simulation results demonstrate significant performance gains over standard systematic codes that can be achieved with the new methods for a wide range of statistical models and governing parameters. In many cases, performance almost as good as that with perfect knowledge of the governing parameters is achievable.

20.
Several recent publications have shown that joint source-channel decoding could be a powerful technique to take advantage of residual source redundancy for fixed- and variable-length source codes. This letter gives an in-depth analysis of a low-complexity method recently proposed by Guivarch et al., where the redundancy left by a Huffman encoder is used at a bit level in the channel decoder to improve its performance. Several simulation results are presented, showing for two first-order Markov sources of different sizes that using a priori knowledge of the source statistics yields a significant improvement, either with a Viterbi channel decoder or with a turbo decoder.


Copyright © 北京勤云科技发展有限公司 (Beijing Qinyun Technology Development Co., Ltd.). 京ICP备09084417号