Similar Articles
20 similar articles retrieved (search time: 62 ms)
1.
A general decoding method for cyclic codes is presented which gives promise of substantially reducing the complexity of decoders at the cost of a modest increase in decoding time (or delay). Significant reductions in decoder complexity for binary cyclic finite-geometry codes are demonstrated.

2.
We present a framework for the analysis of the decoding delay in multiview video coding (MVC). We show that in real-time applications, an accurate estimation of the decoding delay is essential to achieve a minimum communication latency. As opposed to single-view codecs, the complexity of the multiview prediction structure and the parallel decoding of several views require a systematic analysis of this decoding delay, which we solve using graph theory and a model of the decoder hardware architecture. Our framework assumes a decoder implementation on general-purpose multi-core processors with multi-threading capabilities. For this hardware model, we show that frame processing times depend on the computational load of the decoder, and we provide an iterative algorithm to compute frame processing times and decoding delay jointly. Finally, we show that decoding delay analysis can be applied to design decoders with the objective of minimizing the communication latency of the MVC system.
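
To make the dependency-graph view concrete, here is a minimal sketch, not the paper's iterative algorithm: a simple list scheduler over an assumed frame-dependency graph on an assumed two-core decoder. The frame names, processing times, core count, and scheduling policy are illustrative placeholders only.

```python
# Toy sketch: decoding-delay estimation for a multiview prediction structure.
# Frame names, per-frame processing times (ms), and the 2-core assumption are
# invented for illustration; the paper derives processing times from the
# decoder's computational load and solves the problem iteratively.
deps = {
    "V0/I0": [], "V0/P1": ["V0/I0"],
    "V1/P0": ["V0/I0"], "V1/B1": ["V1/P0", "V0/P1"],
}
proc_time = {"V0/I0": 4.0, "V0/P1": 3.0, "V1/P0": 3.0, "V1/B1": 2.0}
CORES = 2  # multi-core, multi-threaded decoder model

def decoding_completion_times(deps, proc_time, cores):
    """List-schedule frames on `cores` processors respecting prediction
    dependencies; return the time each frame finishes decoding."""
    remaining = {f: len(d) for f, d in deps.items()}
    children = {f: [] for f in deps}
    for f, d in deps.items():
        for parent in d:
            children[parent].append(f)
    ready = [f for f, r in remaining.items() if r == 0]
    core_free = [0.0] * cores           # time each core becomes available
    dep_done = {f: 0.0 for f in deps}   # time all of a frame's references are decoded
    finish = {}
    while ready:
        f = ready.pop(0)
        i = min(range(cores), key=lambda c: core_free[c])
        start = max(core_free[i], dep_done[f])
        finish[f] = start + proc_time[f]
        core_free[i] = finish[f]
        for child in children[f]:
            dep_done[child] = max(dep_done[child], finish[f])
            remaining[child] -= 1
            if remaining[child] == 0:
                ready.append(child)
    return finish

print(decoding_completion_times(deps, proc_time, CORES))
```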

3.
This paper considers a class of iterative message-passing decoders for low-density parity-check codes in which the decoder can choose its decoding rule from a set of decoding algorithms at each iteration. Each available decoding algorithm may have different per-iteration computation time and performance. With an appropriate choice of algorithm at each iteration, overall decoding latency can be reduced significantly, compared with standard decoding methods. Such a decoder is called a gear-shift decoder because it changes its decoding rule (shifts gears) in order to guarantee both convergence and maximum decoding speed (minimum decoding latency). Using extrinsic information transfer charts, the problem of finding the optimum (minimum decoding latency) gear-shift decoder is formulated as a computationally tractable dynamic program. The optimum gear-shift decoder is proved to have a decoding threshold equal to or better than the best decoding threshold among those of the available algorithms. In addition to speeding up software decoder implementations, gear-shift decoding can be applied to optimize a pipelined hardware decoder, minimizing hardware cost for a given decoder throughput.
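
A small dynamic-programming sketch in the spirit of the abstract: the decoder state is a discretized extrinsic-information level, and the DP picks, for every state, the decoding rule that minimizes the remaining time to a target. The two "algorithms", their per-iteration costs, and their transfer curves below are invented placeholders, not measured EXIT curves.

```python
# Gear-shift scheduling as a backward dynamic program over a discretized
# extrinsic-information state. ALGORITHMS maps a rule name to a (cost per
# iteration, EXIT-like transfer curve) pair; both entries are made up.
import bisect

ALGORITHMS = {
    "min-sum":     (1.0, lambda i: min(1.0, 0.15 + 0.95 * i)),  # fast, weaker
    "sum-product": (3.0, lambda i: min(1.0, 0.35 + 0.80 * i)),  # slow, stronger
}
TARGET = 0.999                            # state at which decoding is declared done
GRID = [k / 1000 for k in range(1001)]    # discretized mutual-information grid

def gear_shift_schedule(algorithms, grid, target):
    """Return cost-to-go and the best rule to apply from every grid state."""
    cost_to_go = [float("inf")] * len(grid)
    best_rule = [None] * len(grid)
    for i in range(len(grid) - 1, -1, -1):
        if grid[i] >= target:
            cost_to_go[i] = 0.0
            continue
        for name, (cost, f) in algorithms.items():
            out = f(grid[i])
            j = bisect.bisect_right(grid, out) - 1   # round the new state down
            if j <= i:                               # no progress at this resolution
                continue
            if cost + cost_to_go[j] < cost_to_go[i]:
                cost_to_go[i] = cost + cost_to_go[j]
                best_rule[i] = name
    return cost_to_go, best_rule

cost_to_go, best_rule = gear_shift_schedule(ALGORITHMS, GRID, TARGET)
print("minimum decoding time from I=0:", cost_to_go[0])
print("first gear to use:", best_rule[0])
```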

4.
This paper proposes a stochastic framework for dynamic modeling and analysis of turbo decoding. By modeling the input and output signals of a turbo decoder as random processes, we prove that these signals become ergodic when the block size of the code becomes very large. This basic result allows us to easily model and compute the statistics of the signals in a turbo decoder. Using the ergodicity result and the fact that a sum of lognormal distributions is well approximated using a lognormal distribution, we show that the input-output signals in a turbo decoder, when expressed using log-likelihood ratios (LLRs), are well approximated using Gaussian distributions. Combining the two results above, we can model a turbo decoder using two input parameters and two output parameters (corresponding to the means and variances of the input and output signals). Using this model, we are able to reveal the whole dynamics of a decoding process. We have discovered that a typical decoding process is much more intricate than previously known, involving two regions of attraction, several fixed points, and a stable equilibrium manifold at which all decoding trajectories converge. Some applications of the stochastic framework are also discussed, including a fast decoding scheme.
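
A toy illustration of the two-parameter (mean, variance) model: under the symmetric-Gaussian assumption, the LLRs are summarized by their mean (with variance equal to twice the mean), and one decoding iteration becomes a map on these statistics. The iteration map below is an invented placeholder, not the one derived in the paper; it only illustrates the fixed-point, region-of-attraction view of a decoding trajectory.

```python
# Track the (mean, variance) pair of the extrinsic LLRs through iterations.
# iteration_map is a hypothetical placeholder transfer behavior.
def iteration_map(llr_mean, channel_param):
    """One hypothetical decoding iteration acting on the LLR mean."""
    mean_out = channel_param + llr_mean ** 1.5     # placeholder transfer behavior
    return mean_out, 2.0 * mean_out                # consistency: variance = 2 * mean

def decoding_trajectory(channel_param, iterations):
    mean, var = 0.0, 0.0                           # no a priori information at start
    trajectory = [(mean, var)]
    for _ in range(iterations):
        mean, var = iteration_map(mean, channel_param)
        trajectory.append((mean, var))
    return trajectory

for channel_param in (0.05, 1.0):                  # stuck at a low fixed point vs. growing
    means = [round(m, 3) for m, _ in decoding_trajectory(channel_param, iterations=8)]
    print(f"channel parameter {channel_param}: LLR-mean trajectory {means}")
```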

5.
Almost all known probabilistic decoding algorithms for convolutional codes perform decoding without prior knowledge of the error locations. Here, we introduce a novel maximum-likelihood decoding algorithm for a new class of convolutional codes, named state transparent convolutional (STC) codes, whose properties make error detection and error locating possible prior to error correction. Their decoding algorithm, termed here the STC decoder, therefore allows an error-correcting algorithm to be applied only to the erroneous portions of the received sequence, referred to here as the error spans (ESPs). We further prove that the proposed decoder, which locates the ESPs and applies the Viterbi algorithm (VA) only to these portions, always yields a decoded path in the trellis identical to the one generated by the Viterbi decoder (VD). Because the STC decoder applies the VA only to the ESPs, the fraction of single-stage (per codeword) trellis decoding it performs is considerably smaller than that of the VD, which is applied to the entire received sequence; this reduction is overwhelming for fading channels, where the erroneous codewords are mostly clustered. Furthermore, by applying the VA only to the ESPs, the resulting algorithm can be viewed as a new formulation of the VD for STC codes that, analogous to block decoding algorithms, provides pre-decoding error detection and error locating capabilities while performing less single-stage trellis decoding.

6.
In this paper, we present an iterative soft-decision decoding algorithm for Reed-Solomon (RS) codes offering both complexity and performance advantages over previously known decoding algorithms. Our algorithm is a list decoding algorithm which combines two powerful soft-decision decoding techniques which were previously regarded in the literature as competitive, namely, the Koetter-Vardy algebraic soft-decision decoding algorithm and belief-propagation based on adaptive parity-check matrices, recently proposed by Jiang and Narayanan. Building on the Jiang-Narayanan algorithm, we present a belief-propagation-based algorithm with a significant reduction in computational complexity. We introduce the concept of using a belief-propagation-based decoder to enhance the soft-input information prior to decoding with an algebraic soft-decision decoder. Our algorithm can also be viewed as an interpolation multiplicity assignment scheme for algebraic soft-decision decoding of RS codes.

7.
A new decoding method with a decoder is used in an open-loop all-optical chaotic communication system under strong injection conditions. The performance of the new decoding method is numerically investigated by comparing it with the common decoding method without a decoder. For the new decoding method, two cases are analyzed, depending on whether or not the output of the decoder is adjusted by its input to the receiver. The results indicate that the decoding quality can be improved by this adjustment, while the injection strength of the decoder can be restricted to a certain range. The adjusted new decoding method with a decoder achieves better decoding quality than the decoding method without a decoder when the bit rate of the message is under 5 Gb/s; however, a stronger injection for the receiver is needed. Moreover, the new decoding method broadens the range of injection strength acceptable for good decoding quality. Different message encryption techniques are tested, and the result is similar to that of the common decoding method, indicating that a message encoded using Chaotic Modulation (CM) is best recovered by the new decoding method owing to the essence of this encryption technique.

8.
In this paper, we propose and present implementation results of a high-speed turbo decoding algorithm. The latency caused by (de)interleaving and iterative decoding in a conventional maximum a posteriori turbo decoder can be dramatically reduced with the proposed design. The latency reduction comes from the combination of the radix-4, center-to-top, parallel decoding, and early-stop algorithms. This reduced latency enables the use of the turbo decoder as a forward error correction scheme in real-time wireless communication services. The proposed scheme results in a slight degradation in bit error rate performance for large block sizes because the effective interleaver size in a radix-4 implementation is reduced to half, relative to the conventional method. To prove the latency reduction, we implemented the proposed scheme on a field-programmable gate array and compared its decoding speed with that of a conventional decoder. The results show at least a fivefold improvement for a single iteration of turbo decoding.
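
The latency reduction comes from several techniques combined; the sketch below shows only a generic early-stop check, halting once the hard decisions derived from the a posteriori LLRs stop changing between iterations. This is one common early-stop rule and not necessarily the exact criterion used in the paper, and the LLR values in the demo are fabricated.

```python
# Generic early-stop rule: stop iterating when two consecutive iterations
# produce identical hard decisions for every bit of the block.
def hard_decisions(llrs):
    return [1 if llr < 0 else 0 for llr in llrs]

def decode_with_early_stop(llr_per_iteration, max_iterations):
    """Run at most max_iterations, stopping as soon as the hard decisions
    of two consecutive iterations agree."""
    previous = None
    for it, llrs in enumerate(llr_per_iteration[:max_iterations], start=1):
        current = hard_decisions(llrs)
        if current == previous:
            return current, it           # early stop: decisions have stabilized
        previous = current
    return previous, max_iterations

# Fabricated per-iteration a posteriori LLRs for a 4-bit block.
llr_trace = [
    [+0.4, -0.2, +0.1, -0.6],
    [+1.1, -0.9, -0.3, -1.2],
    [+2.3, -1.8, -0.9, -2.0],   # same hard decisions as iteration 2 -> stop here
    [+3.0, -2.5, -1.4, -2.7],
]
bits, used = decode_with_early_stop(llr_trace, max_iterations=8)
print("decoded bits:", bits, "iterations used:", used)
```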

9.
In mobile communications, a class of variable-complexity algorithms for convolutional decoding known as sequential decoding algorithms is of interest since they have a computational time that could vary with changing channel conditions. The Fano algorithm is one well-known version of a sequential decoding algorithm. Since the decoding time of a Fano decoder follows the Pareto distribution, which is a heavy-tailed distribution parameterized by the channel signal-to-noise ratio (SNR), buffers are required to absorb the variable decoding delays of Fano decoders. Furthermore, since the decoding time drawn by a certain Pareto distribution can become unbounded, a maximum limit is often employed by a practical decoder to limit the worst-case decoding time. In this paper, we investigate the relations between buffer occupancy, decoding time, and channel conditions in a system where the Fano decoder is not allowed to run with unbounded decoding time. A timeout limit is thus imposed so that the decoding will be terminated if the decoding time reaches the limit. We use discrete-time semi-Markov models to describe such a Fano decoding system with timeout limits. Our queuing analysis provides expressions characterizing the average buffer occupancy as a function of channel conditions and timeout limits. Both numerical and simulation results are provided to validate the analytical results.
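
A minimal Monte Carlo companion to this kind of queueing analysis: decoding times are drawn from a Pareto distribution, truncated at a timeout limit, and packets queue while the single Fano decoder is busy. All parameters (arrival period, Pareto shape and scale, timeout values) are arbitrary illustration values, not taken from the paper.

```python
# Simulate a FIFO buffer in front of one Fano decoder with Pareto-distributed,
# timeout-truncated decoding times, and report average occupancy and timeout rate.
import random

random.seed(1)

def pareto_decoding_time(shape, scale):
    """Pareto-distributed decoding time; heavier tail for smaller shape (lower SNR)."""
    return scale * random.paretovariate(shape)

def simulate_buffer(n_packets, arrival_period, shape, scale, timeout):
    """Return average number of packets in the system (sampled at arrivals)
    and the fraction of decodings that hit the timeout limit."""
    decoder_free_at = 0.0
    occupancy_samples, timeouts = [], 0
    in_system_finish_times = []          # finish times of packets not yet decoded
    for k in range(n_packets):
        now = k * arrival_period
        in_system_finish_times = [t for t in in_system_finish_times if t > now]
        occupancy_samples.append(len(in_system_finish_times))
        service = min(pareto_decoding_time(shape, scale), timeout)
        if service >= timeout:
            timeouts += 1
        start = max(now, decoder_free_at)
        decoder_free_at = start + service
        in_system_finish_times.append(decoder_free_at)
    return sum(occupancy_samples) / n_packets, timeouts / n_packets

for timeout in (5.0, 20.0, 80.0):
    avg_occ, to_rate = simulate_buffer(
        n_packets=20000, arrival_period=3.0, shape=1.6, scale=1.0, timeout=timeout)
    print(f"timeout={timeout:5.1f}  avg occupancy={avg_occ:6.2f}  timeout rate={to_rate:.3f}")
```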

10.
A parallel MAP algorithm for low latency turbo decoding
To reduce the computational decoding delay of turbo codes, we propose a parallel algorithm for maximum a posteriori (MAP) decoders. We divide a whole noisy codeword into sub-blocks and use multiple processors to perform sub-block MAP decoding in parallel. Unlike the previously proposed approach with sub-block overlapping, we utilize the forward and backward variables computed in the previous iteration to provide boundary distributions for each sub-block MAP decoder. Our scheme exhibits asymptotically optimal performance in the sense that the BER is the same as that of the regular turbo decoder.
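
The key idea is that each sub-block's recursions are seeded with boundary values saved from the previous turbo iteration instead of re-decoding overlapping guard regions. The sketch below illustrates only that boundary-passing pattern, with a toy scalar recursion standing in for the MAP forward metric; it is not a BCJR implementation, and the observation values are fabricated.

```python
# Boundary passing between sub-blocks across iterations, using a toy recursion.
def toy_forward_step(state, observation):
    """Placeholder for one step of the MAP forward recursion."""
    return 0.5 * state + observation

def process_subblocks(observations, n_subblocks, boundaries):
    """Run the toy forward recursion independently on each sub-block, seeding
    each with the boundary saved at the previous iteration, and return the
    updated boundaries for the next iteration."""
    size = len(observations) // n_subblocks
    new_boundaries = list(boundaries)
    for b in range(n_subblocks):             # each b could run on its own processor
        state = boundaries[b]                # previous-iteration boundary value
        for obs in observations[b * size:(b + 1) * size]:
            state = toy_forward_step(state, obs)
        if b + 1 < n_subblocks:
            new_boundaries[b + 1] = state    # becomes the next sub-block's seed
    return new_boundaries

observations = [0.1 * k for k in range(16)]  # fabricated received values
boundaries = [0.0] * 4                       # boundaries unknown at iteration 1
for iteration in range(1, 4):
    boundaries = process_subblocks(observations, n_subblocks=4, boundaries=boundaries)
    print(f"iteration {iteration}: sub-block boundary seeds =",
          [round(x, 3) for x in boundaries])
```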

11.
The sliding window (SW) approach has been proposed as an effective means of reducing the memory requirements as well as the decoding latency of the maximum a posteriori (MAP) based soft-input soft-output (SISO) decoder in a Turbo decoder. In this paper, we present sub-banked memory implementations (both single port and dual port) of the SW SISO decoder that achieve high throughput, low decoding latency, and reduced memory energy consumption. Our contributions include the derivation of the optimal memory sub-banking structure for different SW configurations, a study of the relationship between memory size and energy consumption for different SW configurations, and a study of the effect of the number of sub-banks on the throughput and decoding latency for a given SW configuration.

12.
We propose a joint source-channel decoding approach for multidimensional correlated source signals. A Markov random field (MRF) source model is used which exemplarily considers the residual spatial correlations in an image signal after source encoding. Furthermore, the MRF parameters are selected via an analysis based on extrinsic information transfer charts. Due to the link between MRFs and the Gibbs distribution, the resulting soft-input soft-output (SISO) source decoder can be implemented with very low complexity. We prove that the inclusion of a high-rate block code after the quantization stage allows the MRF-based decoder to yield the maximum average extrinsic information. When channel codes are used for additional error protection the MRF-based SISO source decoder can be used as the outer constituent decoder in an iterative source-channel decoding scheme. Considering an example of a simple image transmission system we show that iterative decoding can be successfully employed for recovering the image data, especially when the channel is heavily corrupted.
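
Because of the MRF/Gibbs equivalence, the source prior for a pixel depends only on its local neighborhood, which is what keeps the SISO source decoder cheap. The sketch below computes such a local conditional for a binary image with a simple Ising-style pairwise potential; the potential, its weight beta, and the image patch are placeholders (the paper selects the MRF parameters via EXIT-chart analysis).

```python
# Local Gibbs conditional for a binary pixel given its 4-neighborhood.
import math

def local_prior(image, row, col, beta):
    """P(pixel = v | 4-neighborhood) under a Gibbs distribution whose pairwise
    potentials penalize disagreement with neighbors."""
    rows, cols = len(image), len(image[0])
    neighbors = [image[r][c]
                 for r, c in ((row - 1, col), (row + 1, col), (row, col - 1), (row, col + 1))
                 if 0 <= r < rows and 0 <= c < cols]
    energies = {v: sum(0.0 if v == n else 1.0 for n in neighbors) for v in (0, 1)}
    weights = {v: math.exp(-beta * e) for v, e in energies.items()}
    z = sum(weights.values())
    return {v: w / z for v, w in weights.items()}

# Fabricated 3x3 binary image patch.
patch = [
    [0, 1, 1],
    [0, 1, 1],
    [0, 1, 1],
]
print(local_prior(patch, row=1, col=1, beta=0.8))   # prior for the center pixel
```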

13.
Soft-decision-feedback MAP decoders are developed for joint source/channel decoding (JSCD) which uses the residual redundancy in two-dimensional sources. The source redundancy is described by a second order Markov model which is made available to the receiver for row-by-row decoding, wherein the output for one row is used to aid the decoding of the next row. Performance can be improved by generalizing so as to increase the vertical depth of the decoder. This is called sheet decoding, and entails generalizing trellis decoding of one-dimensional data to trellis decoding of two-dimensional (2-D) data. The proposed soft-decision-feedback sheet decoder is based on the Bahl algorithm, and it is compared to a hard-decision-feedback sheet decoder which is based on the Viterbi algorithm. The method is applied to 3-bit DPCM picture transmission over a binary symmetric channel, and it is found that the soft-decision-feedback decoder with vertical depth V performs approximately as well as the hard-decision-feedback decoder with vertical depth V+1. Because the computational requirement of the decoders depends exponentially on the vertical depth, the soft-decision-feedback decoder offers a significant reduction in complexity. For standard monochrome Lena, at a channel bit error rate of 0.05, the V=1 and V=2 soft-decision-feedback decoder JSCD gains in RSNR are 5.0 and 6.3 dB, respectively.

14.
We consider the decoding problem for low-density parity-check codes, and apply nonlinear programming methods. This extends previous work using linear programming (LP) to decode linear block codes. First, a multistage LP decoder based on the branch-and-bound method is proposed. This decoder makes use of the maximum-likelihood-certificate property of the LP decoder to refine the results when an error is reported. Second, we transform the original LP decoding formulation into a box-constrained quadratic programming form. Efficient linear-time parallel and serial decoding algorithms are proposed and their convergence properties are investigated. Extensive simulation studies are performed to assess the performance of the proposed decoders. It is seen that the proposed multistage LP decoder outperforms the conventional sum-product (SP) decoder considerably for low-density parity-check (LDPC) codes with short to medium block length. The proposed box-constrained quadratic programming decoder has less complexity than the SP decoder and yields much better performance for LDPC codes with regular structure.
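
The sketch below shows only the generic optimization machinery behind such a decoder: projected gradient descent on a box-constrained quadratic objective, 0.5·xᵀQx + cᵀx with 0 ≤ x ≤ 1, which costs linear time per iteration when Q is sparse. The Q and c here are small made-up values, not the ones the paper derives from a parity-check matrix and the channel LLRs.

```python
# Projected gradient descent for a box-constrained quadratic program.
def projected_gradient_box_qp(Q, c, step, iterations):
    """Minimize 0.5*x^T Q x + c^T x over the unit box [0, 1]^n."""
    n = len(c)
    x = [0.5] * n                                    # start in the middle of the box
    for _ in range(iterations):
        grad = [sum(Q[i][j] * x[j] for j in range(n)) + c[i] for i in range(n)]
        x = [min(1.0, max(0.0, x[i] - step * grad[i])) for i in range(n)]  # project
    return x

Q = [[2.0, 0.5, 0.0],
     [0.5, 2.0, 0.5],
     [0.0, 0.5, 2.0]]      # illustrative positive-definite matrix
c = [-2.5, 1.0, -0.2]      # would be built from channel LLRs in a real decoder
x = projected_gradient_box_qp(Q, c, step=0.2, iterations=200)
print("relaxed solution:", [round(v, 3) for v in x])
print("hard decisions:  ", [1 if v > 0.5 else 0 for v in x])
```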

15.
Low-density parity-check (LDPC) codes and convolutional Turbo codes are two of the most powerful error correcting codes that are widely used in modern communication systems. In a multi-mode baseband receiver, both LDPC and Turbo decoders may be required. However, the different decoding approaches for LDPC and Turbo codes usually lead to different hardware architectures. In this paper we propose a unified message passing algorithm for LDPC and Turbo codes and introduce a flexible soft-input soft-output (SISO) module to handle LDPC/Turbo decoding. We employ the trellis-based maximum a posteriori (MAP) algorithm as a bridge between LDPC and Turbo codes decoding. We view the LDPC code as a concatenation of n super-codes where each super-code has a simpler trellis structure so that the MAP algorithm can be easily applied to it. We propose a flexible functional unit (FFU) for MAP processing of LDPC and Turbo codes with a low hardware overhead (about 15% area and timing overhead). Based on the FFU, we propose an area-efficient flexible SISO decoder architecture to support LDPC/Turbo codes decoding. Multiple such SISO modules can be embedded into a parallel decoder for higher decoding throughput. As a case study, a flexible LDPC/Turbo decoder has been synthesized on a TSMC 90 nm CMOS technology with a core area of 3.2 mm². The decoder can support IEEE 802.16e LDPC codes, IEEE 802.11n LDPC codes, and 3GPP LTE Turbo codes. Running at 500 MHz clock frequency, the decoder can sustain up to 600 Mbps LDPC decoding or 450 Mbps Turbo decoding.

16.
This paper presents several results involving Fano's sequential decoding algorithm for convolutional codes. An upper bound to the a-th moment of decoder computation is obtained for arbitrary decoder bias B and a ≤ 1. An upper bound on error probability with sequential decoding is derived for both systematic and nonsystematic convolutional codes. This error bound involves the exact value of the decoder bias B. It is shown that there is a trade-off between sequential decoder computation and error probability as the bias B is varied. It is also shown that for many values of B, sequential decoding of systematic convolutional codes gives an exponentially larger error probability than sequential decoding of nonsystematic convolutional codes when both codes are designed with exponentially equal optimum decoder error probabilities.

17.
The Fano decoding algorithm is usually implemented in software, where, constrained by the computer architecture, the decoding speed is low. To substantially increase the decoding speed, a fully hardware implementation of the soft-decision Fano decoding algorithm is studied: a soft-decision Fano decoder is designed in AHDL (Altera Hardware Description Language) and realized on an FPGA (field-programmable gate array). The overall architecture is introduced, with emphasis on the design of the state machine, the key building block of the soft-decision Fano decoder. Measured results show that, at the same clock frequency, the fully hardware implementation of the soft-decision Fano decoding algorithm is at least 20 times faster than the software solution.

18.
In this paper, we propose a flexible turbo decoding algorithm for a high order modulation scheme that uses a standard half-rate turbo decoder designed for binary quadrature phase-shift keying (B/QPSK) modulation. A transformation applied to the incoming I-channel and Q-channel symbols allows the use of an off-the-shelf B/QPSK turbo decoder without any modifications. Iterative codes such as turbo codes process the received symbols recursively to improve performance. As the number of iterations increases, the execution time and power consumption also increase. The proposed algorithm reduces the latency and power consumption by combination of the radix-4, dual-path processing, parallel decoding, and early-stop algorithms. We implement the proposed scheme on a field-programmable gate array and compare its decoding speed with that of a conventional decoder. The results show that the proposed flexible decoding algorithm is 6.4 times faster than the conventional scheme.

19.
Iterative turbo decoder analysis based on density evolution
We track the density of extrinsic information in iterative turbo decoders by actual density evolution, and also approximate it by symmetric Gaussian density functions. The approximate model is verified by experimental measurements. We view the evolution of these density functions through an iterative decoder as a nonlinear dynamical system with feedback. Iterative decoding of turbo codes and of serially concatenated codes is analyzed by examining whether a signal-to-noise ratio (SNR) for the extrinsic information keeps growing with iterations. We define a "noise figure" for the iterative decoder, such that the turbo decoder will converge to the correct codeword if the noise figure is bounded by a number below zero dB. By decomposing the code's noise figure into individual curves of output SNR versus input SNR corresponding to the individual constituent codes, we gain many new insights into the performance of the iterative decoder for different constituents. Many mysteries of turbo codes are explained based on this analysis. For example, we show why certain codes converge better with iterative decoding than more powerful codes which are only suitable for maximum likelihood decoding. The roles of systematic bits and of recursive convolutional codes as constituents of turbo codes are crystallized. The analysis is generalized to serial concatenations of mixtures of complementary outer and inner constituent codes. Design examples are given to optimize mixture codes to achieve low iterative decoding thresholds on the signal-to-noise ratio of the channel.
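
A minimal sketch of the SNR-tracking view: each constituent decoder is summarized by a curve mapping input extrinsic SNR to output extrinsic SNR at a given channel SNR, and the iterative decoder is declared convergent if alternating through the two curves keeps the extrinsic SNR growing past a target. Both transfer curves, the target, and the channel values below are invented placeholders, not measured curves.

```python
# Alternate between two hypothetical constituent transfer curves and check
# whether the extrinsic SNR keeps growing (a crude stand-in for the
# "noise figure below 0 dB" convergence criterion).
def constituent1(extrinsic_snr, channel_snr):
    return channel_snr + 0.9 * extrinsic_snr          # invented transfer curve

def constituent2(extrinsic_snr, channel_snr):
    return 0.8 * channel_snr + 0.95 * extrinsic_snr   # invented transfer curve

def converges(channel_snr, target_snr=5.0, max_iterations=200):
    """True if the extrinsic SNR grows past target_snr within max_iterations."""
    snr = 0.0
    for _ in range(max_iterations):
        snr = constituent2(constituent1(snr, channel_snr), channel_snr)
        if snr >= target_snr:
            return True
    return False

for channel_snr in (0.05, 0.2, 0.5, 1.0):
    print(f"channel SNR {channel_snr:4.2f}: extrinsic SNR keeps growing ->",
          converges(channel_snr))
```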

20.
Efficient hardware implementation of low-density parity-check (LDPC) codes is of great interest since LDPC codes are being considered for a wide range of applications. Recently, overlapped message passing (OMP) decoding has been proposed to improve the throughput and hardware utilization efficiency (HUE) of decoder architectures for LDPC codes. In this paper, we first study the scheduling for the OMP decoding of LDPC codes, and show that maximizing the throughput gain amounts to minimizing the intra- and inter-iteration waiting times. We then focus on the OMP decoding of quasi-cyclic (QC) LDPC codes. We propose a partly parallel OMP decoder architecture and implement it using FPGA. For any QC LDPC code, our OMP decoder achieves the maximum throughput gain and HUE due to overlapping, hence has higher throughput and HUE than previously proposed OMP decoders while maintaining the same hardware requirements. We also show that the maximum throughput gain and HUE achieved by our OMP decoder are ultimately determined by the given code. Thus, we propose a coset-based construction method, which results in QC LDPC codes that allow our optimal OMP decoder to achieve higher throughput and HUE.
