Similar documents
A total of 20 similar documents were found.
1.
This paper presents several results involving Fano's sequential decoding algorithm for convolutional codes. An upper bound to the a-th moment of decoder computation is obtained for an arbitrary decoder bias B and a ≤ 1. An upper bound on error probability with sequential decoding is derived for both systematic and nonsystematic convolutional codes. This error bound involves the exact value of the decoder bias B. It is shown that there is a trade-off between sequential decoder computation and error probability as the bias B is varied. It is also shown that for many values of B, sequential decoding of systematic convolutional codes gives an exponentially larger error probability than sequential decoding of nonsystematic convolutional codes when both codes are designed with exponentially equal optimum decoder error probabilities.
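The role of the bias B can be pictured with the per-branch Fano metric used by sequential decoders. The sketch below is a minimal illustration, not the paper's analysis: it computes the Fano branch metric on a binary symmetric channel, where each bit contributes log2 P(r|c) - log2 P(r) - B. The crossover probability and the choice of bias equal to the code rate are example assumptions.

```python
import math

def fano_branch_metric(received_bits, code_bits, p, bias):
    """Fano metric for one trellis branch on a BSC with crossover p.

    Each bit contributes log2 P(r|c) - log2 P(r) - bias, with P(r) = 1/2
    for equiprobable channel outputs.  A larger bias tends to reduce
    decoder computation at the cost of a higher error probability.
    """
    metric = 0.0
    for r, c in zip(received_bits, code_bits):
        p_r_given_c = 1 - p if r == c else p
        metric += math.log2(p_r_given_c) - math.log2(0.5) - bias
    return metric

# Example: rate-1/2 branch (2 code bits), crossover 0.05,
# bias set equal to the code rate R = 0.5 (a common choice).
print(fano_branch_metric([0, 1], [0, 1], p=0.05, bias=0.5))  # agreeing branch
print(fano_branch_metric([0, 1], [1, 1], p=0.05, bias=0.5))  # one bit differs
```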

2.
Almost all known probabilistic decoding algorithms for convolutional codes perform decoding without prior knowledge of the error locations. Here, we introduce a novel maximum-likelihood decoding algorithm for a new class of convolutional codes, named state transparent convolutional (STC) codes, whose properties make error detection and error locating possible prior to error correction. Hence their decoding algorithm, termed here the STC decoder, allows an error-correcting algorithm to be applied only to the erroneous portions of the received sequence, referred to here as the error spans (ESPs). We further prove that the proposed decoder, which locates the ESPs and applies the Viterbi algorithm (VA) only to these portions, always yields a decoded path in the trellis identical to the one generated by the Viterbi decoder (VD). Because the STC decoder applies the VA only to the ESPs, the percentage of single-stage (per-codeword) trellis decoding it performs is considerably less than that of the VD, which processes the entire received sequence; this reduction is overwhelming on fading channels, where the erroneous codewords are mostly clustered. Furthermore, by applying the VA only to the ESPs, the resulting algorithm can be viewed as a new formulation of the VD for STC codes that, analogous to block decoding algorithms, provides pre-decoding error detection and error locating capabilities while performing less single-stage trellis decoding.

3.
This paper presents a methodology designed to improve the effectiveness of a non-iterative decision feedback (DF) receiver/decoder for the IS-95 Code Division Multiple Access (CDMA) uplink in additive white Gaussian noise (AWGN) and Rayleigh fading. The effectiveness of the DF receiver/decoder is linked to the interleaver specification and the decoding delay of the convolutional decoder. Using sub-optimal convolutional decoding, the average decoding delay is reduced, resulting in more effective decision feedback decoding (DFD). Simulation results for average decoding delay, bit error rate (BER), and frame error rate (FER) are presented for coherent and noncoherent detection of unfaded single-path and Rayleigh-fading multipath signals. Instead of the usual performance degradation, these results show that the DF receiver/decoder benefits from some forms of sub-optimal Viterbi decoding. The additional performance gain can further improve the quality of service and/or capacity of a cellular IS-95 system.

4.
Maximum likelihood (ML) decoding of short constraint length convolutional codes became feasible with the invention of the Viterbi decoder. Several authors have since upper bounded the performance of ML decoders. A method to calculate the event error probability of an ML decoder for convolutional codes is described.
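As a concrete illustration of how an event-error-probability estimate of this kind is typically evaluated (not the specific method of this paper), the sketch below computes the standard union bound P_e ≤ Σ_d a_d Q(√(2dR·Eb/N0)) for soft-decision ML (Viterbi) decoding with BPSK on AWGN, using the well-known rate-1/2, constraint-length-3 code with octal generators (7,5), whose distance spectrum is a_d = 2^(d-5) for d ≥ 5. The truncation depth and Eb/N0 values are example assumptions.

```python
import math

def q_function(x):
    """Gaussian tail probability Q(x)."""
    return 0.5 * math.erfc(x / math.sqrt(2.0))

def event_error_union_bound(ebno_db, rate, dfree, spectrum, max_terms=20):
    """Union bound on the first-event error probability of ML (Viterbi)
    decoding with BPSK on an AWGN channel:
        P_e <= sum_d a_d * Q(sqrt(2 * d * R * Eb/N0)),
    where spectrum(d) gives the number of error events of weight d."""
    ebno = 10.0 ** (ebno_db / 10.0)
    total = 0.0
    for d in range(dfree, dfree + max_terms):
        total += spectrum(d) * q_function(math.sqrt(2.0 * d * rate * ebno))
    return total

# Rate-1/2 (7,5) octal code: a_d = 2^(d-5) for d >= dfree = 5.
spectrum_75 = lambda d: 2 ** (d - 5)
for ebno_db in (3.0, 5.0, 7.0):
    print(ebno_db, event_error_union_bound(ebno_db, rate=0.5, dfree=5,
                                           spectrum=spectrum_75))
```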

5.
Collaborative decoding is an approach that can achieve diversity and combining gain by exchanging decoding information among a cluster of physically separated receivers. On AWGN channels, the least-reliable-bits (LRB) exchange scheme can achieve performance close to equal-gain combining (EGC) of all received symbols from all receivers, while reducing the amount of information that must be exchanged. In this paper, we analyze the error performance of collaborative decoding with the LRB exchange scheme when nonrecursive convolutional codes are used. The analysis is based on the observation that the extrinsic information generated in the collaborative decoding of these convolutional codes can be approximated by Gaussian random variables. A density-evolution model based on a single maximum a posteriori decoder is used to obtain the statistical characteristics of the extrinsic information. With this statistical knowledge of the extrinsic information, we develop an approximate upper bound for the error performance of the collaborative decoding process. Numerical results show that our analysis gives very good predictions of the bit error rate for collaborative decoding with LRB exchange. At high signal-to-noise ratios, collaborative decoding with properly chosen parameters can achieve the same error performance as EGC of all received symbols from all receiving nodes.
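The bit-selection step at the heart of an LRB exchange can be sketched in a few lines. This is an illustration only, not the paper's scheme: the LLR values and the number of exchanged positions below are made-up example inputs. Each receiver picks the positions whose a posteriori log-likelihood ratios have the smallest magnitude and exchanges information only about those positions.

```python
import numpy as np

def least_reliable_positions(llrs, num_exchange):
    """Return the indices of the num_exchange least reliable bits, i.e. the
    positions whose LLR magnitudes are smallest; only information about
    these positions would be exchanged among collaborating receivers."""
    llrs = np.asarray(llrs, dtype=float)
    return np.argsort(np.abs(llrs))[:num_exchange]

llrs = np.array([4.2, -0.3, 7.9, 0.8, -5.1, 0.1, -2.4, 3.3])
print(least_reliable_positions(llrs, num_exchange=3))  # positions 5, 1, 3
```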

6.
A blank-correction error-detection decoder can be used to give any desired degree of decoding, from simple error detection to essentially maximum-likelihood error correction. Failsafe performance is obtained if a fixed number of digits are erased before decoding, with maximum failsafe protection obtained with a zero-threshold detector preceding the decoder, which results in error detection only. In agreement with theory, there is a nonzero threshold that maximizes the rate of transmission for any SNR. Maximum-likelihood decoding is obtained by iterative decoding with successively increasing thresholds as long as errors are detected. A comparison of the performance of the decoder in a failsafe mode and a maximum-likelihood mode is given.
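A minimal sketch of the threshold-controlled erasure ("blanking") idea described above, not the paper's decoder: received BPSK samples whose magnitude falls inside a null zone set by the threshold are declared erasures, so a zero threshold erases nothing and leaves the decoder with error detection only, while raising the threshold trades channel errors for erasures. The noise level and thresholds are example assumptions.

```python
import numpy as np

def null_zone_erasures(received, threshold):
    """Hard decisions with threshold-controlled erasures for BPSK samples.

    Samples with |y| < threshold are declared erasures (returned as 0);
    otherwise the sign gives the hard decision (+1 / -1).
    """
    decisions = np.sign(received).astype(int)
    decisions[np.abs(received) < threshold] = 0  # 0 marks an erased digit
    return decisions

rng = np.random.default_rng(0)
tx = rng.choice([-1.0, 1.0], size=20)
rx = tx + rng.normal(scale=0.8, size=tx.shape)
for t in (0.0, 0.3, 0.6):
    d = null_zone_erasures(rx, t)
    print(f"threshold={t}: erasures={np.count_nonzero(d == 0)}, "
          f"errors={np.count_nonzero((d != 0) & (d != tx))}")
```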

7.
This paper is the last part of a three-part series on threshold decoding of new convolutional codes. The first part of this paper completes the list with new rate-1/2 codes. For these codes, a characteristic equation relating the number of correctable errors to the code constraint length is derived by least-squares approximation. The second part of the paper is concerned with the usefulness of the codes derived, including those in Parts I and II. Based upon the uncorrectable-error statistics of the decoder, the concatenation of such codes is suggested. The property of this class of codes of not producing additional and bursty errors at the decoder output when the capability of the decoder is exceeded is not shared by most powerful decoders, such as Reed-Solomon, BCH, and Viterbi decoders.

8.
Straightforward implementation of a maximum-likelihood decoder implies a complexity that grows algebraically with the inverse of the error probability. Forney has suggested an approach, concatenation, for which the error probability decreases exponentially with increasing complexity. This paper presents the results of an evaluation of a particular concatenation system, structurally similar to the hybrid system of Falconer, employing a Reed-Solomon outer code and an inner convolutional code. The inner decoder is a Viterbi decoder of constraint length less than the corresponding encoding constraint length (nonmaximum likelihood). The outer decoder assumes one of three possible forms, all employing the likelihood information developed by the inner decoder to assist in outer decoding. Error corrections and erasure fill-ins achieved by the outer decoder are fed back to the inner decoder. Performance is evaluated through computer simulation. The three outer decoders are found to provide approximately the same performance, all yielding low error probabilities at rates somewhat above R_comp of sequential decoding and at signal-energy-to-noise-density ratios per information bit around 1.7 dB.

9.
The reverse-link encoding steps of the IS-95 cellular code-division multiple-access (CDMA) standard consist of convolutional encoding, block interleaving, and orthogonal Walsh function encoding. Deinterleaving individual symbol metrics obtained from the Walsh-function matched filters, followed by conventional Viterbi decoding, produces suboptimal results, as unwanted intersymbol Walsh-function correlation is introduced. We propose a combined deinterleaver/decoder with improved performance over existing decoders, with little added overhead and no extra decoding delay. Applied to the IS-95 reverse link, the proposed decoder has about a 1.0-dB gain over soft-decision decoding with interleaved symbol metrics at a bit error rate (BER) of 10⁻³.
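For readers unfamiliar with the Walsh-function stage mentioned above, the sketch below is a generic illustration of coherent Walsh matched filtering, not the proposed combined deinterleaver/decoder: it correlates a noisy 64-ary Walsh symbol against all 64 Walsh functions built from a Sylvester Hadamard matrix and picks the largest metric. The noise level and transmitted index are example assumptions.

```python
import numpy as np

def hadamard(n):
    """Sylvester construction of an n x n Hadamard matrix (n a power of 2).
    Its rows are the length-n Walsh functions in +1/-1 form."""
    H = np.array([[1]])
    while H.shape[0] < n:
        H = np.block([[H, H], [H, -H]])
    return H

def walsh_matched_filter(received, H):
    """Correlate the received chips with every Walsh function and return
    the per-symbol metrics and the index of the best-matching function."""
    metrics = H @ received
    return metrics, int(np.argmax(metrics))

H64 = hadamard(64)
rng = np.random.default_rng(1)
sent_index = 37
rx = H64[sent_index] + rng.normal(scale=1.0, size=64)  # noisy Walsh symbol
metrics, decided = walsh_matched_filter(rx, H64)
print(decided == sent_index)
```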

10.
Potentially large storage requirements and long initial decoding delays are two practical issues related to the decoding of low-density parity-check (LDPC) convolutional codes using a continuous pipeline decoder architecture. In this paper, we propose several reduced complexity decoding strategies to lessen the storage requirements and the initial decoding delay without significant loss in performance. We also provide bit error rate comparisons of LDPC block and LDPC convolutional codes under equal processor (hardware) complexity and equal decoding delay assumptions. A partial syndrome encoder realization for LDPC convolutional codes is also proposed and analyzed. We construct terminated LDPC convolutional codes that are suitable for block transmission over a wide range of frame lengths. Simulation results show that, for terminated LDPC convolutional codes of sufficiently large memory, performance can be improved by increasing the density of the syndrome former matrix.

11.
Design of a block-code decoder based on a feedforward multilayer neural network
By linking maximum-correlation decoding to the inner-product behavior and basins of attraction of a neural network's neurons, with the connection weights determining the decoded codewords and the threshold settings determining each neuron's error-correction range, a neural decoder applicable to both hard-decision and soft-decision decoding is obtained. It is proved theoretically that this decoder achieves zero-error-probability hard-decision decoding within the error-correcting capability of a DMC channel, achieves soft-decision decoding comparable to minimum-Euclidean-distance decoding, and can detect errors within its error-detection range.
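A minimal sketch of the maximum-correlation principle behind such a neural decoder follows; it is an illustration only, and the (7,4) Hamming codebook, weights, and example error are assumptions rather than the authors' design. Each "neuron" stores one bipolar codeword as its weight vector, the inner product with the received vector scores that codeword, and the largest score wins, which is equivalent to minimum-Hamming-distance decoding for hard inputs and minimum-Euclidean-distance decoding for soft inputs.

```python
import numpy as np
from itertools import product

# Systematic (7,4) Hamming code, used purely as an example codebook.
P = np.array([[1, 1, 0], [1, 0, 1], [0, 1, 1], [1, 1, 1]])
G = np.hstack([np.eye(4, dtype=int), P])
codebook = np.array([(np.array(m) @ G) % 2 for m in product([0, 1], repeat=4)])

# "Neurons": one weight vector per codeword, in bipolar (0 -> +1, 1 -> -1) form.
weights = 1 - 2 * codebook

def neural_max_correlation_decode(received_bipolar):
    """Score every codeword by the inner product of its weight vector with
    the received vector and return the codeword with the largest score."""
    scores = weights @ received_bipolar
    return codebook[int(np.argmax(scores))]

# Hard-decision example: transmit a codeword and flip one bit.
tx = codebook[9]
rx = 1 - 2 * tx.astype(float)
rx[2] = -rx[2]                      # one channel error
print(np.array_equal(neural_max_correlation_decode(rx), tx))  # True
```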

12.
Concatenated decoding with a reduced-search BCJR algorithm
We apply two reduced-computation variants of the BCJR algorithm to the decoding of serial and parallel concatenated convolutional codes. One computes its recursions at only M states per trellis stage; the other computes only at states with values above a threshold. The threshold scheme is much more efficient, and it greatly reduces the computation of the BCJR algorithm. By computing only when the channel demands it, the threshold scheme reduces the turbo decoder computation to one to four nodes per trellis stage after the second iteration.
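The threshold idea can be sketched with a single pruning step. This is an illustration only; the state metrics and threshold factor below are made-up example values, not the paper's algorithm. At each trellis stage, only states whose forward metric is within a threshold factor of the best metric in that stage stay active for the next recursion.

```python
import numpy as np

def threshold_prune(alpha, threshold_factor):
    """Keep only trellis states whose forward metric alpha is at least
    threshold_factor times the largest metric in the stage; the next
    recursion is then computed only from the surviving states."""
    alpha = np.asarray(alpha, dtype=float)
    keep = alpha >= threshold_factor * alpha.max()
    return np.flatnonzero(keep)

# Example: an 8-state stage where the channel is "good" -- one state
# dominates, so only a single node needs to be computed next.
alpha_stage = np.array([0.91, 0.002, 0.04, 0.001, 0.0005, 0.03, 0.01, 0.006])
print(threshold_prune(alpha_stage, threshold_factor=0.05))  # only state 0 survives
```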

13.
For the tail-biting convolutional code of Long Term Evolution (LTE), the Viterbi decoding algorithm, an optimum decoding algorithm for convolutional codes, is introduced. Exploiting the circular property of tail-biting convolutional codes, a fixed-delay decoding method is adopted to reduce decoding complexity. By using a fully parallel structure and a simple traceback storage method, a high-speed, low-complexity fixed-delay decoder is designed. The decoder was implemented and verified on an FPGA, and the verification results show that its performance meets the requirements of the LTE system.

14.
The structures of convolutional self-orthogonal codes and convolutional self-doubly-orthogonal codes for both belief-propagation and threshold iterative decoding algorithms are analyzed on the basis of difference sets and the computation tree. It is shown that the double-orthogonality property of convolutional self-doubly-orthogonal codes improves the code structure by maximizing the number of independent observations over two successive decoding iterations while minimizing the number of cycles of lengths 6 and 8 on the code graphs. Thus, double orthogonality may improve iterative decoding in both convergence speed and error performance. In addition, double orthogonality makes the computation tree rigorously balanced. This allows the determination of the best weighting technique, so that the error performance of the iterative threshold decoding algorithm approaches that of the iterative belief-propagation decoding algorithm, but at a substantial reduction of the implementation complexity.
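Because these constructions rest on difference-set properties, the short helper below (a generic sketch, not taken from the paper) checks whether a set of tap positions has all pairwise positive differences distinct, which is the condition under which the parity checks of the corresponding convolutional self-orthogonal code are orthogonal. The example tap sets are assumptions.

```python
from itertools import combinations

def is_full_difference_set(taps):
    """True if all pairwise positive differences between tap positions are
    distinct, the property behind self-orthogonal code constructions."""
    diffs = [b - a for a, b in combinations(sorted(taps), 2)]
    return len(diffs) == len(set(diffs))

print(is_full_difference_set([0, 2, 5, 6]))   # True: differences 2, 5, 6, 3, 4, 1
print(is_full_difference_set([0, 1, 2, 4]))   # False: the difference 1 repeats
```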

15.
Path pruning, a new coding concept to achieve free distance enlargement for convolutional codes, is proposed. Through path pruning, every convolutional code can be used for unequal error protection (UEP), no matter whether it is originally a UEP code. To avoid undesired path discontinuity and reduce possible path distance loss, a cascaded implementation together with a path-compatible criterion is proposed, under which path-compatible pruned convolutional (PCPC) codes are constructed. Necessary and sufficient conditions are also derived for a subclass of PCPC codes whose decoding can be done by a single decoder for the parent code. Finally, some PCPC codes with good UEP capabilities found by computer search are given.

16.
We consider the structure and performance of a multistage decoding scheme for an internally bandwidth-efficient, convolutionally coded Poisson fiber-optic code-division multiple-access (CDMA) communication system. The decoder is implemented electronically in several stages, where in each stage the interfering users' coded-bit decisions obtained in the previous stage are applied to computing the likelihood of the coded symbols of the desired user. The first stage is a soft-input Viterbi decoder for the internally coded scheme, in which the soft-input coded-symbol likelihood values are computed by treating the multiuser interference as a noise signal. The likelihood of the coded symbols computed in each stage is then entered into the convolutional decoder for the next bit decisions. The convolutional codes used for demonstrating the performance of the multistage decoder are super-orthogonal codes (SOCs). We derive the bit error rates (BERs) of the proposed decoder for internally coded Poisson fiber-optic CDMA systems using optical orthogonal codes (OOCs) along with both on-off keying (OOK) and binary pulse-position modulation (BPPM) schemes. Our numerical results indicate that the proposed decoding scheme substantially outperforms the single-stage soft-input Viterbi decoder. We also derive an upper bound on the probability of error of a decoder for the known-interference case, which is the ultimate performance of a multiuser decoder, and compare the result with that of the soft-input Viterbi decoder.

17.
In hardware implementations of many signal processing functions, timing errors on different circuit signals may have largely different importance with respect to the overall signal processing performance. This motivates us to apply the concept of unequal error tolerance to enable the use of voltage overscaling at minimal signal processing performance degradation. Realization of unequal error tolerance involves two main issues: how to quantify the importance of each circuit signal, and how to incorporate the importance quantification into signal processing circuit design. We developed techniques to tackle these two issues and applied them to two types of trellis decoders: the Viterbi decoder for convolutional code decoding and the Max-Log maximum a posteriori (MAP) decoder for turbo code decoding. Simulation results demonstrated promising energy-saving potential of the proposed design solution for both trellis-decoding computation and memory storage, with small decoding performance degradation.

18.
Electronics Letters, 1991, 27(12): 1111-1112
A threshold decoder for the well-known (2,1,6) convolutional code is proposed. Two simple approaches to reducing error propagation are presented. This decoder can be used when the communication channel does not require a more efficient and expensive decoder such as the Viterbi decoder.
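A minimal sketch of feedback threshold (majority-logic) decoding for a rate-1/2 systematic self-orthogonal code follows. The tap set {0, 2, 5, 6} is a commonly cited memory-6 self-orthogonal generator and is an assumption here; it may not be the exact (2,1,6) code of this letter, and the message length and error positions are example choices. Each information-bit error is checked by four orthogonal syndromes, a majority vote triggers a correction, and the feedback step removes the corrected error from later syndromes (the source of the error propagation discussed above).

```python
import numpy as np

TAPS = (0, 2, 5, 6)  # assumed tap set of a memory-6 self-orthogonal code

def encode(u):
    """Systematic rate-1/2 encoder: parity p[n] = XOR of u[n-t] for t in TAPS."""
    u = np.asarray(u, dtype=int)
    p = np.zeros_like(u)
    for t in TAPS:
        p ^= np.concatenate([np.zeros(t, dtype=int), u])[:len(u)]
    return u, p

def threshold_decode(ru, rp):
    """Feedback majority-logic (threshold) decoding of received info bits ru
    and parity bits rp.  Each info error is checked by the four syndromes
    s[n+t], t in TAPS; a majority (>= 3 of 4) triggers a bit flip, and the
    flipped bit's contribution is cancelled from later syndromes."""
    ru = np.array(ru, dtype=int)
    n = len(ru)
    _, reenc = encode(ru)
    s = np.concatenate([reenc ^ rp, np.zeros(max(TAPS), dtype=int)])
    u_hat = ru.copy()
    for i in range(n):
        if sum(s[i + t] for t in TAPS) >= 3:
            u_hat[i] ^= 1
            for t in TAPS:          # feedback: cancel this error's syndromes
                s[i + t] ^= 1
    return u_hat

rng = np.random.default_rng(2)
u = rng.integers(0, 2, 40)
tu, tp = encode(u)
ru, rp = tu.copy(), tp.copy()
ru[[10, 11]] ^= 1                   # two info-bit errors (within capability)
print(np.array_equal(threshold_decode(ru, rp), u))  # True
```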

19.
As an important forward-error-correction channel coding scheme, convolutional codes are widely used in modern wireless communication systems. For small constraint lengths, Viterbi decoding extracts the excellent performance of convolutional codes to the greatest extent. The bit error rate of a Viterbi decoder for the optimum (2,1,5) nonsystematic convolutional code was simulated in Matlab. Shortcomings of the traditional Viterbi decoder design were improved and optimized, a logic block diagram of the hardware implementation is given, and the Viterbi decoding module was designed and implemented on an FPGA using EDA design tools. Finally, the resource utilization of the synthesized decoder is analyzed, and the decoding reliability is verified through timing simulation.
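For reference, the sketch below is a minimal hard-decision Viterbi decoder for a rate-1/2 feedforward convolutional code. It is an illustration only: the octal generator pair (23, 35) is a commonly cited choice for constraint length 5 and is an assumption here rather than the code used in the paper, and the message length, zero tail, and error positions are example choices.

```python
import numpy as np

def poly_bits(octal_str, K):
    """Octal generator string -> K tap bits (MSB acts on the current input)."""
    val = int(octal_str, 8)
    return [(val >> (K - 1 - i)) & 1 for i in range(K)]

class Rate12Viterbi:
    """Minimal hard-decision Viterbi codec for a rate-1/2 feedforward
    convolutional code with constraint length K (memory K-1)."""

    def __init__(self, g1_octal, g2_octal, K):
        self.K = K
        self.g = [poly_bits(g1_octal, K), poly_bits(g2_octal, K)]
        self.n_states = 1 << (K - 1)

    def _outputs(self, state, bit):
        # Shift-register contents: current input followed by the state bits.
        reg = [bit] + [(state >> (self.K - 2 - i)) & 1 for i in range(self.K - 1)]
        return [sum(r & t for r, t in zip(reg, g)) & 1 for g in self.g]

    def _next_state(self, state, bit):
        return (bit << (self.K - 2)) | (state >> 1)

    def encode(self, bits):
        state, out = 0, []
        for b in bits:
            out.extend(self._outputs(state, b))
            state = self._next_state(state, b)
        return out

    def decode(self, received):
        n = len(received) // 2
        INF = 10 ** 9
        metric = [0] + [INF] * (self.n_states - 1)   # encoder starts in state 0
        history = []
        for t in range(n):
            r = received[2 * t:2 * t + 2]
            new_metric = [INF] * self.n_states
            back = [None] * self.n_states
            for s in range(self.n_states):
                if metric[s] >= INF:
                    continue
                for b in (0, 1):
                    ns = self._next_state(s, b)
                    dist = sum(x != y for x, y in zip(self._outputs(s, b), r))
                    if metric[s] + dist < new_metric[ns]:
                        new_metric[ns] = metric[s] + dist
                        back[ns] = (s, b)
            history.append(back)
            metric = new_metric
        # Trace back from the best final state.
        state, decoded = int(np.argmin(metric)), []
        for back in reversed(history):
            prev, bit = back[state]
            decoded.append(bit)
            state = prev
        return decoded[::-1]

codec = Rate12Viterbi("23", "35", K=5)   # assumed standard generators
msg = [int(b) for b in np.random.default_rng(3).integers(0, 2, 30)] + [0] * 4
code = codec.encode(msg)
code[5] ^= 1; code[20] ^= 1              # two channel errors
print(codec.decode(code) == msg)         # True (errors corrected)
```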

20.
A stochastic model is described for the decoder of an optimal burst-correcting convolutional code. From this model, an upper bound is obtained for p̄, the error probability per word after decoding.
