Similar Documents
20 similar documents found (search time: 31 ms)
1.
Redundancy of universal codes for a class of sources determines by how much the actual code length exceeds the optimal code length. In the minimax scenario, one designs the best code for the worst source within the class. Such minimax redundancy comes in two flavors: average minimax or worst-case minimax. We study the worst-case minimax redundancy of universal block codes for Markovian sources of any order. We prove that the maximal minimax redundancy for Markov sources of order r is asymptotically equal to (1/2) m^r (m-1) log_2 n + log_2 A_m^r - (ln ln m^{1/(m-1)})/ln m + o(1), where n is the length of a source sequence, m is the size of the alphabet, and A_m^r is an explicit constant (e.g., for a binary alphabet m=2 and Markov order r=1 the constant is A_2^1 = 16·G ≈ 14.655449504, where G is Catalan's constant). Unlike previous attempts, we view the redundancy problem as an asymptotic evaluation of certain sums over a set of matrices representing Markov types. The enumeration of Markov types is accomplished by reducing it to counting Eulerian paths in a multigraph. In particular, we propose exact and asymptotic formulas for the number of strings of a given Markov type. All of these findings are obtained by analytic and combinatorial tools of analysis of algorithms.
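The leading terms of the redundancy formula quoted above are easy to evaluate numerically. The sketch below is an illustration, not code from the paper; it plugs in the binary first-order Markov case, where the abstract gives the explicit constant A_2^1 = 16·G with G Catalan's constant:

```python
import math

# Catalan's constant G; the abstract's constant is A_2^1 = 16*G.
CATALAN = 0.915965594177219

def markov_minimax_redundancy(n, m=2, r=1, A=None):
    """Leading terms of the asymptotic worst-case minimax redundancy
    quoted in the abstract:
      (1/2) m^r (m-1) log2(n) + log2(A_m^r) - ln(ln m^(1/(m-1))) / ln m.
    Only the m=2, r=1 constant is given explicitly in the abstract."""
    if A is None:
        if (m, r) != (2, 1):
            raise ValueError("explicit constant only quoted for m=2, r=1")
        A = 16 * CATALAN  # A_2^1 ~ 14.6554495
    return (0.5 * m**r * (m - 1) * math.log2(n)
            + math.log2(A)
            - math.log(math.log(m ** (1.0 / (m - 1)))) / math.log(m))
```

Note the leading coefficient: for m=2, r=1 it is exactly 1 bit per doubling-squared of n, so quadrupling n adds 2 bits of redundancy while the constant terms cancel.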

2.
3.
A source matching approach is presented to the problem of finding minimax codes for classes of memoryless sources. The approach leads to a channel capacity problem, so that Blahut's algorithm can be used to find approximations to the minimax code. Closed-form solutions are presented for the class of monotonic sources and for a class of Bernoulli-like sources. For extensions of finite-alphabet memoryless sources, a modified Lynch-Davisson code has performance close to that of the minimax code. The exact solution to the source matching problem and the resulting codes are presented for the extensions of binary codes up to blocklength 31.
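Since the source matching problem reduces to a channel capacity computation, Blahut's algorithm applies directly. Below is a minimal sketch of the generic capacity iteration (a textbook version, not the paper's exact procedure):

```python
import numpy as np

def blahut_arimoto(W, iters=200):
    """Blahut's algorithm for the capacity of a discrete memoryless
    channel with transition matrix W (rows: inputs, cols: outputs).
    Returns (capacity in bits, capacity-achieving input distribution)."""
    p = np.full(W.shape[0], 1.0 / W.shape[0])   # start from uniform input
    for _ in range(iters):
        q = p @ W                               # induced output distribution
        # D(W_x || q) for each input x; W==0 terms contribute 0
        D = (W * np.log2(np.where(W > 0, W, 1.0) / q)).sum(axis=1)
        p = p * 2.0 ** D                        # exponential reweighting
        p /= p.sum()
    q = p @ W
    D = (W * np.log2(np.where(W > 0, W, 1.0) / q)).sum(axis=1)
    return float(p @ D), p
```

For a binary symmetric channel with crossover 0.1 this converges to 1 - h(0.1) ≈ 0.531 bits, with the uniform input distribution.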

4.
Although the existence of universal noiseless variable-rate codes for the class of discrete stationary ergodic sources has previously been established, very few practical universal encoding methods are available. Efficient implementable universal source coding techniques are discussed in this paper. Results are presented on source codes for which a small value of the maximum redundancy is achieved with a relatively short block length. A constructive proof of the existence of universal noiseless codes for discrete stationary sources is first presented. The proof is shown to provide a method for obtaining efficient universal noiseless variable-rate codes for various classes of sources. For memoryless sources, upper and lower bounds are obtained for the minimax redundancy as a function of the block length of the code. Several techniques for constructing universal noiseless source codes for memoryless sources are presented and their redundancies are compared with the bounds. Consideration is given to possible applications to data compression for certain nonstationary sources.

5.
Trellis source codes consist of a finite-state machine as the decoder and a trellis search algorithm, such as the Viterbi algorithm, as the encoder. The encoder experiments with a local copy of the decoder and determines the best channel path map, in the sense that it yields the smallest average distortion between the source sequence and the reproduction sequence given the codebook. In this paper we present a coding system and a design algorithm for predictive trellis coding. Simulation results are compared for trellis and predictive trellis codes designed for first-order autoregressive sources with Gaussian and Laplacian innovations and for sampled speech. On a random source which models speech, simulation results of the predictive and nonpredictive trellis codes designed by the generalized Lloyd algorithm are compared with those obtained by other researchers. Issues related to computational complexity, the effects of initial codebook selection, training sequence segmentation, search length, channel errors, and algorithm convergence are addressed.
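The encoder's search can be illustrated with a toy version of the scheme: a Viterbi search over the decoder's state machine, where the state is the last few channel bits and the codebook (assumed here purely for illustration, not taken from the paper) maps each state/bit pair to a reproduction level:

```python
def trellis_encode(source, codebook, memory=2):
    """Viterbi search over the decoder's finite-state machine.  States
    are the last `memory` channel bits; codebook[state][bit] is the
    reproduction level the decoder would emit.  Returns the channel-bit
    path minimizing total squared error, and that error."""
    S = 1 << memory
    INF = float("inf")
    cost = [0.0] + [INF] * (S - 1)        # start in the all-zero state
    back = []                             # per-step traceback pointers
    for x in source:
        new_cost, step = [INF] * S, [None] * S
        for s in range(S):
            if cost[s] == INF:
                continue
            for b in (0, 1):
                ns = ((s << 1) | b) & (S - 1)   # shift in channel bit b
                c = cost[s] + (x - codebook[s][b]) ** 2
                if c < new_cost[ns]:
                    new_cost[ns], step[ns] = c, (s, b)
        cost = new_cost
        back.append(step)
    s = min(range(S), key=cost.__getitem__)     # best terminal state
    total = cost[s]
    bits = []
    for step in reversed(back):                 # trace the path backwards
        s, b = step[s]
        bits.append(b)
    return bits[::-1], total
```

With the trivial codebook mapping bit 0 to level -1 and bit 1 to level +1 in every state, a ±1 source is reproduced exactly.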

6.
Average-case universal compression of independent and identically distributed (i.i.d.) sources is investigated, where the source alphabet is large, and may be sublinear in size or even larger than the compressed data sequence length n. In particular, the well-known results, including Rissanen's strongest-sense lower bound, for fixed-size alphabets are extended to the case where the alphabet size k is allowed to grow with n. It is shown that as long as k = o(n), instead of the coding cost in the fixed-size alphabet case of 0.5 log n extra code bits for each of the k-1 unknown probability parameters, the cost is now 0.5 log(n/k) code bits for each unknown parameter. This result is shown to be the lower bound in the minimax and maximin senses, as well as for almost every source in the class. Achievability of this bound is demonstrated with two-part codes based on quantization of the maximum-likelihood (ML) probability parameters, as well as by using the well-known Krichevsky-Trofimov (KT) low-complexity sequential probability estimates. For very large alphabets, k ≫ n, it is shown that an average minimax and maximin bound on the redundancy is essentially (to first order) log(k/n) bits per symbol. This bound is shown to be achievable both with two-part codes and with a sequential modification of the KT estimates. For k = Θ(n), the redundancy is Θ(1) bits per symbol. Finally, sequential codes are designed for coding sequences in which only m…

7.
8.
9.
To address the problems that existing RS code recognition algorithms must convert code symbols between different fields and have poor error tolerance, this paper proposes a recognition algorithm that works directly on soft-decision sequences. Starting from the definition of RS codes, the algorithm first gives an equivalent transformation of the RS parity-check relations from GF(2^m) to GF(2), avoiding the complex symbol conversion between fields. Second, it introduces the notion of average check conformity, a measure of how strongly a parity-check relation holds; based on its statistical properties and a minimax decision criterion, the algorithm traverses the candidate code lengths and the corresponding degree-m primitive polynomials, performing initial code-root check matching to identify the code length and the primitive polynomial. Finally, using the identified code length and primitive polynomial, it constructs GF(2^m) under that primitive polynomial and performs consecutive code-root matching decisions to identify the code generator polynomial. Simulation results show that the derived statistical properties of the average check conformity agree with practice and that the algorithm completes parameter recognition effectively at low SNR; at an SNR of 6 dB, the recognition rate for RS codes common in engineering exceeds 90%. Compared with existing algorithms, the proposed algorithm clearly outperforms hard-decision algorithms and gains more than 1 dB over traditional algorithms.

10.
The minimum number of codewords in a binary code with length n and covering radius R is denoted by K(n,R), and corresponding codes are called optimal. A code with M words is said to be balanced in a given coordinate if the numbers of 0's and 1's in this coordinate are each at least ⌊M/2⌋. A code is balanced if it is balanced in all coordinates. It has been conjectured that among optimal covering codes with given parameters there is at least one balanced code. By using a computational method for classifying covering codes, it is shown that there is no balanced code attaining K(9,1)=62.
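For tiny parameters the covering radius of a binary code can be computed by exhaustive search; the sketch below (illustrative only, far from the classification method used in the paper) checks, for example, that the length-3 repetition code {000, 111} has covering radius 1:

```python
def covering_radius(code, n):
    """Covering radius of a binary code: the maximum, over all 2^n words,
    of the Hamming distance to the nearest codeword.  Codewords are given
    as integers; exhaustive, so only usable for very small n."""
    def dist(a, b):
        return bin(a ^ b).count("1")   # Hamming distance via XOR popcount
    return max(min(dist(w, c) for c in code) for w in range(1 << n))
```

Every length-3 word differs from 000 or 111 in at most one position, so the two-word repetition code already attains K(3,1)=2.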

11.
A class of pseudocyclic multilevel single-error-correcting codes is described, and an example of a 5-level code is given. Some of these codes have parameters, such as block length and number of parity-check digits, which are unrealisable with strictly cyclic codes. Pseudocyclic codes share the mathematical structure of cyclic codes, and thus are relatively simple to implement in practical form.

12.
Recent years have seen a resurgence of interest in the redundancy of lossless coding. The redundancy (regret) of universal fixed-to-variable length coding for a class of sources determines by how much the actual code length exceeds the optimal (ideal over the class) code length. In a minimax scenario one finds the best code for the worst source either in the worst case (also called maximal minimax) or on average. We first study the worst-case minimax redundancy over a class of stationary ergodic sources and replace Shtarkov's bound by an exact formula. Among other results, we prove that a generalized Shannon code minimizes the worst-case redundancy, derive its redundancy asymptotically, and establish some general properties. This allows us to obtain precise redundancy for memoryless, Markov, and renewal sources. For example, we present the exact constant of the redundancy for memoryless and Markov sources by showing that the integer nature of coding contributes log(log m/(m-1))/log m + o(1), where m is the size of the alphabet. Then we deal with the average minimax redundancy and regret. Our approach here is orthogonal to most recent research in this area, since we aspire to show that asymptotically the average minimax redundancy is equivalent to the worst-case minimax redundancy for some classes of sources. After formulating some general bounds relating these two redundancies, we prove our assertion for memoryless and Markov sources. Nevertheless, we provide evidence that the maximal redundancy of renewal processes does not have the same leading term as the average minimax redundancy (however, our general results show that maximal and average regrets are asymptotically equivalent).
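For the memoryless (Bernoulli) class, the worst-case minimax redundancy equals the logarithm of Shtarkov's sum, which can be evaluated exactly for small n. This sketch is standard material, not the paper's exact formulas:

```python
import math
from math import comb

def shtarkov_redundancy(n):
    """Worst-case minimax redundancy (regret) of the binary Bernoulli
    class: log2 of the Shtarkov sum
        sum_k C(n,k) (k/n)^k ((n-k)/n)^(n-k),
    i.e. the normalizer of the normalized maximum-likelihood code.
    (Python evaluates 0.0**0 as 1.0, which is the convention needed.)"""
    s = sum(comb(n, k) * (k / n) ** k * ((n - k) / n) ** (n - k)
            for k in range(n + 1))
    return math.log2(s)
```

Asymptotically this grows like (1/2) log2(nπ/2): for n=100 the exact value is about 3.72 bits, close to the first-order approximation 3.65.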

13.
The authors propose a class of spherical codes which can be easily decoded by an efficient iterative maximum likelihood decoding algorithm. A necessary and sufficient condition for a spherical code to be iteratively maximum likelihood decodable is formulated. A systematic construction method for such codes based on shrinking of Voronoi corners is analyzed. The base code used for construction is the binary maximal-length sequence code. The second-level construction is described. Computer simulation results for selected codes constructed by the proposed method are given.

14.
A universal variable-to-fixed length algorithm for binary memoryless sources which converges to the entropy of the source at the optimal rate is known. We study the problem of universal variable-to-fixed length coding for the class of Markov sources with finite alphabets. We give an upper bound on the performance of the code for large dictionary sizes and show that the code is optimal in the sense that no codes exist with better asymptotic performance. The optimal redundancy is shown to be (H log log M)/log M, where H is the entropy rate of the source and M is the code size. This result is analogous to Rissanen's (1984) result for fixed-to-variable length codes. We investigate the performance of a variable-to-fixed coding method which does not need to store the dictionaries, either at the coder or the decoder. We also consider the performance of both these source codes on individual sequences. For individual sequences, we bound the performance in terms of the best code length achievable by a class of coders. All the codes that we consider are prefix-free and complete.
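The classical construction behind variable-to-fixed length coding is the Tunstall dictionary: repeatedly split the most probable leaf of the parsing tree until the dictionary reaches the code size M. A minimal sketch for a memoryless source (illustrative; the paper's Markov-source construction is more involved):

```python
import heapq

def tunstall(probs, M):
    """Tunstall dictionary for a memoryless source with symbol
    probabilities `probs`: expand the most probable leaf while the tree
    still fits within M leaves.  Returns a list of (word, probability)
    pairs; words are tuples of symbol indices."""
    k = len(probs)
    heap = [(-1.0, ())]            # max-heap via negated probabilities
    leaves = 1
    while leaves + (k - 1) <= M:   # each split replaces 1 leaf by k
        p, w = heapq.heappop(heap)
        for a in range(k):
            heapq.heappush(heap, (p * probs[a], w + (a,)))
        leaves += k - 1
    return [(w, -p) for p, w in heap]
```

The leaf probabilities always sum to 1, since the leaves partition the set of infinite source sequences; for probs = (0.7, 0.3) and M = 3 the dictionary is {00, 01, 1}.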

15.
Xiao Yang, Huang Xi, Wang Kaiyao, Fan Jun. Journal of Signal Processing (《信号处理》), 2010, 26(7): 1050-1054
Although LDPC codes have been adopted as the channel coding of the GB20600 standard, the GB20600 LDPC codes do not have the best bit-error-rate performance compared with other LDPC codes of the same code length and rate. The GB20600 LDPC codes have a block length of 7493, which raises encoding-complexity issues, yet the standard does not adopt a fast algorithm based on the parity-check matrix; this makes hardware implementation of GB20600 LDPC encoders and decoders rather difficult. Within the existing GB20600 LDPC design framework, this paper modifies the parity-check matrix of the GB20600 LDPC codes and, on that basis, proposes an efficient fast iterative algorithm for LDPC codes that makes encoder and decoder hardware easy to implement. The improved LDPC encoding algorithm has low implementation complexity. Simulation results show that the packet-error-rate performance of the improved LDPC codes is better than that of the LDPC codes in the current GB20600 standard.

16.
To decode a long block code with a large minimum distance by maximum likelihood decoding is practically impossible because the decoding complexity is simply enormous. However, if a code can be decomposed into constituent codes with smaller dimensions and simpler structure, it is possible to devise a practical and yet efficient scheme to decode the code. This paper investigates a class of decomposable codes and their distance and structural properties. It is shown that this class includes several classes of well-known and efficient codes as subclasses. Several methods for constructing decomposable codes or decomposing codes are presented. A two-stage (soft-decision or hard-decision) decoding scheme for decomposable codes, their translates, or unions of translates is devised, and its error performance is analyzed for an AWGN channel. The two-stage soft-decision decoding is suboptimum. Error performances of some specific decomposable codes under the proposed two-stage soft-decision decoding are evaluated. It is shown that the proposed two-stage suboptimum decoding scheme provides an excellent trade-off between error performance and decoding complexity for codes of moderate and long block lengths.

17.
A class of binary quasi-cyclic burst error-correcting codes based upon product codes is studied. An expression for the maximum burst error-correcting capability of each code in the class is given. In certain cases, codes exist in the class which have the same block length and number of check bits as the Gilbert codes, but correct longer bursts of errors than Gilbert codes. By shortening the codes, it is possible to design codes which achieve the Reiger bound.
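The Reiger bound mentioned above says that a code correcting all bursts of length b must spend at least 2b check bits, i.e. b ≤ (n-k)/2, with equality for burst-optimal codes. A trivial check of this arithmetic (illustrative parameters, not a code from the paper):

```python
def meets_reiger_bound(n, k, b):
    """Reiger bound: an (n, k) code correcting all bursts of length b
    needs at least 2b check bits, i.e. b <= (n - k) / 2.  Returns True
    if the claimed burst capability b is admissible."""
    return b <= (n - k) // 2
```

For example, a hypothetical (15, 9) code has 6 check bits, so b = 3 is the largest burst length it could possibly correct.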

18.
The common practice for achieving unequal error protection (UEP) in scalable multimedia communication systems is to design rate-compatible punctured channel codes before computing the UEP rate assignments. This paper proposes a new approach to designing powerful irregular repeat accumulate (IRA) codes that are optimized for the multimedia source and to exploiting the inherent irregularity in IRA codes for UEP. Using the end-to-end distortion due to the first error bit in channel decoding as the cost function, which is readily given by the operational distortion-rate function of embedded source codes, we incorporate this cost function into the channel code design process via density evolution and obtain IRA codes that minimize the average cost function instead of the usual probability of error. Because the resulting IRA codes have inherent UEP capabilities due to irregularity, the new IRA code design effectively integrates channel code optimization and UEP rate assignments, resulting in source-optimized channel coding or joint source-channel coding. We simulate our source-optimized IRA codes for transporting SPIHT-coded images over a binary symmetric channel with crossover probability p. When p = 0.03 and the channel code length is long (e.g., with one codeword for the whole 512 x 512 image), we are able to operate at only 9.38% away from the channel capacity with code length 132380 bits, achieving the best published results in terms of average peak signal-to-noise ratio (PSNR). Compared to conventional IRA code design (that minimizes the probability of error) with the same code rate, the performance gain in average PSNR from using our proposed source-optimized IRA code design is 0.8759 dB when p = 0.1 and the code length is 12800 bits. As predicted by Shannon's separation principle, we observe that this performance gain diminishes as the code length increases.

19.
A possibility of estimating the finite-length performance of sparse-graph code ensembles offers two opportunities: to compare different codes of the same length in a context very close to real, practical applications, and to perform parameter optimization for a given code length [2]. We need a finite-length approximation that is valid for any code ensemble. The scaling approach seems to be a tool general enough to provide such an approximation. However, the analytical derivation of the parameters of the scaling approximation has been successful only for LDPC codes [1]; despite several attempts [25], [20], no such result has been proposed for other code ensembles. In this paper, we focus on the finite-length performance of turbo-like codes, applying the scaling approach to this case. In particular, assuming transmission over the binary erasure channel, we conjecture the scaling law and derive its scaling parameter. As examples, we present performance estimates for Repeat-Accumulate codes [11], parallel turbo codes [8], and TLDPC codes [5], in all cases matching the numerical results well.

20.
This paper is concerned with two problems in the theory of source coding subject to a maximum average distortion constraint. The first problem involves the coding of a nonergodic discrete time source and the second involves coding for a class of ergodic discrete time sources. Coding theorems are given for both of these situations for very general source alphabets. The codes that are obtained here will, in general, be variable length codes so that average code rate and average distortion are the measures of performance.


Copyright © Beijing Qinyun Technology Development Co., Ltd.  京ICP备09084417号