Similar Articles
20 similar articles found.
1.
2.
3.
4.
Two results on the coding of stationary nonergodic sources are presented. The first is a source coding theorem stating that there exist variable-rate codes with performance arbitrarily close to the rate-distortion function of the stationary nonergodic source. The second is a converse information transmission theorem. It is shown that the distortion which results when the source is transmitted across a channel with capacity C is no less than the least distortion achievable by fixed-rate codes with rate C.
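In symbols, and with notation introduced here only for readability (the operational distortion-rate function δ(·) of fixed-rate codes is not taken from the paper), the converse can be read as

\[
\mathbb{E}\, d\big(X^n, \hat X^n\big) \;\ge\; \delta(C),
\]

where the left-hand side is the end-to-end distortion incurred when the source is sent over any channel of capacity C, and δ(R) denotes the least distortion achievable by fixed-rate source codes of rate R.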

5.
6.
Consider approximate (lossy) matching of a source string ~P, with a random codebook generated from reproduction distribution Q, at a specified distortion d. Previous work determined the minimum coding rate R1=R(P, Q, d) for this setting. We observe that for a large word length and with high probability, the matching codeword is typical with a distribution Q1 which is different from Q. If a new random codebook is generated ~Q1, then the source string will favor codewords which are typical with a new distribution Q2, resulting in a minimum coding rate R2=R(P, Q1, d), and so on. We show that the sequences of distributions Q1, Q2,... and rates R1, R2,..., generated by this procedure, converge to an optimum reproduction distribution Q* and the rate-distortion function R(P, d), respectively. We also derive a fixed rate-distortion slope version of this natural type selection process. In the latter case, an iteration of the process stochastically simulates an iteration of the Blahut-Arimoto (1972) algorithm for rate-distortion function computation (without recourse to prior knowledge of the underlying source distribution). To strengthen these limit statements, we also characterize the steady-state error of these procedures when iterating at a finite string length. Implications of the main results provide fresh insights into the workings of lossy variants of the Lempel-Ziv algorithm for adaptive compression.
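For reference, the deterministic iteration that one step of the natural type selection process is said to simulate is the classical fixed-slope Blahut-Arimoto update. The sketch below is a minimal illustration of that update, not the paper's stochastic procedure; the toy binary source and all names are assumptions chosen for the example.

```python
# Minimal sketch of the fixed-slope Blahut-Arimoto iteration for the
# rate-distortion function.  The toy source and parameter values are
# illustrative assumptions, not taken from the paper.
import numpy as np

def blahut_arimoto(p, dist, s, n_iter=200):
    """p: source pmf; dist[i, j]: distortion d(x_i, xhat_j); s > 0: Lagrange slope."""
    q = np.full(dist.shape[1], 1.0 / dist.shape[1])     # reproduction marginal Q
    for _ in range(n_iter):
        # optimal test channel for the current Q (the "codebook" update)
        w = q * np.exp(-s * dist)                        # shape (|X|, |Xhat|)
        w /= w.sum(axis=1, keepdims=True)
        # induced reproduction marginal Q'(xhat) = sum_x p(x) w(xhat | x)
        q = p @ w
    D = np.sum(p[:, None] * w * dist)                    # expected distortion
    R = np.sum(p[:, None] * w * np.log(w / q))           # mutual information (nats)
    return R, D, q

# toy binary source with Hamming distortion (illustrative only)
p = np.array([0.6, 0.4])
dist = np.array([[0.0, 1.0], [1.0, 0.0]])
print(blahut_arimoto(p, dist, s=3.0))
```

Each pass recomputes the optimal test channel for the current reproduction marginal Q and then replaces Q by the induced marginal, mirroring the Q → Q1 → Q2 → ... sequence described above.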

7.
We give a new variable-rate and variable-distortion coding scheme and show a coding theorem for time-discrete stationary sources with abstract alphabets and a single-letter fidelity criterion, without assuming reference letters. The approach used to prove the coding theorem is then specialized to maximum-distortion coding and to fixed-rate coding, and the corresponding coding theorems are proved for stationary sources. We also consider the possibility of extending the results to time-continuous sources. We still need a reference letter for fixed-rate coding.

8.
For stationary discrete-time Gaussian sources and the squared-error distortion measure, a trellis source code is constructed. The encoder consists of a Karhunen-Loeve transform on the source output followed by a search on a trellis-structured code, where the decoder is a time-variant nonlinear filter. The corresponding coding theorem is proved using the random coding argument. The proof technique follows that of Viterbi and Omura, who proved the trellis coding theorem for memoryless sources. The resultant coding scheme is implementable and applicable at any nonzero rate to a stationary Gaussian source with a bounded and continuous power spectrum. Therefore, for stationary sources, it is more general than Berger's tree coding scheme, which is restricted to autoregressive Gaussian sources in a region of high rate (low distortion).
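The trellis search performed by such an encoder is a dynamic-programming (Viterbi-style) minimization of accumulated squared error over trellis paths. The sketch below assumes a generic shift-register trellis and a branch-output table standing in for the paper's time-variant nonlinear decoder filter; both are illustrative assumptions.

```python
# Sketch: Viterbi (dynamic-programming) search for the minimum squared-error
# path through a trellis code.  The shift-register next-state rule and the
# branch-output table are illustrative assumptions.
import numpy as np

def viterbi_encode(x, branch_out, n_states, n_branches):
    """x: real source sequence; branch_out[t][s][b]: reproduction letter on
    branch b leaving state s at time t; next state taken as (s*n_branches+b) % n_states."""
    T = len(x)
    cost = np.full(n_states, np.inf); cost[0] = 0.0
    back = np.zeros((T, n_states), dtype=int)          # packed (prev_state, branch) choices
    for t in range(T):
        new_cost = np.full(n_states, np.inf)
        for s in range(n_states):
            if not np.isfinite(cost[s]):
                continue
            for b in range(n_branches):
                ns = (s * n_branches + b) % n_states
                c = cost[s] + (x[t] - branch_out[t][s][b]) ** 2
                if c < new_cost[ns]:
                    new_cost[ns] = c
                    back[t, ns] = s * n_branches + b   # pack (s, b) into one int
        cost = new_cost
    # trace back from the best terminal state to recover the branch decisions
    s = int(np.argmin(cost))
    decisions = []
    for t in range(T - 1, -1, -1):
        sb = back[t, s]
        decisions.append(sb % n_branches)
        s = sb // n_branches
    return cost.min(), decisions[::-1]
```

The search cost grows linearly in the block length and in the number of state-branch pairs.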

9.
A new tree code is introduced for discrete-time stationary Gaussian sources with bounded, integrable power spectra and the squared-error distortion measure. The codewords in the tree are reconstructions of Karhunen-Loève transforms of the source words. The branching factor and the number of code letters per branch may vary with level in the tree. A theorem that guarantees the existence of an optimal code for any code rate using such a tree is proved. The proof uses the random coding argument in conjunction with a theorem on survival of a branching process with random environment. A suboptimal but computationally affordable realization of the theorem's coding technique was used for encoding simulations for six autoregressive sources at rates of 1.0, 0.50, 0.25, and 0.10 bits per source symbol. The average distortion results were generally within 1 dB of the distortion-rate bound but varied widely depending on the source and rate. The results were compared with those for transform quantization simulations for the same sources and rates. The tree code always performed better, but only by an average of 0.44 dB over all sources and rates. Longer source blocks and more intensive search would certainly improve the performance of the tree codes, but at the expense of extra computation and storage.
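One common way to realize a "suboptimal but computationally affordable" tree search of this kind is a breadth-limited (M-algorithm style) search that keeps only the best partial paths at each level. The sketch below is such an illustration; the codeword-generating function and the path budget M are assumptions, not the paper's exact procedure.

```python
# Sketch: breadth-limited (M-algorithm style) tree search for a tree code with
# level-dependent branching factor and letters per branch.  The codeword()
# callback stands in for the paper's Karhunen-Loeve-domain tree; all names are
# illustrative assumptions.
import numpy as np

def m_algorithm_encode(x, branching, letters_per_branch, codeword, M=16):
    """x: source word; branching[l], letters_per_branch[l]: tree shape at level l;
    codeword(path, l): reproduction letters on the branch reached by `path` at level l."""
    paths = [((), 0.0, 0)]                 # (branch indices, accumulated distortion, position in x)
    for l, (b, k) in enumerate(zip(branching, letters_per_branch)):
        extended = []
        for path, d, pos in paths:
            seg = np.asarray(x[pos:pos + k])
            for j in range(b):
                rep = np.asarray(codeword(path + (j,), l))   # k reproduction letters
                extended.append((path + (j,), d + float(np.sum((seg - rep) ** 2)), pos + k))
        extended.sort(key=lambda t: t[1])
        paths = extended[:M]               # keep only the M best partial paths
    return paths[0]                        # (best path, its distortion, source letters consumed)
```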

10.
For a stationary ergodic source, the source coding theorem and its converse imply that the optimal performance theoretically achievable by a fixed-rate or variable-rate block quantizer is equal to the distortion-rate function, which is defined as the infimum of an expected distortion subject to a mutual information constraint. For a stationary nonergodic source, however, the distortion-rate function cannot in general be achieved arbitrarily closely by a fixed-rate block code. We show, though, that for any stationary nonergodic source with a Polish alphabet, the distortion-rate function can be achieved arbitrarily closely by a variable-rate block code. We also show that the distortion-rate function of a stationary nonergodic source has a decomposition as the average of the distortion-rate functions of the source's stationary ergodic components, where the average is taken over points on the component distortion-rate functions having the same slope. These results extend previously known results for finite alphabets.
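Written out, and hedged as a restatement rather than a quotation, the decomposition described above takes the following form, with μ the mixing measure of the ergodic decomposition and θ indexing the stationary ergodic components:

\[
D(R) \;=\; \int D_\theta(R_\theta)\,\mu(d\theta), \qquad R \;=\; \int R_\theta\,\mu(d\theta),
\]

where the operating point (R_θ, D_θ(R_θ)) on each component curve is chosen so that all components share a common slope, i.e. D_θ'(R_θ) = λ for μ-almost every θ.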

11.
12.
Sliding-block codes are nonblock coding structures consisting of discrete-time, time-invariant, possibly nonlinear filters. They are equivalent to time-invariant trellis codes. The coupling of Forney's rigorization of Shannon's random-coding/typical-sequence approach to block coding theorems with the strong Rohlin-Kakutani Theorem of ergodic theory is used to obtain a sliding-block coding theorem for ergodic sources and discrete memoryless noisy channels. Combining this result with a theorem on sliding-block source coding with a fidelity criterion yields a sliding-block information transmission theorem. Thus, the basic existence theorems of information theory hold for stationary nonblock structures, as well as for block codes.
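Structurally, a sliding-block code is just a fixed window map applied at every time shift; the minimal sketch below illustrates that structure, with the window widths and the particular map chosen only as examples.

```python
# Minimal sketch of the sliding-block (stationary, nonblock) coding structure:
# a single time-invariant map f applied to a sliding window of the input.
# The window widths and the majority-vote map are illustrative assumptions.
def sliding_block_code(x, f, memory, anticipation):
    """Encode x with the time-invariant map y[n] = f(x[n-memory], ..., x[n+anticipation])."""
    n0, n1 = memory, len(x) - anticipation
    return [f(tuple(x[n - memory:n + anticipation + 1])) for n in range(n0, n1)]

# toy example: a 3-window majority map on a binary sequence (illustrative)
y = sliding_block_code([0, 1, 1, 0, 1, 1, 1, 0], lambda w: int(sum(w) >= 2), 1, 1)
```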

13.
For arbitrary alphabets and single-letter fidelity criteria, two theorems are given which allow any fixed-rate or variable-rate source coding theorem for block codes to be extended to sliding-block codes. Applications are given to universal coding and to the coding of a stationary nonergodic source.

14.
We characterize the best achievable performance of lossy compression algorithms operating on arbitrary random sources, and with respect to general distortion measures. Direct and converse coding theorems are given for variable-rate codes operating at a fixed distortion level, emphasizing: (a) nonasymptotic results, (b) optimal or near-optimal redundancy bounds, and (c) results with probability one. This development is based in part on the observation that there is a precise correspondence between compression algorithms and probability measures on the reproduction alphabet. This is analogous to the Kraft inequality in lossless data compression. In the case of stationary ergodic sources our results reduce to the classical coding theorems. As an application of these general results, we examine the performance of codes based on mixture codebooks for discrete memoryless sources. A mixture codebook (or Bayesian codebook) is a random codebook generated from a mixture over some class of reproduction distributions. We demonstrate the existence of universal mixture codebooks, and show that it is possible to universally encode memoryless sources with redundancy of approximately (d/2) log n bits, where d is the dimension of the simplex of probability distributions on the reproduction alphabet.
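For orientation, the lossless analogue referred to here is the standard Kraft correspondence (a well-known fact, restated here rather than quoted from the paper): every uniquely decodable code with lengths ℓ(x) induces a sub-probability measure Q(x) = 2^{-ℓ(x)}, since

\[
\sum_x 2^{-\ell(x)} \;\le\; 1,
\]

and conversely every probability mass function Q admits a prefix code with lengths ℓ_Q(x) = ⌈-log_2 Q(x)⌉, within one bit of -log_2 Q(x). The paper's observation is that an analogous correspondence holds between lossy compression algorithms and probability measures on the reproduction alphabet.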

15.
16.
17.
In this correspondence, we are interested in the error exponent of fixed-length lossy source coding, where the sources are general sources, in the sense of Han and Verdú, including all nonstationary and/or nonergodic sources. The aim of the correspondence is to establish a unified formula for the minimum (D, r)-achievable rate, which is the minimum achievable coding rate under asymptotic constraints of the form ε_n(D) ~ e^{-nr} (n → ∞), where r is the prescribed error exponent, ε_n(D) is the probability of the distortion exceeding a level D, and n is the code length. For the stationary memoryless source with finite alphabet, Marton (1974) obtained a formula for the reliability function which is expressed in terms of the minimum (D, r)-achievable rate. Recently, Ihara and Kubo proved that Marton's formula remains true for the stationary memoryless Gaussian source under a mean-squared fidelity criterion. In this correspondence, it is shown that a formula similar to Marton's formula holds for general sources. The error exponent of correct decoding is also investigated, and a formula for the minimum achievable rate of correct decoding in lossy coding is established.
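As background for the Marton result cited here, the standard finite-alphabet memoryless formula (restated from the literature, and hedged accordingly) expresses the exponent of the excess-distortion probability at rate R and distortion level D as

\[
F(R, D) \;=\; \min_{Q \,:\, R(Q, D) \,>\, R} D(Q \,\|\, P),
\]

where R(Q, D) is the rate-distortion function of the memoryless source Q and D(Q‖P) is the relative entropy; the correspondence above establishes a formula of this type for general sources.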

18.
The redundancy problem of lossy source coding with abstract source and reproduction alphabets is considered. For coding at a fixed rate level, it is shown that for any fixed rate R>0 and any memoryless abstract alphabet source P satisfying some mild conditions, there exists a sequence {C_n}_{n=1}^∞ of block codes at rate R such that the distortion redundancy of C_n (defined as the difference between the performance of C_n and the distortion-rate function d(P, R) of P) is upper-bounded by |∂d(P,R)/∂R| (ln n)/(2n) + o(ln n/n). For coding at a fixed distortion level, it is demonstrated that for any d>0 and any memoryless abstract alphabet source P satisfying some mild conditions, there exists a sequence {C_n}_{n=1}^∞ of block codes at the fixed distortion d such that the rate redundancy of C_n (defined as the difference between the performance of C_n and the rate-distortion function R(P, d) of P) is upper-bounded by (7 ln n)/(6n) + o(ln n/n). These results strengthen Berger's (1968, 1971) traditional abstract-alphabet source coding theorem, and extend the positive redundancy results of Zhang, Yang, and Wei (see ibid., vol.43, no.1, p.71-91, 1997, and ibid., vol.42, p.803-21, 1996) on lossy source coding with finite alphabets and the redundancy result of Wyner (see ibid., vol.43, p.1452-64, 1997) on block coding of memoryless Gaussian sources.
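Cleaned up, and with d_n(C_n) and r_n(C_n) introduced here only as shorthand for the distortion and rate performance of C_n (notation not taken from the paper), the two bounds read:

\[
d_n(C_n) - d(P, R) \;\le\; \left|\frac{\partial d(P,R)}{\partial R}\right| \frac{\ln n}{2n} \;+\; o\!\left(\frac{\ln n}{n}\right)
\quad\text{(fixed rate } R\text{)},
\]

\[
r_n(C_n) - R(P, d) \;\le\; \frac{7\ln n}{6n} \;+\; o\!\left(\frac{\ln n}{n}\right)
\quad\text{(fixed distortion } d\text{)}.
\]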

19.
An expression is obtained for the optimum distortion theoretically attainable when an information source with finite alphabet is encoded at a fixed rate with respect to a single-letter fidelity criterion. The expression is demonstrated by means of an appropriate coding theorem and converse. This new result generalizes the coding theorem of Shannon for stationary ergodic sources, the coding theorem of Gray-Davisson for stationary nonergodic sources, that of Gray-Saadat for asymptotically mean stationary sources, and that of Ziv for an individual sequence.

20.
A two-stage code is a block code in which each block of data is coded in two stages: the first stage codes the identity of a block code among a collection of codes, and the second stage codes the data using the identified code. The collection of codes may be noiseless codes, fixed-rate quantizers, or variable-rate quantizers. We take a vector quantization approach to two-stage coding, in which the first-stage code can be regarded as a vector quantizer that “quantizes” the input data of length n to one of a fixed collection of block codes. We apply the generalized Lloyd algorithm to the first-stage quantizer, using induced measures of rate and distortion, to design locally optimal two-stage codes. On a source of medical images, two-stage variable-rate vector quantizers designed in this way outperform standard (one-stage) fixed-rate vector quantizers by over 9 dB. The tail of the operational distortion-rate function of the first-stage quantizer determines the optimal rate of convergence of the redundancy of a universal sequence of two-stage codes. We show that there exist two-stage universal noiseless codes, fixed-rate quantizers, and variable-rate quantizers whose per-letter rate and distortion redundancies converge to zero as (k/2) n^{-1} log n, when the universe of sources has finite dimension k. This extends the achievability part of Rissanen's theorem from universal noiseless codes to universal quantizers. Further, we show that the redundancies converge as O(n^{-1}) when the universe of sources is countable, and as O(n^{-1+ε}) when the universe of sources is infinite-dimensional, under appropriate conditions.
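The generalized Lloyd step applied to the first-stage quantizer alternates a minimum-induced-cost assignment of training blocks to second-stage codes with a redesign of each code on its assigned blocks. The sketch below is one way such a design loop could be organized; the SecondStageCode interface and the Lagrangian cost convention are assumptions for illustration, not the paper's exact formulation.

```python
# Sketch of a generalized Lloyd iteration over a collection of second-stage
# codes, using induced per-block rate/distortion costs.  The interface below
# is an illustrative assumption about how such a design could be organized.
from typing import List, Sequence

class SecondStageCode:
    def cost(self, block) -> float:
        """Induced Lagrangian cost (distortion plus a rate penalty) of coding `block`."""
        raise NotImplementedError
    def refit(self, blocks: Sequence) -> None:
        """Redesign this code (e.g. rerun its own quantizer design) on `blocks`."""
        raise NotImplementedError

def design_two_stage(codes: List[SecondStageCode], training_blocks: Sequence, n_iter: int = 10):
    for _ in range(n_iter):
        # Step 1 ("nearest neighbor"): send each block to the code of least induced cost.
        buckets = [[] for _ in codes]
        for blk in training_blocks:
            best = min(range(len(codes)), key=lambda i: codes[i].cost(blk))
            buckets[best].append(blk)
        # Step 2 ("centroid"): redesign each second-stage code on its assigned blocks.
        for code, blks in zip(codes, buckets):
            if blks:
                code.refit(blks)
    return codes
```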
