Similar Documents
A total of 20 similar documents were found (search time: 413 ms).
1.
2.
We consider encoding of a source with pre-specified second-order statistics, but otherwise arbitrary, by entropy-coded dithered (lattice) quantization (ECDQ) incorporating linear pre- and post-filters. In the design and analysis of this scheme we utilize the equivalent additive-noise channel model of the ECDQ. For Gaussian sources and a squared-error distortion measure, the coding performance of the pre/post-filtered ECDQ approaches the rate-distortion function as the dimension of the (optimal) lattice quantizer becomes large; in fact, in this case the proposed coding scheme simulates the optimal forward channel realization of the rate-distortion function. For non-Gaussian sources and finite-dimensional lattice quantizers, the coding rate exceeds the rate-distortion function by at most the sum of two terms: the "information divergence of the source from Gaussianity" and the "information divergence of the quantization noise from Gaussianity". Additional bounds on the excess rate of the scheme over the rate-distortion function are also provided.
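A minimal one-dimensional sketch of the ECDQ idea (a scalar uniform quantizer standing in for the lattice, pre/post filters omitted; the step size delta and the Gaussian test source are assumptions for illustration). By the standard subtractive-dither argument, the effective quantization error is uniform on (-delta/2, delta/2] and independent of the source, which is exactly the additive-noise channel model the abstract refers to:

```python
import numpy as np

rng = np.random.default_rng(0)
delta = 0.5                                   # quantizer step (assumed)
x = rng.normal(size=10_000)                   # Gaussian source samples

# Encoder: add dither shared with the decoder, quantize to the lattice
# delta*Z; in a real ECDQ the indices would then be entropy-coded.
u = rng.uniform(-delta / 2, delta / 2, size=x.shape)
idx = np.round((x + u) / delta).astype(int)   # transmitted indices

# Decoder: reconstruct the lattice point and subtract the same dither.
x_hat = idx * delta - u

err = x_hat - x
# Subtractive dither makes err uniform on (-delta/2, delta/2], so
# |err| <= delta/2 and E[err^2] is close to delta**2 / 12.
print(np.mean(err**2))
```

The empirical error variance matches delta**2/12 regardless of the source distribution, which is what makes the additive-noise model exact rather than an approximation.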

3.
A quantum distortion operator is introduced based on canonical operators. As the lower bound of the quantum information rate distortion, the entanglement information rate distortion is achieved by a Gaussian map for a Gaussian source. The entanglement information rate-distortion function is calculated for a one-mode Gaussian source; it is achieved by a nontrivial unitary transformation. The rate distortion is achievable at the zero-distortion point. In contrast to distortion defined via fidelity, our definition of distortion makes it possible to calculate the entanglement information rate-distortion function for a Gaussian source.

4.
We formulate quantum rate-distortion theory in the most general setting where classical side information is included in the tradeoff. Using a natural distortion measure based on entanglement fidelity and specializing to the case of an unrestricted classical side channel, we find the exact quantum rate-distortion function for a source of isotropic qubits. An upper bound we believe to be exact is found in the case of biased sources. We establish that in this scenario optimal rate-distortion codes produce no entropy exchange with the environment of any individual qubit.

5.
Multiterminal source coding refers to separate encoding and joint decoding of multiple correlated sources. Joint decoding requires all the messages to be decoded simultaneously, which is exponentially more complex than a sequence of single-message decodings. Inspired by previous work on successive coding, we apply successive Wyner-Ziv coding, which is inherently a low-complexity approach to obtaining a prescribed distortion, to the two-terminal source coding scheme. First, we consider the 1-helper problem, where one source provides partial side information to the decoder to help the reconstruction of the main source. Our results show that the successive coding strategy is optimal in the sense of achieving the rate-distortion function. By developing connections between the source encoding and data fusion steps, it is shown that the whole rate-distortion region for the 2-terminal source coding problem is achievable using the successive coding strategy. Comparing the performance of sequential coding with that of successive coding, we show that there is no sum-rate loss when the side information is not available at the encoder. This result is of special interest in applications such as video coding, where there are processing and storage constraints at the encoder. Finally, we provide an achievable rate-distortion region for the m-terminal source coding problem.
M. Reza Soleymani

6.
Consider a vector quantizer that is equipped with N bits of side information giving an arbitrary representation of the statistics of the input source. We investigate the minimum value of N such that the rate-distortion performance of this quantizer is essentially the same as that of the optimum quantizer for the given source.

7.
Constraining the reproduction alphabet to be of small size in encoding continuous-amplitude memoryless sources has been shown to give very small degradation from the ideal performance of the rate-distortion bound. The optimum fixed-size reproduction alphabet and its individual letter probabilities are required in order to encode the source with performance approaching that of theory. These can be found through a somewhat lengthy, but convergent, algorithm. Given reasonably chosen fixed sets of reproduction letters and/or their probabilities, we define new rate-distortion functions which are coding bounds under these alphabet constraints. We calculate these functions for the Gaussian and Laplacian sources and the squared-error distortion measure and find that performance near the rate-distortion bound is achievable using a reproduction alphabet consisting of a small number of optimum quantizer levels.

8.
We consider "bit stealing" scenarios where the rate of a source code must be reduced without prior planning. We first investigate the efficiency of source requantization to reduce rate, which we term successive degradation. We focus on finite-alphabet sources with arbitrary distortion measures as well as the Gaussian-quadratic and high-resolution scenarios. We show an achievable rate-distortion tradeoff and prove that this is the best guaranteeable tradeoff for any good source code. This tradeoff is in general different from the rate-distortion tradeoff with successive refinement, where there is prior planning. But, we show that with quadratic distortion measures, for all sources with finite differential entropy and at least one finite moment, the gap is at most 1/2 bit or 3 dB in the high-resolution limit. In the Gaussian-quadratic case, the gap is at most 1/2 bit for all resolutions. We further consider bit stealing in the form of information embedding, whereby an embedder acts on a quantized source and produces an output at the same rate and in the original source codebook. We develop achievable distortion-rate tradeoffs. Two cases are considered, corresponding to whether or not the source decoder is informed of the embedding rate. In the Gaussian-quadratic case, we show the informed decoder need only augment the regular decoder with simple post-reconstruction distortion compensation in the form of linear scaling for the resulting system to be as efficient as bit stealing via successive degradation. Finally, we show that the penalty for uninformed versus informed decoders is at most 3 dB or 0.21 bit in the Gaussian-quadratic case and that their performance also lies within the 1/2-bit gap to that of successive refinement.

9.
A binary symmetric source with binary side information is given. An encoder codes the source for data compression with no knowledge of the side information. It is then decoded, perhaps with and perhaps without the presence of side information. The rate-distortion function of this scheme is a function of two variables: D_1 is the distortion when side information is present at the decoder, and D_2 is the distortion when side information is absent at the decoder. The rate-distortion function is shown to reduce to previously solved problems in much of the (D_1, D_2)-plane. Tight upper and lower bounds are found for the rate-distortion function in the rest of the (D_1, D_2)-plane.

10.
A new multiterminal source coding problem called the CEO problem was presented and investigated by Berger, Zhang, and Viswanathan. Recently, Viswanathan and Berger have investigated an extension of the CEO problem to Gaussian sources and call it the quadratic Gaussian CEO problem. They considered this problem from a statistical viewpoint, deriving some interesting results. In this paper, we consider the quadratic Gaussian CEO problem from the standpoint of multiterminal rate-distortion theory. We regard the CEO problem as a certain multiterminal remote source coding problem with infinitely many separate encoders whose observations are conditionally independent given the remote source. This viewpoint leads us to a complete solution to the problem. We determine the tradeoff between the total amount of rate and squared distortion, deriving an explicit formula for the rate-distortion function. The derived function has the form of a sum of two nonnegative functions. One is the classical rate-distortion function for a single Gaussian source and the other is a new rate-distortion function which dominates the performance of the system for relatively small distortion. It follows immediately from our result that the conjecture of Viswanathan and Berger on the asymptotic behavior of minimum squared distortion for large rates is true.
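For reference, the classical component in that sum is the familiar quadratic rate-distortion function of a single Gaussian source; in the paper's decomposition, the total rate for the CEO problem is this term plus a second nonnegative term (whose explicit form is derived in the paper) that dominates at small distortion:

```latex
% Rate-distortion function of a Gaussian source N(0, \sigma^2)
% under squared-error distortion
R(D) = \frac{1}{2}\log\frac{\sigma^2}{D}, \qquad 0 < D \le \sigma^2,
\qquad\text{and}\qquad R(D) = 0 \ \text{ for } D > \sigma^2 .
```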

11.
Gaussian multiterminal source coding
In this paper, we consider the problem of separate coding for two correlated memoryless Gaussian sources. We determine the rate-distortion region in the special case when one source provides partial side information to the other source. We also show that the previously obtained inner region of the rate-distortion region is partially tight. A rigorous proof of the direct coding theorem is also given.

12.
Computation of channel capacity and rate-distortion functions
By defining mutual information as a maximum over an appropriate space, channel capacities can be defined as double maxima and rate-distortion functions as double minima. This approach yields valuable new insights regarding the computation of channel capacities and rate-distortion functions. In particular, it suggests a simple algorithm for computing channel capacity that consists of a mapping from the set of channel input probability vectors into itself such that the sequence of probability vectors generated by successive applications of the mapping converges to the vector that achieves the capacity of the given channel. Analogous algorithms then are provided for computing rate-distortion functions and constrained channel capacities. The algorithms apply both to discrete and to continuous alphabet channels or sources. In addition, a formalization of the theory of channel capacity in the presence of constraints is included. Among the examples is the calculation of close upper and lower bounds to the rate-distortion function of a binary symmetric Markov source.
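The iterative mapping described here is what is now known as the Blahut-Arimoto algorithm. A compact sketch for the channel-capacity case of a discrete memoryless channel (the BSC test matrix and tolerance are assumptions for illustration):

```python
import numpy as np

def blahut_arimoto(P, tol=1e-10, max_iter=10_000):
    """Capacity (in bits) of a DMC with transition matrix
    P[i, j] = Pr(output j | input i), via the Blahut-Arimoto iteration."""
    m, _ = P.shape
    p = np.full(m, 1.0 / m)                 # start from the uniform input pmf
    lo = 0.0
    for _ in range(max_iter):
        q = p @ P                           # induced output distribution
        # d[i] = D(P(.|i) || q), the conditional divergence for input i
        with np.errstate(divide="ignore", invalid="ignore"):
            t = P * np.log(P / q)
        d = np.where(P > 0, t, 0.0).sum(axis=1)
        c = np.exp(d)
        s = p @ c
        lo, hi = np.log(s), np.log(c.max()) # nats: lo <= C <= hi
        p = p * c / s                       # the self-map on input pmfs
        if hi - lo < tol:
            break
    return lo / np.log(2)

C = blahut_arimoto(np.array([[0.9, 0.1], [0.1, 0.9]]))  # BSC(0.1)
print(C)   # about 0.531 bits = 1 - H2(0.1)
```

The bracketing bounds lo and hi tighten at every step, so the loop doubles as a convergence certificate.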

13.
Information rates for nonhomogeneous Poisson counting processes are derived. A source coding theorem is proved and rate-distortion functions are obtained for the reconstruction of the sample functions of the latter sources. Distortion measures which depend upon the magnitude of the error as well as on a temporal weighting function proportional to the instantaneous intensity of the source are assumed. The analysis utilizes recent results concerning information rates for homogeneous Poisson processes and appropriate time-scale transformations, which map the source sample functions into an isometric induced source.

14.
We investigate the problem of guessing a random vector X within distortion level D. Our aim is to characterize the best attainable performance in the sense of minimizing, in some probabilistic sense, the number of required guesses G(X) until the error falls below D. The underlying motivation is that G(X) is the number of candidate codewords to be examined by a rate-distortion block encoder until a satisfactory codeword is found. In particular, for memoryless sources, we provide a single-letter characterization of the least achievable exponential growth rate of the ρth moment of G(X) as the dimension of the random vector X grows without bound. In this context, we propose an asymptotically optimal guessing scheme that is universal both with respect to the information source and the value of ρ. We then study some properties of the exponent function E(D, ρ) along with its relation to the source-coding exponents. Finally, we provide extensions of our main results to the Gaussian case, guessing with side information, and sources with memory.
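In the distortionless special case (D = 0) the optimal guessing strategy is simply to guess values in decreasing order of probability, which minimizes every moment of G(X) at once (a consequence of the rearrangement inequality). A tiny sketch on a toy distribution (the distribution itself is an assumption for illustration):

```python
import itertools

probs = {'a': 0.5, 'b': 0.3, 'c': 0.2}   # toy source pmf (assumption)

def guess_moment(order, rho):
    """E[G^rho] when guessing symbols in the given order:
    the k-th guess (1-indexed) contributes k**rho * Pr(symbol)."""
    return sum((k + 1) ** rho * probs[s] for k, s in enumerate(order))

# Guess in decreasing order of probability.
best = sorted(probs, key=probs.get, reverse=True)

# This single ordering minimizes the rho-th moment for every rho > 0.
for rho in (0.5, 1, 2):
    m_best = guess_moment(best, rho)
    assert all(guess_moment(o, rho) >= m_best - 1e-12
               for o in itertools.permutations(probs))

print(best, guess_moment(best, 1))   # expected number of guesses
```

The paper's contribution is the large-dimension exponent of these moments, not the single-letter ordering itself, but the ordering is the building block.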

15.
We show that for every n > 2, the standard nth-order approximation R_n(D) to the rate-distortion function of the binary symmetric Markov source (BSMS) is not successively refinable under the Hamming distortion measure in an open interval of the form (D_n, D_max).

16.
We consider both channel coding and source coding, with perfect past feedback/feedforward, in the presence of side information. It is first observed that feedback does not increase the capacity of the Gel'fand-Pinsker channel, nor does feedforward improve the achievable rate-distortion performance in the Wyner-Ziv problem. We then focus on the Gaussian case, showing that, as in the absence of side information, feedback/feedforward allows the respective performance limits to be attained efficiently. In particular, we derive schemes via variations on that of Schalkwijk and Kailath. These variants, which are as simple as their origin and require no binning, are shown to achieve, respectively, the capacity of Costa's channel and the Wyner-Ziv rate-distortion function. Finally, we consider the finite-alphabet setting and derive schemes for both the channel and the source coding problems that attain the fundamental limits, using variations on schemes of Ahlswede and of Ooi and Wornell, and of Martinian and Wornell, respectively.
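A bare-bones sketch of the underlying Schalkwijk-Kailath idea for the plain AWGN channel with noiseless feedback and no side information (the power P, noise variance N, number of channel uses, and Gaussian message prior are all assumptions for illustration): the transmitter repeatedly sends the scaled current estimation error, and the receiver's MMSE update shrinks the error variance by the factor N/(P+N) per channel use.

```python
import numpy as np

rng = np.random.default_rng(1)
P, N, n_uses, trials = 1.0, 1.0, 10, 20_000

theta = rng.normal(size=trials)     # message points, unit variance
est = np.zeros(trials)              # receiver's running estimate (fed back)
var = 1.0                           # current error variance, known to both ends
for _ in range(n_uses):
    err = est - theta                           # known to Tx via feedback
    x = np.sqrt(P / var) * err                  # scale error to power P
    y = x + rng.normal(scale=np.sqrt(N), size=trials)
    # MMSE estimate of err from y, then correct the running estimate;
    # the MMSE coefficient is Cov(err, y)/Var(y) = sqrt(P*var)/(P+N).
    est = est - (np.sqrt(P * var) / (P + N)) * y
    var = var * N / (P + N)                     # variance after the update

mse = np.mean((est - theta) ** 2)
print(mse, var)   # empirical MSE tracks the analytic (N/(P+N))**n_uses
```

The doubly exponential decay of error probability (and, in the source coding dual, the attainment of the Wyner-Ziv function) comes from this geometric shrinkage of the error variance.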

17.
A rate-distortion theory is introduced for the optimal encoding of stationary memoryless continuous-amplitude sources with a single-letter distortion measure and reproduction alphabets of a given finite size. The theory arises from a judicious approximation of the original continuous-input discrete-output problem by one with discrete input and output. A size-constrained output alphabet rate-distortion function is defined, its coding significance is established by coding theorems, and a convergent algorithm is presented for its evaluation. The theory is applied to Gaussian sources with squared-error distortion measure. Using the algorithm for the calculation of the new rate-distortion function in this case establishes the existence of codes which attain almost any desired rate between the rate-distortion bound and the optimum entropy-coded quantizer. Furthermore, one can closely approach the rate-distortion limit with a surprisingly small number of output levels. The calculation furnishes optimal output levels, output level probabilities, and other parameters necessary for a trellis coding simulation. The search algorithm represents the first use for asymmetric sources and distortion measures of a variation of a single stack algorithm proposed by Gallager. Carrying out the simulation at a rate of 1 bit per source symbol, codes are found with 4 and 64 output levels which attain distortions smaller than that of an optimum quantizer and close to the rate-distortion bound. Furthermore, these codes attain comparable or better performance with far less search effort than previous attempts with a continuous output alphabet.
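A sketch of a Blahut-style evaluation of a rate-distortion function over a fixed, finite reproduction alphabet (the discrete test source, distortion matrix, slope parameter s, and iteration count are assumptions for illustration, not the paper's exact algorithm): for each slope s <= 0 the iteration yields one point (D(s), R(s)) on the alphabet-constrained rate-distortion curve.

```python
import numpy as np

def blahut_rd(p, d, s, n_iter=500):
    """One point on the rate-distortion curve for source pmf p,
    distortion matrix d[x, y] over a fixed reproduction alphabet,
    and slope parameter s <= 0. Returns (D(s), R(s)), R in bits."""
    A = np.exp(s * d)
    q = np.full(d.shape[1], 1.0 / d.shape[1])   # output-letter probabilities
    for _ in range(n_iter):
        denom = A @ q                           # per-source-letter normalizer
        q = q * ((p / denom) @ A)               # Blahut update on q
    Q = q * A / (A @ q)[:, None]                # test channel Q(y|x)
    D = p @ (Q * d).sum(axis=1)
    R = p @ (Q * np.log(Q / q)).sum(axis=1) / np.log(2)
    return D, R

# Sanity check on a case with a known closed form: uniform binary source,
# Hamming distortion, where R(D) = 1 - H2(D).
p = np.array([0.5, 0.5])
d = np.array([[0.0, 1.0], [1.0, 0.0]])
D, R = blahut_rd(p, d, s=-2.0)
print(D, R)
```

Sweeping s traces out the whole curve; with a fixed set of reproduction levels for a continuous source (discretized on a fine grid), the same iteration evaluates the size-constrained function the abstract defines.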

18.
Two results on the coding of stationary nonergodic sources are presented. The first is a source coding theorem stating that there exist variable-rate codes with performance arbitrarily close to the rate-distortion function of the stationary nonergodic source. The second is a converse information transmission theorem. It is shown that the distortion which results when the source is transmitted across a channel with capacity C is no less than the least distortion achievable by fixed-rate codes with rate C.

19.
Lossy source coding
Lossy coding of speech, high-quality audio, still images, and video is commonplace today. However, in 1948, few lossy compression systems were in service. Shannon introduced and developed the theory of source coding with a fidelity criterion, also called rate-distortion theory. For the first 25 years of its existence, rate-distortion theory had relatively little impact on the methods and systems actually used to compress real sources. Today, however, rate-distortion theoretic concepts are an important component of many lossy compression techniques and standards. We chronicle the development of rate-distortion theory and provide an overview of its influence on the practice of lossy source coding.

20.
A random number generator generates fair coin flips by deterministically processing an arbitrary source of nonideal randomness. An optimal random number generator generates asymptotically fair coin flips from a stationary ergodic source at a rate, in bits per source symbol, equal to the entropy rate of the source. Since optimal noiseless data compression codes produce incompressible outputs, it is natural to investigate their capabilities as optimal random number generators. We show under general conditions that optimal variable-length source codes asymptotically achieve optimal variable-length random bit generation in a rather strong sense. In particular, we show in what sense the Lempel-Ziv (1978) algorithm can be considered an optimal universal random bit generator from arbitrary stationary ergodic random sources with unknown distributions.
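The article concerns full-blown compression codes, but the flavor of extracting fair bits from biased randomness is easy to see in the much simpler classical von Neumann trick (a different, rate-suboptimal construction, not the Lempel-Ziv scheme discussed above): scan non-overlapping pairs of biased coin flips, map 01 to 0 and 10 to 1, and discard 00 and 11.

```python
def von_neumann(bits):
    """Extract exactly fair bits from i.i.d. biased coin flips.
    For any bias, Pr(01) = Pr(10), so each emitted bit is unbiased;
    the cost is a reduced output rate, unlike an entropy-rate-achieving
    extractor of the kind the abstract studies."""
    out = []
    for a, b in zip(bits[::2], bits[1::2]):   # non-overlapping pairs
        if a != b:
            out.append(a)                     # 01 -> 0, 10 -> 1
    return out

print(von_neumann([0, 1, 1, 0, 0, 0, 1, 1, 1, 0]))   # -> [0, 1, 1]
```

The gap between this construction's output rate and the entropy rate is exactly what the optimal generators in the article close.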


Copyright © Beijing Qinyun Technology Development Co., Ltd.  京ICP备09084417号