Similar literature (20 results)
1.
We study the effect of the introduction of side information into the causal source coding setting of Neuhoff and Gilbert. We find that the spirit of their result, namely, the sufficiency of time-sharing scalar quantizers (followed by appropriate lossless coding) for attaining optimum performance within the family of causal source codes, extends to many scenarios involving availability of side information (at both encoder and decoder, or only on one side). For example, in the case where side information is available at both encoder and decoder, we find that time-sharing side-information-dependent scalar quantizers (at most two for each side-information symbol) attains optimum performance. This remains true even when the reproduction sequence is allowed noncausal dependence on the side information and even for the case where the source and the side information, rather than consisting of independent and identically distributed (i.i.d.) pairs, form, respectively, the output of a memoryless channel and its stationary ergodic input.

2.
It is well-known that variable-length coding schemes can be employed in entropy encoding of finite-alphabet sources. To transmit these codes over a synchronous channel, however, requires a buffer. Since in practice this buffer is of finite size, it is subject to both overflow and underflow. The buffer behavior is studied with particular application to Huffman coding of the outputs of an optimum uniform-threshold quantizer driven by a memoryless Gaussian source. Fairly general upper and lower bounds on the average terminal time are developed. Under certain conditions, the tightness of these bounds is verified, and asymptotic formulas are developed. As an example, an encoding scheme employing Huffman codes in conjunction with uniform quantization of memoryless Gaussian sources is considered, and the buffer behavior as a function of the buffer size and output rate is studied.
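As a rough illustration of the buffer behavior discussed above, the following Python sketch simulates the occupancy of a finite buffer fed by a toy Huffman code and drained at a fixed channel rate; the code table, source probabilities, channel rate, and buffer size are illustrative assumptions, not values from the paper.

```python
import random

# Minimal sketch: occupancy of a finite buffer that receives variable-length
# Huffman codewords and drains at a fixed synchronous channel rate.
code_len = {0: 1, 1: 2, 2: 3, 3: 3}          # assumed Huffman code lengths
prob     = [0.5, 0.25, 0.125, 0.125]          # matching source probabilities
channel_bits_per_symbol = 2                   # synchronous channel rate
buffer_size = 64                              # finite buffer, in bits

occupancy, overflows, underflows = 0, 0, 0
random.seed(0)
for _ in range(100_000):
    sym = random.choices(range(4), weights=prob)[0]
    occupancy += code_len[sym]                # codeword enters the buffer
    if occupancy > buffer_size:               # overflow: bits would be lost
        overflows += 1
        occupancy = buffer_size
    occupancy -= channel_bits_per_symbol      # channel drains a fixed amount
    if occupancy < 0:                         # underflow: channel is starved
        underflows += 1
        occupancy = 0

print(f"overflow events: {overflows}, underflow events: {underflows}")
```

With these assumed parameters the average code length (1.75 bits) is below the channel rate (2 bits), so underflow events dominate; pushing the source entropy toward the channel rate shifts the balance toward overflow.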

3.
Causal source codes are defined. These include quantizers, delta modulators, differential pulse code modulators, and adaptive versions of these. Several types of causal codes are identified. For memoryless sources it is shown that the optimum performance attainable by causal codes can be achieved by memoryless codes or by time-sharing memoryless codes. This optimal performance can be evaluated straightforwardly.
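To make the time-sharing idea concrete, here is a small Python sketch (an illustration, not the paper's construction) that time-shares a 1-bit and a 2-bit memoryless scalar quantizer on an i.i.d. Gaussian source; the reproduction levels are the usual approximate Lloyd-Max values and are assumed, as is the mixing fraction.

```python
import numpy as np

# Sketch: time-sharing two memoryless scalar quantizers on an i.i.d. Gaussian
# source.  A fraction lam of the samples is coded with the 1-bit quantizer and
# the rest with the 2-bit quantizer, tracing a line between their (rate, D) points.
rng = np.random.default_rng(0)
x = rng.standard_normal(200_000)

def quantize(samples, levels):
    """Map each sample to the nearest reproduction level."""
    levels = np.asarray(levels)
    return levels[np.abs(samples[:, None] - levels[None, :]).argmin(axis=1)]

# Approximate Lloyd-Max levels for a unit-variance Gaussian (assumed values).
qA = [-0.7979, 0.7979]                        # 1-bit quantizer
qB = [-1.510, -0.4528, 0.4528, 1.510]         # 2-bit quantizer

for lam in (0.0, 0.25, 0.5, 0.75, 1.0):
    n_a = int(lam * len(x))
    xa, xb = x[:n_a], x[n_a:]
    d = (np.sum((xa - quantize(xa, qA))**2) +
         np.sum((xb - quantize(xb, qB))**2)) / len(x)
    rate = lam * 1 + (1 - lam) * 2
    print(f"lam={lam:4.2f}  rate={rate:.2f} bits  distortion={d:.4f}")
```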

4.
A universal variable-to-fixed length algorithm for binary memoryless sources which converges to the entropy of the source at the optimal rate is known. We study the problem of universal variable-to-fixed length coding for the class of Markov sources with finite alphabets. We give an upper bound on the performance of the code for large dictionary sizes and show that the code is optimal in the sense that no codes exist that have better asymptotic performance. The optimal redundancy is shown to be H log log M/log M where H is the entropy rate of the source and M is the code size. This result is analogous to Rissanen's (1984) result for fixed-to-variable length codes. We investigate the performance of a variable-to-fixed coding method which does not need to store the dictionaries, either at the coder or the decoder. We also consider the performance of both these source codes on individual sequences. For individual sequences we bound the performance in terms of the best code length achievable by a class of coders. All the codes that we consider are prefix-free and complete.
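For background on the variable-to-fixed framework referred to above, the sketch below builds a Tunstall parse dictionary for a known memoryless binary source. This is the classical non-universal construction, shown only to illustrate how a dictionary of M words maps variable-length source phrases to fixed-length indices; p1 and M are assumed values, and the paper's universal Markov-source algorithm is not reproduced here.

```python
import heapq

# Classical Tunstall variable-to-fixed dictionary for a known memoryless binary
# source (illustrative only).  Each of the M dictionary words is later indexed
# by a fixed-length codeword of ceil(log2 M) bits.
def tunstall(p1, M):
    probs = {'0': 1.0 - p1, '1': p1}
    heap = [(-p, w) for w, p in probs.items()]        # max-heap on word probability
    heapq.heapify(heap)
    while len(heap) < M:                               # each split adds one word
        neg_p, w = heapq.heappop(heap)                 # expand the most probable word
        for sym, p_sym in (('0', 1.0 - p1), ('1', p1)):
            heapq.heappush(heap, (neg_p * p_sym, w + sym))
    return sorted(w for _, w in heap)

print(tunstall(p1=0.2, M=8))
```

For p1 = 0.2 and M = 8 the dictionary is {0000000, 0000001, 000001, 00001, 0001, 001, 01, 1}: long runs of the more probable symbol are parsed into a single 3-bit index.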

5.
A source of random message bits is to be embedded into a covertext modeled as a discrete memoryless source (DMS), resulting in a stegotext from which the embedded bits should be recoverable. A causal code for such a scenario consists of an encoder that generates the stegotext as a causal function of the message bits and the covertext, and a decoder that reproduces the message bits as a causal function of the stegotext. A semicausal code, on the other hand, has an encoder that is causal only with respect to the covertext, and not necessarily with respect to the message, and has a possibly noncausal decoder. We analyze the possible tradeoffs among: a) the distortion between the stegotext and the covertext, b) the compressibility of the stegotext, and c) the rate at which random bits are embedded, that are achievable with causal and semicausal codes, with and without attacks on the stegotext. We also study causal and semicausal codes for the private version of the above scenario in which the decoder has access to the covertext. Connections are made with the causal rate-distortion function of Neuhoff and Gilbert, as well as the problem of channel coding with causal side information at the transmitter analyzed by Shannon.

6.
Fundamental limits on the source coding exponents (or large deviations performance) of zero-delay finite-memory (ZDFM) lossy source codes are studied. Our main results are the following. For any memoryless source, a suitably designed encoder that time-shares (at most two) memoryless scalar quantizers is as good as any time-varying fixed-rate ZDFM code, in that it can achieve the fastest exponential rate of decay for the probability of excess distortion. A dual result is shown to apply to the probability of excess code length, among all fixed-distortion ZDFM codes with variable rate. Finally, it is shown that if the scope is broadened to ZDFM codes with variable rate and variable distortion, then a time-invariant entropy-coded memoryless quantizer (without time sharing) is asymptotically optimal under a "fixed-slope" large-deviations criterion (introduced and motivated here in detail) corresponding to a linear combination of the code length and the distortion. These results also lead to single-letter characterizations for the source coding error exponents of ZDFM codes.

7.
Bucklew's (1984) high-rate vector quantizer mismatch result is extended from fixed-rate coding to variable-rate coding using a Lagrangian formulation. It is shown that if an asymptotically (high-rate) optimal sequence of variable-rate codes is designed for a k-dimensional probability density function (PDF) g and then applied to another PDF f for which f/g is bounded, then the resulting mismatch or loss of performance from the optimal possible is given by the relative entropy or Kullback-Leibler (1968) divergence I(f||g). It is also shown that under the same assumptions, an asymptotically optimal code sequence for g can be converted to an asymptotically optimal code sequence for a mismatched source f by modifying only the lossless component of the code. Applications to quantizer design using uniform and Gaussian densities are described, including a high-rate analog to the Shannon rate-distortion result of Sakrison (1975) and Lapidoth (1997) showing that the Gaussian is the "worst case" for lossy compression of a source with known covariance. By coupling the mismatch result with composite quantizers, the worst case properties of uniform and Gaussian densities are extended to conditionally uniform and Gaussian densities, which provides a Lloyd clustering algorithm for fitting mixtures to general densities.
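As a numerical illustration of the mismatch quantity I(f||g), the following sketch estimates the relative entropy by Monte Carlo for a variance-matched pair of densities (a Laplacian source f coded with a scheme designed for a Gaussian g); the densities are assumed for the example and are not taken from the paper.

```python
import numpy as np

# Sketch: estimate I(f||g) = E_f[log2 f(X)/g(X)], which the result above
# identifies as the high-rate rate loss when a variable-rate code designed
# for g is applied to the mismatched source f.
rng = np.random.default_rng(0)

def log_laplace(x, b=1.0):                   # f: unit-scale Laplacian (variance 2)
    return -np.log(2 * b) - np.abs(x) / b

def log_gauss(x, s=np.sqrt(2.0)):            # g: Gaussian with the same variance
    return -0.5 * np.log(2 * np.pi * s**2) - x**2 / (2 * s**2)

x = rng.laplace(scale=1.0, size=1_000_000)   # samples from f
kl_bits = np.mean(log_laplace(x) - log_gauss(x)) / np.log(2)
print(f"estimated I(f||g) ≈ {kl_bits:.3f} bits/sample of extra rate")
```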

8.
Permutation codes are vector quantizers whose codewords are related by permutations and, in one variant, sign changes. Asymptotically, as the vector dimension grows, optimal Variant I permutation code design is identical to optimal entropy-constrained scalar quantizer (ECSQ) design. However, contradicting intuition and previously published assertions, there are finite block length permutation codes that perform better than the best ones with asymptotically large length; thus, there are Variant I permutation codes whose performances cannot be matched by any ECSQ. Along similar lines, a new asymptotic relation between Variant I and Variant II permutation codes is established but again demonstrated to not necessarily predict the performances of short codes. Simple expressions for permutation code performance are found for memoryless uniform and Laplacian sources. The uniform source yields the aforementioned counterexamples.
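The encoding rule for a Variant I permutation code has a simple form: the nearest codeword is the initial vector rearranged to follow the rank order of the source vector. Below is a minimal Python sketch of that rule; the initial vector is a placeholder design, not an optimized one from the paper.

```python
import numpy as np

# Sketch of Variant I permutation-code encoding: the codebook is all distinct
# permutations of `init`, and the nearest codeword places the largest entries
# of `init` at the positions of the largest source components.
rng = np.random.default_rng(0)
n = 8
init = np.sort(rng.standard_normal(n))[::-1]   # assumed initial vector, descending

def encode(x, init):
    order = np.argsort(-x)                     # positions of x, largest first
    codeword = np.empty_like(init)
    codeword[order] = init                     # largest init value -> largest x
    return codeword

x = rng.standard_normal(n)
c = encode(x, init)
print("source   :", np.round(x, 3))
print("codeword :", np.round(c, 3))
print("sq. error:", round(float(np.sum((x - c)**2)), 4))
```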

9.
In this paper, the effects of quantization noise feedback on the entropy of Laplacian pyramids are investigated. This technique makes it possible for the maximum absolute reconstruction error to be easily and strongly upper-bounded (near-lossless coding), and therefore, allows reversible compression. The entropy-minimizing optimum quantizer is obtained by modeling the first-order distributions of the differential signals as Laplacian densities, and by deriving a model for the equivalent memoryless entropy. A novel approach, based on an enhanced Laplacian pyramid, is proposed for the compression, either lossless or lossy, of gray-scale images. Major details are prioritized through a content-driven decision rule embedded in a uniform threshold quantizer with noise feedback. Lossless coding shows improvements over reversible Joint Photographic Experts Group (JPEG) and the reduced-difference pyramid schemes, while lossy coding outperforms JPEG, with a significant peak signal-to-noise ratio (PSNR) gain. Also, subjective quality is higher even at very low bit rates, due to the absence of the annoying impairments typical of JPEG. Moreover, image versions having resolution and SNR that are both progressively increasing are made available at the receiving end from the earliest retrieval stage on, as intermediate steps of the decoding procedure, without any additional cost.
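A minimal sketch of the noise-feedback mechanism behind the near-lossless bound, assuming a plain previous-sample predictor rather than the paper's Laplacian-pyramid structure: because each prediction is formed from already-reconstructed samples, the per-sample reconstruction error never exceeds half the quantization step.

```python
import numpy as np

# Sketch: quantization noise feedback in a differential coder.  The predictor
# uses the reconstructed (not original) past, so the reconstruction error of
# every sample is bounded by step/2 (near-lossless coding).
rng = np.random.default_rng(0)
x = np.cumsum(rng.standard_normal(1000))       # toy correlated signal
step = 0.5                                     # uniform threshold quantizer step

recon = np.zeros_like(x)
prev = 0.0
for i, xi in enumerate(x):
    diff = xi - prev                           # differential signal
    q = step * np.round(diff / step)           # quantized difference
    recon[i] = prev + q                        # decoder's reconstruction
    prev = recon[i]                            # feedback: predict from reconstruction

print("max |error| =", np.max(np.abs(x - recon)), "<= step/2 =", step / 2)
```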

10.
Variable-length codes can be used in entropy coding the outputs of an optimum entropy-constrained quantizer. Transmitting these codes over a synchronous channel, however, requires a buffer connecting the entropy coder to the channel. In a practical application, this buffer is of finite size and hence might overflow or underflow. To alleviate this difficulty, we use an adaptive scheme in which the quantizer parameters are changed successively according to the state of the buffer. Rate-distortion performance of optimum entropy-constrained quantizers in conjunction with this adaptive scheme is studied for the class of generalized Gaussian sources. It is demonstrated through simulations that the overflow/underflow problem can be practically eliminated at the cost of a negligible increase in average distortion. Furthermore, it is shown that the efficiency of this system is more pronounced at high rates and for more broad-tailed source densities. Easily computable upper and lower bounds on the average distortion of the adaptive system are developed.

11.
A pyramid vector quantizer
The geometric properties of a memoryless Laplacian source are presented and used to establish a source coding theorem. Motivated by this geometric structure, a pyramid vector quantizer (PVQ) is developed for arbitrary vector dimension. The PVQ is based on the cubic lattice points that lie on the surface of an L-dimensional pyramid and has simple encoding and decoding algorithms. A product code version of the PVQ is developed and generalized to apply to a variety of sources. Analytical expressions are derived for the PVQ mean square error (mse), and simulation results are presented for PVQ encoding of several memoryless sources. For large rate and dimension, PVQ encoding of memoryless Laplacian, gamma, and Gaussian sources provides mse improvements of 5.64, 8.40, and 2.39 dB, respectively, over the corresponding optimum scalar quantizer. Although suboptimum in a rate-distortion sense, because the PVQ can encode large-dimensional vectors, it offers significant reduction in mse distortion compared with the optimum Lloyd-Max scalar quantizer, and provides an attractive alternative to currently available vector quantizers.
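As an illustration of the pyramid structure (the quantization step only; the paper's enumeration-based encoding and decoding algorithms and the product-code extension are not reproduced here), the sketch below maps a vector to a nearby point of the integer pyramid {y : sum |y_i| = K} by scaling, rounding, and greedily repairing the L1 constraint; K and the input are assumed.

```python
import numpy as np

# Sketch: project a vector onto the surface of the L-dimensional pyramid
# S(L, K) = {y integer : sum |y_i| = K} by scale-round-repair.
def pvq_quantize(x, K):
    x = np.asarray(x, dtype=float)
    target = K * x / np.sum(np.abs(x))                 # point with L1 norm exactly K
    y = np.rint(target).astype(int)                    # round to the integer lattice
    signs = np.where(x >= 0, 1, -1)
    while np.sum(np.abs(y)) != K:                      # repair the pyramid constraint
        err = target - y
        if np.sum(np.abs(y)) < K:
            i = np.argmax(err * signs)                 # add a unit where it helps most
            y[i] += signs[i]
        else:
            i = np.argmin(err * signs)                 # remove a unit where it hurts least
            y[i] -= signs[i]
    return y

x = np.random.default_rng(0).laplace(size=6)
y = pvq_quantize(x, K=8)
print("input        :", np.round(x, 3))
print("pyramid point:", y, " L1 norm:", int(np.sum(np.abs(y))))
```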

12.
Petry's efficient and optimal variable-to-fixed-length source code for discrete memoryless sources was described by Schalkwijk. By extending this coding technique we are able to give an algorithm for Markov sources that is easy to implement. We can bound the loss of efficiency as a function of the code complexity and the mismatch between the source and the code. Rates arbitrarily close to the source entropy are shown to be achievable. In this sense the codes introduced are optimal.

13.
A rate-distortion theory is introduced for the optimal encoding of stationary memoryless continuous-amplitude sources with a single-letter distortion measure and reproduction alphabets of a given finite size. The theory arises from a judicious approximation of the original continuous-input discrete-output problem by one with discrete input and output. A size-constrained output alphabet rate-distortion function is defined, its coding significance is established by coding theorems, and a convergent algorithm is presented for its evaluation. The theory is applied to Gaussian sources with squared-error distortion measure. Using the algorithm for the calculation of the new rate-distortion function in this case establishes the existence of codes which attain almost any desired rate between the rate-distortion bound and the optimum entropy-coded quantizer. Furthermore, one can closely approach the rate-distortion limit with a surprisingly small number of output levels. The calculation furnishes optimal output levels, output level probabilities, and other parameters necessary for a trellis coding simulation. The search algorithm represents the first use for asymmetric sources and distortion measures of a variation of a single stack algorithm proposed by Gallager. Carrying out the simulation at a rate of 1 bit per source symbol, codes are found with 4 and 64 output levels which attain distortions smaller than that of an optimum quantizer and close to the rate-distortion bound. Furthermore, these codes attain comparable or better performance with far less search effort than previous attempts with a continuous output alphabet.
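A compact Blahut-type iteration of the kind referred to above can be sketched as follows; the discretization of the Gaussian source, the 4-level output alphabet, and the Lagrange slope are assumptions for the example, not the paper's parameters.

```python
import numpy as np

# Sketch of a Blahut-Arimoto iteration for a size-constrained output alphabet:
# discretized Gaussian source, squared-error distortion, 4 fixed output levels.
xs = np.linspace(-4, 4, 201)                        # discretized source alphabet
px = np.exp(-xs**2 / 2); px /= px.sum()             # approximate N(0,1) pmf
ys = np.array([-1.5, -0.5, 0.5, 1.5])               # assumed 4-level output alphabet
d = (xs[:, None] - ys[None, :])**2                  # squared-error distortion matrix

slope = 4.0                                         # assumed Lagrange multiplier s > 0
q = np.full(len(ys), 1 / len(ys))                   # initial output distribution
for _ in range(500):
    w = q[None, :] * np.exp(-slope * d)             # unnormalized transition probabilities
    p_y_given_x = w / w.sum(axis=1, keepdims=True)
    q = px @ p_y_given_x                            # update output marginal

D = float(np.sum(px[:, None] * p_y_given_x * d))
R = float(np.sum(px[:, None] * p_y_given_x *
                 np.log2(p_y_given_x / q[None, :])))
print(f"rate ≈ {R:.3f} bits/symbol at distortion ≈ {D:.3f}")
```

Sweeping the slope parameter traces out the size-constrained rate-distortion curve point by point.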

14.
Universal noiseless coding
Universal coding is any asymptotically optimum method of block-to-block memoryless source coding for sources with unknown parameters. This paper considers noiseless coding for such sources, primarily in terms of variable-length coding, with performance measured as a function of the coding redundancy relative to the per-letter conditional source entropy given the unknown parameter. It is found that universal (i.e., zero redundancy) coding in a weighted sense is possible if and only if the per-letter average mutual information between the parameter space and the message space is zero. Universal coding is possible in a maximin sense if and only if the channel capacity between the two spaces is zero. Universal coding is possible in a minimax sense if and only if a probability mass function exists, independent of the unknown parameter, for which the relative entropy of the known conditional probability mass function is zero. Several examples are given to illustrate the ideas. Particular attention is given to sources that are stationary and ergodic for any fixed parameter although the whole ensemble is not. For such sources, weighted universal codes always exist if the alphabet is finite, or more generally if the entropy is finite. Minimax universal codes result if an additional entropy stability constraint is applied. A discussion of fixed-rate universal coding is also given briefly with performance measured by a probability of error.

15.
We introduce three soft-decision demodulation channel-optimized vector quantizers (COVQs) to transmit analog sources over space–time orthogonal block (STOB)-coded flat Rayleigh fading channels with binary phase-shift keying (BPSK) modulation. One main objective is to judiciously utilize the soft information of the STOB-coded channel in the design of the vector quantizers while keeping a low system complexity. To meet this objective, we introduce a simple space–time decoding structure that consists of a space–time soft detector, followed by a linear combiner and a scalar uniform quantizer with resolution q. The concatenation of the space–time encoder/modulator, fading channel, and space–time receiver can be described by a binary-input, 2^q-output discrete memoryless channel (DMC). The scalar uniform quantizer is chosen so that the capacity of the equivalent DMC is maximized to fully exploit and capture the system's soft information by the DMC. We next determine the statistics of the DMC in closed form and use them to design three COVQ schemes with various degrees of knowledge of the channel noise power and fading coefficients at the transmitter and/or receiver. The performance of each quantization scheme is evaluated for memoryless Gaussian and Gauss–Markov sources and various STOB codes, and the benefits of each scheme are illustrated as a function of the antenna diversity and soft-decision resolution q. Comparisons to traditional coding schemes, which perform separate source and channel coding operations, are also provided.

16.
Variable-length-to-block codes are a generalization of run-length codes. A coding theorem is first proved. When the codes are used to transmit information from fixed-rate sources through fixed-rate noiseless channels, buffer overflow results. The latter phenomenon is an important consideration in the retrieval of compressed data from storage. The probability of buffer overflow decreases exponentially with buffer length and we determine the relation between rate and exponent size for memoryless sources. We obtain codes that maximize the overflow exponent for any given transmission rate exceeding the source entropy and present asymptotically optimal coding algorithms whose complexity grows linearly with codeword length. It turns out that the optimum error exponents of variable-length-to-block coding are identical with those of block-to-variable-length coding and are related in an interesting way to Renyi's generalized entropy function.

17.
We examine the performance of the Karhunen-Loeve transform (KLT) for transform coding applications. The KLT has long been viewed as the best available block transform for a system that orthogonally transforms a vector source, scalar quantizes the components of the transformed vector using optimal bit allocation, and then inverse transforms the vector. This paper treats fixed-rate and variable-rate transform codes of non-Gaussian sources. The fixed-rate approach uses an optimal fixed-rate scalar quantizer to describe the transform coefficients; the variable-rate approach uses a uniform scalar quantizer followed by an optimal entropy code, and each quantized component is encoded separately. Earlier work shows that for the variable-rate case there exist sources on which the KLT is not unique and the optimal quantization and coding stage matched to a "worst" KLT yields performance as much as 1.5 dB worse than the optimal quantization and coding stage matched to a "best" KLT. In this paper, we strengthen that result to show that in both the fixed-rate and the variable-rate coding frameworks there exist sources for which the performance penalty for using a "worst" KLT can be made arbitrarily large. Further, we demonstrate in both frameworks that there exist sources for which even a best KLT gives suboptimal performance. Finally, we show that even for vector sources where the KLT yields independent coefficients, the KLT can be suboptimal for fixed-rate coding.
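For reference, the KLT stage itself is just the eigenbasis of the source covariance. The sketch below computes it for an assumed Gauss-Markov (AR(1)) source and verifies that the transform coefficients are decorrelated; the quantization and bit-allocation stages analyzed in the paper are omitted.

```python
import numpy as np

# Sketch: the KLT of an assumed AR(1) source is the orthogonal eigenbasis of
# its covariance matrix, which decorrelates the transform coefficients.
rng = np.random.default_rng(0)
n, rho = 8, 0.9
cov = rho ** np.abs(np.subtract.outer(np.arange(n), np.arange(n)))  # AR(1) covariance

eigvals, eigvecs = np.linalg.eigh(cov)           # KLT basis = eigenvectors of cov
klt = eigvecs.T

x = rng.multivariate_normal(np.zeros(n), cov, size=50_000)
y = x @ klt.T                                    # transform coefficients
print("coefficient variances      :", np.round(np.var(y, axis=0), 3))
print("max off-diagonal correlation:",
      round(float(np.max(np.abs(np.corrcoef(y.T) - np.eye(n)))), 3))
```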

18.
We consider Zador's (1963, 1966, 1982) asymptotic formula for the distortion-rate function for a variable-rate vector quantizer in the high-rate case. This formula involves the differential entropy of the source, the rate of the quantizer in bits per sample, and a coefficient G which depends on the geometry of the quantizer but is independent of the source. We give an explicit formula for G in the case when the quantizing regions form a periodic tiling of n-dimensional space, in terms of the volumes and second moments of the Voronoi cells. As an application we show, extending earlier work of Kashyap and Neuhoff (see ibid, vol.47, p.2538-2383, 2001), that even a variable-rate three-dimensional quantizer based on the "A15" structure is still inferior to a quantizer based on the body-centered cubic lattice. We also determine the smallest covering radius of such a structure.
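The dimensionless normalized second moment G that appears in Zador's formula can be estimated by Monte Carlo integration over a Voronoi cell. The sketch below does this for the cubic lattice Z^n, whose cell is the unit cube and whose G equals 1/12 in every dimension; the dimension and sample count are assumed.

```python
import numpy as np

# Monte Carlo sketch: dimensionless normalized second moment
# G = E||X||^2 / (n * V^(1 + 2/n)) of a Voronoi cell, here the unit cube of Z^n.
rng = np.random.default_rng(0)
n, samples = 3, 2_000_000
u = rng.uniform(-0.5, 0.5, size=(samples, n))        # uniform points in the cell
volume = 1.0                                         # cell volume of Z^n
second_moment = np.mean(np.sum(u**2, axis=1))        # E||X||^2 over the cell
G = second_moment / (n * volume ** (1 + 2 / n))
print(f"estimated G for Z^{n}: {G:.5f}  (exact value 1/12 ≈ {1/12:.5f})")
```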

19.
This paper investigates the optimum entropy versus distortion performance of quantizers optimized for uniform, Gaussian, Laplacian, and gamma-distributed memoryless sources, which are useful models of the quantizer input signals in speech or picture coding schemes. We list the maximally obtainable signal-to-quantization noise ratios for one-dimensional optimum (i.e., entropy-coded) quantizers in the important low bit-rate region. These results have been obtained by an iterative solution of a set of nonlinear equations. Additionally, we have also computed the corresponding rate-distortion functions by employing the Blahut algorithm. These latter results upper-bound the performances of multi-dimensional quantization schemes, and a comparison with the former results indicates the penalty to be paid for restricting a coder to perform one-dimensional quantization. It will be shown that the differences can be significant in the low bit-rate region.

20.
A method of using error-correcting codes to obtain data compression, called syndrome-source-coding, is described in which the source sequence is treated as an error pattern whose syndrome forms the compressed data. It is shown that syndrome-source-coding can achieve arbitrarily small distortion with the number of compressed digits per source digit arbitrarily close to the entropy of a binary memoryless source. A "universal" generalization of syndrome-source-coding is formulated which provides robustly effective distortionless coding of source ensembles. Two examples are given comparing the performance of noiseless universal syndrome-source-coding to 1) run-length coding and 2) Lynch-Davisson-Schalkwijk-Cover universal coding for an ensemble of binary memoryless sources.
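A toy instance of syndrome-source-coding, using the standard (7,4) Hamming parity-check matrix (the sparse-source setup is an assumption for the example): the 7-bit source block is treated as an error pattern, its 3-bit syndrome is transmitted, and the decoder returns the minimum-weight pattern with that syndrome, so any block containing at most one 1 is reproduced exactly.

```python
import numpy as np

# Sketch: syndrome-source-coding with the (7,4) Hamming parity-check matrix.
# The 7-bit source block x is compressed to its 3-bit syndrome s = H x (mod 2).
H = np.array([[1, 0, 1, 0, 1, 0, 1],
              [0, 1, 1, 0, 0, 1, 1],
              [0, 0, 0, 1, 1, 1, 1]])

def compress(x):
    return H @ x % 2                                 # 3-bit syndrome

def decompress(s):
    x_hat = np.zeros(7, dtype=int)
    for j in range(7):                               # find the column matching s
        if np.array_equal(H[:, j], s):
            x_hat[j] = 1                             # coset leader of weight 1
            return x_hat
    return x_hat                                     # zero syndrome -> all-zero block

x = np.array([0, 0, 0, 0, 1, 0, 0])                  # a sparse source block (weight 1)
s = compress(x)
print("source block :", x)
print("syndrome     :", s)
print("decoded block:", decompress(s))
```

Blocks of weight two or more are decoded incorrectly, which is the distortion the abstract refers to; for a sufficiently sparse memoryless source this event is rare, and longer codes drive the rate toward the source entropy.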
