Similar Documents
20 similar documents found (search time: 31 ms).
1.
We present new bounds on the rate loss of multiresolution source codes (MRSCs). For an M-resolution code, the rate loss at the i-th resolution with distortion D_i is defined as L_i = R_i − R(D_i), where R_i is the rate achievable by the MRSC at stage i and R(·) is the rate-distortion function. The rate loss measures the performance degradation of the MRSC relative to the best single-resolution code achieving the same distortion. For two-resolution source codes, three scenarios are of particular interest: (i) both resolutions are equally important; (ii) the rate loss at the first resolution is zero (L_1 = 0); (iii) the rate loss at the second resolution is zero (L_2 = 0). The work of Lastras and Berger (see ibid., vol. 47, p. 918-26, Mar. 2001) gives constant upper bounds on the rate loss of an arbitrary memoryless source in scenarios (i) and (ii), and an asymptotic bound for scenario (iii) as D_2 approaches 0. We focus on the squared error distortion measure and (a) prove that for scenario (iii), L_1 < 1.1610 for all D_2 …
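As a quick numerical illustration of the rate-loss definition L_i = R_i − R(D_i) (a minimal sketch, not taken from the paper): for a unit-variance memoryless Gaussian source under squared error, R(D) = 0.5·log2(σ²/D), so the rate loss of a hypothetical stage rate R_i is a one-line computation.

```python
import math

def gaussian_rd(variance: float, distortion: float) -> float:
    """R(D) = 0.5 * log2(variance / D) for a memoryless Gaussian source
    under squared error, clipped at zero for D >= variance."""
    return max(0.0, 0.5 * math.log2(variance / distortion))

# Hypothetical operating point: the MRSC spends R_i = 2.0 bits/sample
# at stage i and reaches distortion D_i = 0.1 on a unit-variance source.
R_i, D_i = 2.0, 0.1
L_i = R_i - gaussian_rd(1.0, D_i)  # rate loss L_i = R_i - R(D_i)
print(f"R(D_i) = {gaussian_rd(1.0, D_i):.4f} bits, L_i = {L_i:.4f} bits")
```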

2.
We explore joint source-channel coding (JSCC) for time-varying channels using a multiresolution framework for both source coding and transmission via novel multiresolution modulation constellations. We consider the problem of still image transmission over time-varying channels with the channel state information (CSI) available (1) at the receiver only and (2) at both the transmitter and the receiver, and we quantify the effect of CSI availability on performance. Our source model is based on the wavelet image decomposition, which generates a collection of subbands modeled by the family of generalized Gaussian distributions. We describe an algorithm that jointly optimizes the design of the multiresolution source codebook, the multiresolution constellation, and the decoding strategy of optimally matching the source-resolution and signal-constellation-resolution "trees" in accordance with the time-varying channel, and we show how this leads to improved performance over existing methods. Real-time operation requires only table lookups. Our results based on a wavelet image representation show that our multiresolution-based optimized system attains gains on the order of 2 dB in reconstructed image quality over single-resolution systems using channel-optimized source coding.
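To make the generalized Gaussian subband model concrete, here is a minimal sketch (my own illustration, not the paper's code) of fitting the shape parameter to a subband by moment matching; `fit_shape`, the bisection bounds, and the stand-in data are all hypothetical choices.

```python
import math
import random

def gg_ratio(beta: float) -> float:
    """E|X| / sqrt(E[X^2]) for a generalized Gaussian with shape beta;
    equals Gamma(2/b) / sqrt(Gamma(1/b) * Gamma(3/b)), increasing in beta."""
    g = math.gamma
    return g(2.0 / beta) / math.sqrt(g(1.0 / beta) * g(3.0 / beta))

def fit_shape(samples) -> float:
    """Moment-matching estimate of the shape parameter via bisection."""
    m1 = sum(abs(x) for x in samples) / len(samples)
    m2 = sum(x * x for x in samples) / len(samples)
    target = m1 / math.sqrt(m2)
    lo, hi = 0.1, 10.0
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if gg_ratio(mid) < target else (lo, mid)
    return 0.5 * (lo + hi)

random.seed(0)
coeffs = [random.gauss(0.0, 1.0) for _ in range(20000)]  # stand-in "subband"
print(f"estimated shape ≈ {fit_shape(coeffs):.2f}")      # ≈ 2 for Gaussian data
```

A shape estimate near 1 corresponds to a Laplacian-like subband, near 2 to Gaussian; real wavelet detail subbands typically fit shapes well below 1.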

3.
4.
Fixed-rate universal block source coding with a fidelity criterion is considered for classes of composite sources with a finite (fixed) set of modes and an unknown switch process. In particular, it is shown that weakly minimax universal codes of all rates, with respect to an arbitrary distortion measure, exist for such processes.

5.
A class of distortionless codes designed by Bayes decision theory
The problem of distortionless encoding when the parameters of the probabilistic model of a source are unknown is considered from a statistical decision theory point of view. A class of predictive and nonpredictive codes is proposed that are optimal within this framework. Specifically, it is shown that the codeword length of the proposed predictive code coincides with that of the proposed nonpredictive code for any source sequence. A bound on the redundancy of universal coding is given in terms of the supremum of the Bayes risk. If this supremum exists, then there exists a minimax code in the proposed class whose mean code length approaches it, and the minimax code is given by the Bayes solution relative to the prior distribution on the source parameters that maximizes the Bayes risk.
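A standard concrete instance of such a Bayes-optimal predictive code (my example, not the paper's construction) is the Krichevsky-Trofimov code for binary sequences: the Bayes mixture over Bernoulli sources under a Dirichlet(1/2, 1/2) prior, whose ideal codeword length can be computed predictively one symbol at a time.

```python
import math

def kt_codelength(bits) -> float:
    """Ideal codeword length (in bits) of the Krichevsky-Trofimov mixture
    code, computed via its sequential predictive probabilities
    p(1 | past) = (n1 + 1/2) / (n0 + n1 + 1)."""
    n0 = n1 = 0
    length = 0.0
    for b in bits:
        p1 = (n1 + 0.5) / (n0 + n1 + 1.0)      # predictive probability of 1
        length -= math.log2(p1 if b else 1.0 - p1)
        n1, n0 = n1 + b, n0 + (1 - b)
    return length

seq = [1, 0, 1, 1, 1, 0, 1, 1]
print(f"{kt_codelength(seq):.3f} bits for {len(seq)} symbols")
```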

6.
A two-stage code is a block code in which each block of data is coded in two stages: the first stage codes the identity of a block code among a collection of codes, and the second stage codes the data using the identified code. The collection of codes may consist of noiseless codes, fixed-rate quantizers, or variable-rate quantizers. We take a vector quantization approach to two-stage coding, in which the first-stage code can be regarded as a vector quantizer that "quantizes" the input data of length n to one of a fixed collection of block codes. We apply the generalized Lloyd algorithm to the first-stage quantizer, using induced measures of rate and distortion, to design locally optimal two-stage codes. On a source of medical images, two-stage variable-rate vector quantizers designed in this way outperform standard (one-stage) fixed-rate vector quantizers by over 9 dB. The tail of the operational distortion-rate function of the first-stage quantizer determines the optimal rate of convergence of the redundancy of a universal sequence of two-stage codes. We show that there exist two-stage universal noiseless codes, fixed-rate quantizers, and variable-rate quantizers whose per-letter rate and distortion redundancies converge to zero as (k/2) n^{-1} log n when the universe of sources has finite dimension k. This extends the achievability part of Rissanen's theorem from universal noiseless codes to universal quantizers. Further, we show that the redundancies converge as O(n^{-1}) when the universe of sources is countable, and as O(n^{-1+ε}) when the universe of sources is infinite-dimensional, under appropriate conditions.
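For reference, below is a minimal sketch of the classical generalized Lloyd iteration for a single scalar quantizer under squared error; the two-stage design in the abstract applies the same alternation to the first-stage quantizer, with rate and distortion measures induced by the second-stage codes. Names and parameters here are illustrative.

```python
import random

def lloyd(data, codebook_size, iters=30):
    """Generalized Lloyd algorithm for squared error: alternate
    nearest-codeword assignment and centroid updates."""
    codebook = random.sample(data, codebook_size)
    for _ in range(iters):
        cells = [[] for _ in codebook]
        for x in data:                        # assignment (nearest neighbor)
            j = min(range(len(codebook)), key=lambda i: (x - codebook[i]) ** 2)
            cells[j].append(x)
        codebook = [sum(c) / len(c) if c else codebook[j]   # centroid update
                    for j, c in enumerate(cells)]
    return sorted(codebook)

random.seed(0)
samples = [random.gauss(0, 1) for _ in range(2000)]
print(lloyd(samples, 4))  # roughly the 4-level Lloyd-Max points for a Gaussian
```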

7.
The use of multiresolution (MR) joint source-channel coding in the context of digital terrestrial broadcasting of high-definition television (HDTV) is shown to be an efficient alternative to single-resolution techniques, which suffer from a sharp threshold effect at the fringes of the broadcast area. It is shown how matched multiresolution source and channel coding can provide stepwise graceful degradation and improve the coverage and robustness of the transmission scheme over systems not specifically designed for broadcast situations. The alternatives available for multiresolution transmission through embedded modulation and error-correction codes are examined. It is also shown how multiresolution trellis-coded modulation (TCM) can be used to increase coverage range. Coding results and simulations of noisy transmission are presented, and tradeoffs are discussed.
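As an illustration of the embedded-modulation idea (a generic sketch, not the paper's exact constellation): a 16-QAM constellation can be built so that two "coarse" bits select the quadrant and two "refinement" bits select the point within it; the ratio `d_big / d_small` (hypothetical parameters) trades protection of the coarse layer against the refinement layer.

```python
def embedded_16qam(bits, d_big=2.0, d_small=0.5):
    """Hypothetical embedded 16-QAM mapper: bits[0:2] pick the quadrant
    (robust coarse resolution), bits[2:4] pick the point inside the
    quadrant (fragile refinement). Larger d_big/d_small protects the
    coarse bits at the expense of refinement-bit distance."""
    assert len(bits) == 4
    sign_i = 1 if bits[0] else -1   # coarse bits: quadrant
    sign_q = 1 if bits[1] else -1
    ref_i = 1 if bits[2] else -1    # refinement bits: point in quadrant
    ref_q = 1 if bits[3] else -1
    return complex(sign_i * d_big + ref_i * d_small,
                   sign_q * d_big + ref_q * d_small)

print(embedded_16qam([1, 0, 0, 1]))  # (1.5-1.5j): quadrant I/-Q, offset -I/+Q
```

A distant receiver that can only resolve the quadrant still recovers the coarse bits, which is exactly the stepwise graceful degradation described above.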

8.
Although the existence of universal noiseless variable-rate codes for the class of discrete stationary ergodic sources has previously been established, very few practical universal encoding methods are available. This paper discusses efficient, implementable universal source coding techniques. Results are presented on source codes that achieve a small maximum redundancy with relatively short block lengths. A constructive proof of the existence of universal noiseless codes for discrete stationary sources is first presented. The proof is shown to provide a method for obtaining efficient universal noiseless variable-rate codes for various classes of sources. For memoryless sources, upper and lower bounds are obtained for the minimax redundancy as a function of the block length of the code. Several techniques for constructing universal noiseless source codes for memoryless sources are presented, and their redundancies are compared with the bounds. Possible applications to data compression for certain nonstationary sources are also considered.

9.
10.
Weighted universal image compression
We describe a general coding strategy leading to a family of universal image compression systems designed to give good performance in applications where the statistics of the source to be compressed are not available at design time or vary over time or space. The basic approach uses a two-stage structure in which the single source code of traditional image compression systems is replaced with a family of codes designed to cover a large class of possible sources. To illustrate this approach, we consider the optimal design and use of two-stage codes containing collections of vector quantizers (weighted universal vector quantization), bit allocations for JPEG-style coding (weighted universal bit allocation), and transform codes (weighted universal transform coding). Further, we demonstrate the benefits gained from the inclusion of perceptual distortion measures and optimal parsing. The strategy yields two-stage codes that significantly outperform their single-stage predecessors. On a sequence of medical images, weighted universal vector quantization outperforms entropy-coded vector quantization by over 9 dB. On the same data sequence, weighted universal bit allocation outperforms a JPEG-style code by over 2.5 dB. On a collection of mixed text and image data, weighted universal transform coding outperforms a single, data-optimized transform code (which gives performance almost identical to that of JPEG) by over 6 dB.
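A minimal sketch of the two-stage idea (illustrative names and codebooks, not the paper's system): the first stage sends which code in the family best fits the block, and the second stage codes the block with that code.

```python
import math

def quantize(block, codebook):
    """Nearest-codeword quantization; returns (indices, per-sample MSE)."""
    idx = [min(range(len(codebook)), key=lambda i: (x - codebook[i]) ** 2)
           for x in block]
    mse = sum((x - codebook[i]) ** 2 for x, i in zip(block, idx)) / len(block)
    return idx, mse

def two_stage_encode(block, family):
    """Stage 1: send the index of the best-fitting codebook
    (ceil(log2 |family|) header bits). Stage 2: code the block with it."""
    best = min(range(len(family)), key=lambda f: quantize(block, family[f])[1])
    idx, mse = quantize(block, family[best])
    header_bits = math.ceil(math.log2(len(family)))
    return best, idx, header_bits, mse

# Hypothetical family: one codebook tuned to low-variance blocks, one to high.
family = [[-0.3, 0.0, 0.3], [-2.0, 0.0, 2.0]]
print(two_stage_encode([1.8, -2.2, 0.1, 2.5], family))
```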

11.
12.
In a causal source coding system, the reconstruction of the present source sample is restricted to be a function of the present and past source samples, while the code stream itself may be noncausal and have variable rate. Neuhoff and Gilbert showed that for memoryless sources, optimum performance among all causal source codes is achieved by time-sharing at most two memoryless codes (quantizers) followed by entropy coding. In this work, we extend Neuhoff and Gilbert's result in the limit of small distortion (high resolution) to two new settings. First, we show that at high resolution, an optimal causal code for a stationary source with finite differential entropy rate consists of a uniform quantizer followed by a (sequence) entropy coder. This implies that the price of causality at high resolution is approximately 0.254 bit, i.e., the space-filling loss of the uniform quantizer. Then, we consider individual sequences and introduce a deterministic analogue of differential entropy, which we call "Lempel-Ziv differential entropy." We show that for any bounded individual sequence with finite Lempel-Ziv differential entropy, optimum high-resolution performance among all finite-memory variable-rate causal codes is achieved by dithered scalar uniform quantization followed by Lempel-Ziv coding. As a by-product, we also prove an individual-sequence version of the Shannon lower bound.
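The 0.254-bit figure quoted above is the high-resolution space-filling loss of the scalar uniform quantizer, 0.5·log2(2πe/12); a one-line check (my illustration of the stated constant):

```python
import math

# High-resolution space-filling loss of a scalar uniform quantizer:
# 0.5 * log2(2*pi*e / 12) -- the "price of causality" cited above.
loss = 0.5 * math.log2(2 * math.pi * math.e / 12)
print(f"space-filling loss ≈ {loss:.4f} bits")  # ≈ 0.2546
```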

13.
The authors consider the encoding of image subbands with a tree code that is asymptotically optimal for Gaussian sources under the mean squared error (MSE) distortion measure. They first prove that optimal encoding of ideally filtered subbands of a Gaussian image source achieves the rate-distortion bound for the MSE distortion measure. The optimal rate and distortion allocation among the subbands is a by-product of this proof. A bound is derived which shows that, for a finite-length sequence, subband coding is closer than full-band coding to the rate-distortion bound. The tree codes are then applied to encode the image subbands, both nonadaptively and adaptively. Since the tree codes are stochastic and the search of the code tree is selective, relatively few reproduction symbols may have an associated squared error a hundred times larger than the target for the subband. Correcting these symbols through a postcoding procedure improves the signal-to-noise ratio and visual quality significantly, with a marginal increase in total rate.

14.
Variable-rate universal source codes are data compression schemes that are optimum for the coding of a collection of sources (e.g., a source with unknown parameters) subject to a fixed average distortion constraint. Existence of variable-rate universal source codes is demonstrated for very general classes of sources and distortion measures.

15.
Universal lossless source coding with the Burrows-Wheeler transform
The Burrows-Wheeler transform (BWT, 1994) is a reversible sequence transformation used in a variety of practical lossless source-coding algorithms. In each, the BWT is followed by a lossless source code that attempts to exploit the natural ordering of the BWT coefficients. BWT-based compression schemes are widely touted as low-complexity algorithms giving lossless coding rates better than those of the Ziv-Lempel codes (commonly known as LZ'77 and LZ'78) and almost as good as those achieved by prediction by partial matching (PPM) algorithms. To date, the coding performance claims have been made primarily on the basis of experimental results. This work gives a theoretical evaluation of BWT-based coding. The main results include: (1) statistical characterizations of the BWT output on both finite strings and sequences of length n → ∞; (2) a variety of very simple new techniques for BWT-based lossless source coding; and (3) proofs of the universality and bounds on the rates of convergence of both new and existing BWT-based codes for finite-memory and stationary ergodic sources. The end result is a theoretical justification and validation of the experimentally derived conclusions: BWT-based lossless source codes achieve universal lossless coding performance that converges to the optimal coding performance more quickly than the rate of convergence observed in Ziv-Lempel style codes and, for some BWT-based codes, within a constant factor of the optimal rate of convergence for finite-memory sources.
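For concreteness, here is a naive textbook sketch of the transform and its inverse (quadratic time, fine for illustration; production BWT coders use suffix-array constructions instead):

```python
def bwt(s: str, sentinel: str = "\x00") -> str:
    """Naive Burrows-Wheeler transform: sort all rotations of s + sentinel
    and read off the last column."""
    s += sentinel
    rotations = sorted(s[i:] + s[:i] for i in range(len(s)))
    return "".join(rot[-1] for rot in rotations)

def ibwt(t: str, sentinel: str = "\x00") -> str:
    """Invert the BWT by repeatedly prepending the transform column
    and sorting; the row ending in the sentinel is the original."""
    table = [""] * len(t)
    for _ in range(len(t)):
        table = sorted(c + row for c, row in zip(t, table))
    row = next(r for r in table if r.endswith(sentinel))
    return row.rstrip(sentinel)

text = "banana"
enc = bwt(text)          # 'annb\x00aa' -- like symbols cluster together
assert ibwt(enc) == text
print(repr(enc))
```

The clustering of like symbols in the output is the "natural ordering" that the second-stage lossless code exploits.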

16.
Average-case universal compression of independent and identically distributed (i.i.d.) sources is investigated, where the source alphabet is large and may be sublinear in size, or even larger than the compressed data sequence length n. In particular, the well-known results for fixed-size alphabets, including Rissanen's strongest-sense lower bound, are extended to the case where the alphabet size k is allowed to grow with n. It is shown that as long as k = o(n), instead of the coding cost in the fixed-size alphabet case of 0.5 log n extra code bits for each of the k − 1 unknown probability parameters, the cost is now 0.5 log(n/k) code bits for each unknown parameter. This result is shown to be the lower bound in the minimax and maximin senses, as well as for almost every source in the class. Achievability of this bound is demonstrated with two-part codes based on quantization of the maximum-likelihood (ML) probability parameters, as well as by using the well-known Krichevsky-Trofimov (KT) low-complexity sequential probability estimates. For very large alphabets, k ≫ n, it is shown that an average minimax and maximin bound on the redundancy is essentially (to first order) log(k/n) bits per symbol. This bound is shown to be achievable both with two-part codes and with a sequential modification of the KT estimates. For k = Θ(n), the redundancy is Θ(1) bits per symbol. Finally, sequential codes are designed for coding sequences in which only m …
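A quick numeric reading of the two regimes above (first-order only; the function and its sample values are my illustration, not from the paper):

```python
import math

def redundancy_per_symbol(n: int, k: int) -> float:
    """First-order redundancy (bits/symbol) in the two regimes above:
    ~0.5*(k-1)*log2(n/k) total bits when k = o(n), and ~log2(k/n)
    bits per symbol when k >> n. Not meaningful near k = Theta(n)."""
    if k < n:
        return 0.5 * (k - 1) * math.log2(n / k) / n
    return math.log2(k / n)

for n, k in [(10**6, 256), (10**6, 10**4), (10**6, 10**9)]:
    print(f"n={n:>9,}  k={k:>13,}  ~{redundancy_per_symbol(n, k):.4f} bits/symbol")
```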

17.
A multidimensional incremental parsing algorithm (MDIP) for multidimensional discrete sources, a generalization of the Lempel-Ziv coding algorithm, is investigated. It consists of three essential component schemes: maximum decimation matching, a hierarchical structure for multidimensional source coding, and dictionary augmentation. As a counterpart of the longest-match search in the Lempel-Ziv algorithm, two classes of maximum decimation matching are studied. An underlying behavior of the dictionary augmentation scheme for estimating the source statistics is also examined. For an m-dimensional source, m augmentative patches are appended to the dictionary at each coding epoch, which would require the transmission of a substantial amount of information to the decoder. The hierarchical structure of the source coding algorithm resolves this issue by successively incorporating lower dimensional coding procedures in the scheme. With regard to universal lossy source coders, we propose two distortion functions: the local average distortion and the local minimax distortion with a set of threshold levels for each source symbol. For performance evaluation, we implemented three image compression algorithms based on the MDIP; one is lossless and the others are lossy. The lossless image compression algorithm does not perform better than Lempel-Ziv-Welch coding, but it experimentally shows efficiency in capturing the source structure. The two lossy image compression algorithms are implemented using the two distortion functions, respectively. The algorithm based on the local average distortion is efficient at minimizing the signal distortion, while the images produced by the one with the local minimax distortion have good perceptual fidelity compared with other compression algorithms. Our insights inspire future research on feature extraction of multidimensional discrete sources.
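For reference, the one-dimensional incremental parsing that MDIP generalizes (an LZ78-style textbook sketch, not the authors' code): each phrase is the longest previously seen phrase extended by one new symbol, and the dictionary grows by one entry per phrase.

```python
def lz78_parse(s: str):
    """LZ78 incremental parsing: emit (dictionary index of longest
    previously seen prefix, new symbol) and add the extended phrase."""
    dictionary = {"": 0}
    phrases, current = [], ""
    for ch in s:
        if current + ch in dictionary:
            current += ch                       # extend the match
        else:
            phrases.append((dictionary[current], ch))
            dictionary[current + ch] = len(dictionary)
            current = ""
    if current:                                 # flush a trailing match
        phrases.append((dictionary[current[:-1]], current[-1]))
    return phrases

print(lz78_parse("abababc"))  # [(0,'a'), (0,'b'), (1,'b'), (3,'c')]
```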

18.
The minimum probability of error achievable by random codes on the arbitrarily varying channel (AVC) is investigated. New exponential error bounds are found and applied to the AVC with and without input and state constraints. Also considered is a simple subclass of random codes, called randomly modulated codes, in which encoding and decoding operations are separate from code randomization. A universal coding theorem is proved which shows the existence of randomly modulated codes that achieve the same error bounds as “fully” random codes for all AVCs.

19.
This paper is concerned with two problems in the theory of source coding subject to a maximum average distortion constraint. The first problem involves the coding of a nonergodic discrete-time source, and the second involves coding for a class of ergodic discrete-time sources. Coding theorems are given for both situations for very general source alphabets. The codes obtained here will, in general, be variable-length codes, so average code rate and average distortion are the measures of performance.

20.
Fundamental limits on the source coding exponents (or large-deviations performance) of zero-delay finite-memory (ZDFM) lossy source codes are studied. Our main results are the following. For any memoryless source, a suitably designed encoder that time-shares (at most two) memoryless scalar quantizers is as good as any time-varying fixed-rate ZDFM code, in that it can achieve the fastest exponential rate of decay for the probability of excess distortion. A dual result is shown to apply to the probability of excess code length, among all fixed-distortion ZDFM codes with variable rate. Finally, it is shown that if the scope is broadened to ZDFM codes with variable rate and variable distortion, then a time-invariant entropy-coded memoryless quantizer (without time sharing) is asymptotically optimal under a "fixed-slope" large-deviations criterion (introduced and motivated here in detail) corresponding to a linear combination of the code length and the distortion. These results also lead to single-letter characterizations of the source coding error exponents of ZDFM codes.
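A minimal sketch of a time-invariant entropy-coded memoryless quantizer scored with a fixed-slope cost, i.e., a linear combination of rate and distortion as in the abstract (my illustration: ideal empirical-entropy code lengths stand in for the entropy coder, and `step` and `lam` are hypothetical parameters):

```python
import math
import random
from collections import Counter

def ecsq(samples, step: float, lam: float):
    """Uniform scalar quantization with the given step, ideal
    (empirical-entropy) code lengths, and the fixed-slope Lagrangian
    score rate + lam * distortion."""
    idx = [round(x / step) for x in samples]
    counts = Counter(idx)
    n = len(samples)
    rate = sum(-math.log2(counts[i] / n) for i in idx) / n   # bits/sample
    mse = sum((x - i * step) ** 2 for x, i in zip(samples, idx)) / n
    return rate, mse, rate + lam * mse

random.seed(1)
data = [random.gauss(0.0, 1.0) for _ in range(5000)]
print("rate, mse, Lagrangian cost:", ecsq(data, step=0.5, lam=4.0))
```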

