Similar Documents
20 similar documents found.
1.
For stationary discrete-time Gaussian sources and the squared-error distortion measure, a trellis source code is constructed. The encoder consists of a Karhunen-Loève transform on the source output followed by a search on a trellis-structured code, where the decoder is a time-variant nonlinear filter. The corresponding coding theorem is proved using the random coding argument. The proof technique follows that of Viterbi and Omura, who proved the trellis coding theorem for memoryless sources. The resultant coding scheme is implementable and applicable at any nonzero rate to a stationary Gaussian source with a bounded and continuous power spectrum. Therefore, for stationary sources, it is more general than Berger's tree coding scheme, which is restricted to autoregressive Gaussian sources in a region of high rate (low distortion).
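As a rough illustration of the search step described above, the sketch below runs a Viterbi search over a rate-1 bit/sample trellis whose branches carry reproduction levels, picking the bit path with minimum squared error. The transform stage and the random construction of the decoder filter are omitted; the `branch_output` table is a hypothetical stand-in for that filter, not the construction used in the paper.

```python
import itertools
import numpy as np

def trellis_source_encode(x, branch_output, K):
    """Viterbi search over a rate-1 bit/sample trellis source code.

    x             : source samples (after any transform), 1-D array
    branch_output : dict mapping a length-K bit history (newest bit first)
                    to a reproduction level -- a stand-in for the decoder's
                    sliding-window nonlinear filter
    K             : constraint length (state = last K-1 bits)
    """
    n_states = 2 ** (K - 1)
    cost = np.full(n_states, np.inf)
    cost[0] = 0.0                          # start in the all-zero state
    paths = [[] for _ in range(n_states)]

    for sample in x:
        new_cost = np.full(n_states, np.inf)
        new_paths = [None] * n_states
        for s in range(n_states):
            if not np.isfinite(cost[s]):
                continue
            for b in (0, 1):               # candidate input bit on this branch
                history = (b,) + tuple((s >> i) & 1 for i in range(K - 1))
                level = branch_output[history]
                c = cost[s] + (sample - level) ** 2
                ns = ((s << 1) | b) & (n_states - 1)
                if c < new_cost[ns]:
                    new_cost[ns] = c
                    new_paths[ns] = paths[s] + [b]
        cost, paths = new_cost, new_paths

    best = int(np.argmin(cost))
    return paths[best], cost[best] / len(x)

# Example: random branch levels, K = 4, i.i.d. Gaussian "source".
K = 4
rng = np.random.default_rng(0)
branch_output = {h: rng.standard_normal()
                 for h in itertools.product((0, 1), repeat=K)}
bits, mse = trellis_source_encode(rng.standard_normal(64), branch_output, K)
```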

2.
The generalized Lloyd algorithm is applied to the design of joint source and channel trellis waveform coders to encode discrete-time continuous-amplitude stationary and ergodic sources operating over discrete memoryless noisy channels. Experimental results are provided for independent and autoregressive Gaussian sources, binary symmetric channels, and absolute error and squared error distortion measures. Performance of the joint codes is compared with the tandem combination of a trellis source code and a trellis channel code on the independent Gaussian source using the squared error distortion measure operating over an additive white Gaussian noise channel. It is observed that the jointly optimized codes achieve performance close to or better than that of separately optimized tandem codes of the same constraint length. Performance improvement via a predictive joint source and channel trellis code is demonstrated for the autoregressive Gaussian source using the squared error distortion measure.
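The generalized Lloyd iteration alternates two necessary conditions for optimality: a minimum-distortion assignment of training vectors to codewords and a centroid update of each codeword. The sketch below shows only that core loop for a plain vector quantizer under squared error; extending it to trellis branch labels and a noisy channel, as the paper does, changes the assignment metric and the centroid rule but not the alternation.

```python
import numpy as np

def generalized_lloyd(training, n_codewords, n_iter=50, seed=0):
    """Basic generalized Lloyd (k-means style) codebook design for the
    squared-error measure on a training set of shape (num_vectors, dim)."""
    training = np.asarray(training, dtype=float)
    rng = np.random.default_rng(seed)
    codebook = training[rng.choice(len(training), n_codewords, replace=False)]
    labels = np.zeros(len(training), dtype=int)
    for _ in range(n_iter):
        # Nearest-neighbor condition: assign each vector to its closest codeword.
        d2 = ((training[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=2)
        labels = d2.argmin(axis=1)
        # Centroid condition: replace each codeword by the mean of its cell.
        for i in range(n_codewords):
            cell = training[labels == i]
            if len(cell) > 0:
                codebook[i] = cell.mean(axis=0)
    return codebook, labels
```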

3.
4.
We consider a quadratic Gaussian distributed lossy source coding setup with an additional constraint of identical reconstructions between the encoder and the decoder. The setup consists of two correlated Gaussian sources, wherein one of them has to be reconstructed to be within some distortion constraint and match with a corresponding reconstruction at the encoder, while the other source acts as coded side information. We study the trade-off between the rates of two encoders for a given distortion constraint on the reconstruction. An explicit characterization of this trade-off is the main result of the paper. We also give close inner and outer bounds for the discrete memoryless version of the problem.

5.
The rate distortion function R(D) is calculated for two time-discrete autoregressive sources: the time-discrete Gaussian autoregressive source with a mean-square-error fidelity criterion and the binary-symmetric first-order Markov source with an average probability-of-error per bit fidelity criterion. In both cases it is shown that R(D) is bounded below by the rate distortion function of the independent-letter identically distributed sequence that generates the autoregressive source. This lower bound is shown to hold with equality for a nonzero region of small average distortion. The positive coding theorem is proved for the possibly nonstationary Gaussian autoregressive source with a constraint on the parameters. Finally, it is shown that the rate distortion function of any time-discrete autoregressive source with a difference distortion measure can be bounded below by the rate distortion function of the independent-letter identically distributed generating sequence with the same distortion measure.
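For orientation (a standard formula, not quoted from the paper): in the Gaussian case the lower bound above is the memoryless Gaussian rate distortion function of the generating sequence,

$$ R_{\mathrm{iid}}(D) \;=\; \max\!\Bigl\{0,\ \tfrac{1}{2}\log_{2}\frac{\sigma^{2}}{D}\Bigr\} \quad\text{bits per sample}, $$

where σ² is the variance of the i.i.d. Gaussian innovation driving the autoregressive recursion; the claim is that R_AR(D) ≥ R_iid(D), with equality for sufficiently small D.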

6.
As the transmission rate R gets large, differential pulse-code modulation (DPCM) followed by entropy coding forms a source encoding system that performs within 1.53 dB of Shannon's rate distortion function, which bounds the performance of any encoding system under a minimum mean-square error (mmse) fidelity criterion. This is true for any ergodic signal source. Furthermore, this source encoder introduces the same amount of uncertainty as the mmse encoder. The 1.53 dB difference between this encoder and the mmse encoder is perceptually so small that it would probably not be noticed by a human user of a high-quality (signal-to-noise ratio S/N ≥ 30 dB) speech or television source encoding system.
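The 1.53 dB figure is the familiar high-rate gap of entropy-coded uniform quantization relative to the rate distortion bound; stated here for reference (our identification, not a line from the abstract):

$$ 10\log_{10}\frac{\pi e}{6} \;\approx\; 1.53\ \text{dB} \qquad\Longleftrightarrow\qquad \tfrac{1}{2}\log_{2}\frac{\pi e}{6} \;\approx\; 0.255\ \text{bits per sample}. $$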

7.
An alternative to extrinsic information transfer (EXIT) charts, called mean-square error (MSE) charts, that uses a measure related to the MSE instead of mutual information is proposed. Using the relationship between mutual information and minimum mean-square error (MMSE) for the additive white Gaussian noise (AWGN) channel, a relationship is obtained between the rate of any code and the area under a plot of MMSE versus signal-to-noise ratio (SNR), when the a priori log-likelihood ratio (LLR) is from a binary-input Gaussian channel. Using this result, a justification is provided for designing concatenated codes by matching the EXIT curves of the inner and outer decoders, when the LLRs are assumed to be Gaussian, which is also the typical assumption used for code design with EXIT charts. Even though the Gaussian assumption is almost never true, the results presented in this paper represent a step toward the analysis of iterative decoding schemes using a single parameter. Finally, for the special case of the AWGN channel, it is shown that any capacity-achieving code has an EXIT curve that is a step function.
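The relationship between mutual information and MMSE invoked here is, for a real-valued Gaussian channel, the standard identity

$$ Y=\sqrt{\mathrm{snr}}\,X+N,\ \ N\sim\mathcal{N}(0,1): \qquad \frac{d}{d\,\mathrm{snr}}\,I(X;Y)\;=\;\tfrac{1}{2}\,\mathrm{mmse}(\mathrm{snr}) \quad\text{(nats)}, $$

with mmse(snr) = E[(X − E[X|Y])²]; integrating this over snr is what ties a code's rate to the area under its MMSE curve.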

8.
The quantization of n-dimensional vectors in R^n with an arbitrary probability measure, under a mean-square error constraint, is discussed. It is demonstrated that a uniform, one-dimensional quantizer followed by a noiseless digital variable-rate encoder ("entropy encoding") can yield a rate that is, for any n, no more than 0.754 bit per sample higher than the rate associated with the optimal n-dimensional quantizer, regardless of the probabilistic characterization of the input n-vector for the allowable mean-square error.
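For orientation only (an identification on our part, not stated in the abstract), the constant coincides numerically with

$$ \tfrac{1}{2}\log_{2}\frac{2\pi e}{6} \;=\; \tfrac{1}{2}\log_{2}\frac{\pi e}{3} \;\approx\; 0.754\ \text{bits per sample}. $$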

9.
The problem is considered of encoding a discrete memoryless source when correlated side information may or may not be available to the decoder. It is assumed that the side information is not available to the encoder. The rate-distortion function R(D_1, D_2) is determined, where D_1 is the distortion achieved with side information and D_2 is the distortion achieved without it. A generalization is made to the case of m decoders, each of which is privy to its own side information. An appropriately defined D-admissible rate for this general case is shown to equal R(D) when the side information sources satisfy a specified degradedness condition. Explicit results are obtained in the quadratic Gaussian case and in the binary Hamming case.

10.
The optimal data compression problem is posed in terms of an alphabet constraint rather than an entropy constraint. Solving the optimal alphabet-constrained data compression problem yields explicit source encoder/decoder designs, which is in sharp contrast to other approaches. The alphabet-constrained approach is shown to have the additional advantages that (1) classical waveform encoding schemes, such as pulse code modulation (PCM), differential pulse code modulation (DPCM), and delta modulation (DM), as well as rate-distortion-theory-motivated tree/trellis coders, fit within this theory; (2) the concept of preposterior analysis in data compression is introduced, yielding a rich, new class of coders; and (3) it provides a conceptual framework for the design of joint source/channel coders for noisy channel applications. Examples are presented of single-path differential encoding, delayed (or tree) encoding, preposterior analysis, and source coding over noisy channels.
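As a concrete reminder of the simplest coder in the list above, here is a minimal first-order DPCM loop with a uniform quantizer; the prediction coefficient `a` and the step size are hypothetical choices, and no entropy or channel coding is included.

```python
import numpy as np

def dpcm(x, step, a=0.9):
    """First-order DPCM with a uniform quantizer: a toy instance of the
    classical waveform coders covered by the alphabet-constrained framework."""
    x = np.asarray(x, dtype=float)
    recon = np.zeros_like(x)
    indices = np.zeros(len(x), dtype=int)
    pred = 0.0
    for n in range(len(x)):
        e = x[n] - pred                    # prediction error
        q = int(np.round(e / step))        # quantizer index (the "alphabet")
        indices[n] = q
        recon[n] = pred + q * step         # decoder's reconstruction
        pred = a * recon[n]                # predictor tracks the reconstruction
    return indices, recon
```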

11.
The encoding of independent data symbols as a sequence of discrete-amplitude real variables with a given power spectrum is considered. The maximum rate of such an encoding is determined by the achievable entropy of the discrete sequence under the given constraints. An upper bound to this entropy is expressed in terms of the rate distortion function for a memoryless finite-alphabet source and the mean-square error distortion measure. A class of simple dc-free power spectra is considered in detail, and a method for constructing Markov sources with such spectra is derived. It is found that these sequences have greater entropies than most codes with similar spectra that have been suggested earlier, and that they often come close to the upper bound. When the constraint on the power spectrum is replaced by a constraint on the variance of the sum of the encoded symbols, a stronger upper bound to the rate of dc-free codes is obtained. Finally, the optimality of the binary biphase code and of the ternary bipolar code is decided.
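The ternary bipolar code mentioned in the last sentence is easy to state, and it makes the dc-free idea concrete: marks alternate in sign, so the running digital sum stays bounded. A minimal sketch (the biphase code and the Markov-source constructions of the paper are not shown):

```python
def bipolar_encode(bits):
    """Ternary bipolar (AMI) code: 0 -> 0, 1 -> alternately +1 / -1.
    The alternation keeps the running digital sum bounded, which forces a
    spectral null at dc."""
    out, sign = [], +1
    for b in bits:
        if b == 0:
            out.append(0)
        else:
            out.append(sign)
            sign = -sign
    return out

def running_digital_sum(symbols):
    """Partial sums of the encoded symbols; a bounded RDS corresponds to a
    dc-free spectrum."""
    s, rds = 0, []
    for v in symbols:
        s += v
        rds.append(s)
    return rds
```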

12.
Properties of optimal entropy-constrained vector quantizers (ECVQs) are studied for the squared-error distortion measure. It is known that restricting an ECVQ to have convex codecells may preclude its optimality for some sources with discrete distribution. We show that for sources with continuous distribution, any finite-level ECVQ can be replaced by another finite-level ECVQ with convex codecells that has equal or better performance. We generalize this result to infinite-level quantizers, and also consider the problem of existence of optimal ECVQs for continuous source distributions. In particular, we show that given any entropy constraint, there exists an ECVQ with (possibly infinitely many) convex codecells that has minimum distortion among all ECVQs satisfying the constraint. These results extend analogous statements in entropy-constrained scalar quantization. They also generalize results in entropy-constrained vector quantization that were obtained via the Lagrangian formulation and, therefore, are valid only for certain values of the entropy constraint.
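The Lagrangian formulation mentioned at the end trades distortion against output entropy through a single multiplier; for reference, it minimizes

$$ J_{\lambda}(Q) \;=\; \mathbb{E}\,d\bigl(X, Q(X)\bigr) \;+\; \lambda\,H\bigl(Q(X)\bigr), \qquad \lambda>0, $$

which reaches only those operating points lying on the lower convex hull of the distortion-entropy trade-off; the results above, in contrast, are stated for every value of the entropy constraint.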

13.
On the structure of optimal entropy-constrained scalar quantizers
The nearest neighbor condition implies that when searching for a mean-square optimal fixed-rate quantizer it is enough to consider the class of regular quantizers, i.e., quantizers having convex cells and codepoints which lie inside the associated cells. In contrast, quantizer regularity can preclude optimality in entropy-constrained quantization. This can be seen by exhibiting a simple discrete scalar source for which the mean-square optimal entropy-constrained scalar quantizer (ECSQ) has disconnected (and hence nonconvex) cells at certain rates. In this work, new results concerning the structure and existence of optimal ECSQs are presented. One main result shows that for continuous sources and distortion measures of the form d(x,y) = ρ(|x-y|), where ρ is a nondecreasing convex function, any finite-level ECSQ can be "regularized" so that the resulting regular quantizer has the same entropy and equal or less distortion. Regarding the existence of optimal ECSQs, we prove that under rather general conditions there exists an "almost regular" optimal ECSQ for any entropy constraint. For the squared error distortion measure and sources with piecewise-monotone and continuous densities, the existence of a regular optimal ECSQ is shown.

14.
Fundamental limits on the source coding exponents (or large deviations performance) of zero-delay finite-memory (ZDFM) lossy source codes are studied. Our main results are the following. For any memoryless source, a suitably designed encoder that time-shares (at most two) memoryless scalar quantizers is as good as any time-varying fixed-rate ZDFM code, in that it can achieve the fastest exponential rate of decay for the probability of excess distortion. A dual result is shown to apply to the probability of excess code length, among all fixed-distortion ZDFM codes with variable rate. Finally, it is shown that if the scope is broadened to ZDFM codes with variable rate and variable distortion, then a time-invariant entropy-coded memoryless quantizer (without time sharing) is asymptotically optimal under a "fixed-slope" large-deviations criterion (introduced and motivated here in detail) corresponding to a linear combination of the code length and the distortion. These results also lead to single-letter characterizations for the source coding error exponents of ZDFM codes.

15.
G.D. Forney (1970, 1975) defined a minimal encoder as a polynomial matrix G such that G generates the code and G has the least constraint length among all generators for the code. Any convolutional code can be generated by a minimal encoder. High-rate k/(k+1) punctured convolutional codes were introduced to simplify Viterbi decoding. An ordinary convolutional encoder G can be obtained from any punctured encoder. A punctured encoder is minimal if the corresponding ordinary encoder G is minimal and the punctured and ordinary encoders have the same constraint length. It is shown that any rate k/(k+1), noncatastrophic, antipodal punctured encoder is a minimal encoder.

16.
For memoryless discrete-time sources and bounded single-letter distortion measures, we derive a bound on the average per-letter distortion achievable by a trellis source code of fixed constraint length. For any fixed code rate greater than R(D*), the rate-distortion function at D*, this bound decreases toward D* exponentially with constraint length.

17.
Loose composite constraint codes and their application in DVD
Constrained coding is used in recording systems to translate an arbitrary sequence of input data into a channel sequence with special properties required by the physics of the medium. Very often, more than one constraint is imposed on a recorded sequence; typically, a run-length constraint is combined with a spectral-null constraint. We introduce a low-complexity encoder structure for composite constraints, based on loose multimode codes. The first channel constraint is imposed strictly, and the second constraint is imposed in a probabilistic fashion. Relaxing the second constraint is beneficial because it enables higher code rates and simplifies the encoder. To control the second constraint, a multimode encoder is used. We represent a set of multimode coded sequences by a weighted trellis and propose using a limited trellis search to select the optimal output. Using this method, we modify the EFM+ code used in the digital versatile disk (DVD). We combine EFM+'s run-length constraint with first- and second-order spectral-null constraints. The resulting EFM++ code gives more than 10 dB of improvement in suppression of low-frequency spectral content in the servo bandwidth over the original EFM+ code, with the same complexity.
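The selection step of a multimode encoder can be pictured with a simple metric: among the alternative channel sequences that represent the same data, keep the one with the least low-frequency content. The sketch below uses the sum of squared running digital sums as that metric; the actual EFM+ code tables, the weighted trellis, and the limited search of the paper are not reproduced here.

```python
import numpy as np

def low_frequency_metric(symbols):
    """Sum of squared running digital sums: a simple proxy for spectral
    content near dc (smaller means better suppression)."""
    rds = np.cumsum(symbols)
    return float(np.sum(rds ** 2))

def select_candidate(candidates):
    """Multimode selection: pick, among alternative encodings of the same
    data, the one with the least dc content.  Candidate generation (code
    tables, trellis search) is not shown."""
    return min(candidates, key=low_frequency_metric)
```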

18.
Coding isotropic images
Rate-distortion functions for two-dimensional homogeneous isotropic images are compared with the performance of five source encoders designed for such images. Both unweighted and frequency-weighted mean-square error distortion measures are considered. The coders considered are a) differential pulse code modulation (DPCM) using six previous samples or picture elements (pels) in the prediction, herein called 6-pel DPCM, b) simple DPCM using single-sample prediction, c) 6-pel DPCM followed by entropy coding, d) 8×8 discrete cosine transform coding, and e) 4×4 Hadamard transform coding. Other transform coders were studied and found to have about the same performance as the two transform coders above. With the mean-square error distortion measure, 6-pel DPCM with entropy coding performed best. Next best were the 8×8 discrete cosine transform coder and 6-pel DPCM, which had approximately the same distortion. Next were the 4×4 Hadamard coder and simple DPCM, in that order. The relative performance of the coders changed slightly when the distortion measure was frequency-weighted mean-square error. From R = 1 to 3 bits/pel, which was the range studied here, the performances of all the coders were separated by only about 4 dB.
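To make the transform-coder entries concrete, here is a toy 8×8 block DCT coder that keeps only the largest-magnitude coefficients in each block and inverts; the number kept (`keep`) is a hypothetical knob, and the quantization and bit allocation actually used by the compared coders are omitted.

```python
import numpy as np
from scipy.fft import dctn, idctn

def block_dct_code(image, block=8, keep=10):
    """Toy 8x8 DCT transform coder: per block, zero all but the `keep`
    largest-magnitude coefficients, then invert.  Covers full tiles only."""
    h, w = image.shape
    out = np.zeros_like(image, dtype=float)
    for i in range(0, h - h % block, block):
        for j in range(0, w - w % block, block):
            blk = image[i:i + block, j:j + block].astype(float)
            c = dctn(blk, norm="ortho")                  # 2-D DCT of the block
            thresh = np.sort(np.abs(c).ravel())[-keep]   # keep-th largest magnitude
            c[np.abs(c) < thresh] = 0.0
            out[i:i + block, j:j + block] = idctn(c, norm="ortho")
    return out
```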

19.
For a fixed total bandwidth expansion factor, we consider the problem of optimal bandwidth allocation among the source coder, the channel coder, and the spread-spectrum unit for a direct-sequence code-division multiple-access system operating over a frequency-selective fading channel with narrowband interference. Assuming a Gaussian source with the optimum scalar quantizer and a binary convolutional code with soft-decision decoding, and further assuming that the self-interference is negligible, we obtain both a lower and an upper bound on the end-to-end average source distortion. The joint three-way constrained optimization of the source code rate, the channel code rate, and the spreading factor can be simplified into an unconstrained optimization problem over two variables. Upon fixing the channel code rate, we show that both upper and lower bound-based distortion functions are convex functions of the source code rate. Because an explicit solution for the optimum source code rate, i.e., one that minimizes the average distortion, is difficult to obtain, computer-based search techniques are employed. Numerical results are presented for the optimum source code rate and spreading factor, parameterized by the channel code rate and code constraint length. The optimal bandwidth allocation, in general, depends on the system and the channel conditions, such as the total number of active users, the average jammer-to-signal power ratio, and the number of resolved multipath components together with their power delay profile.

20.
For a stationary ergodic source, the source coding theorem and its converse imply that the optimal performance theoretically achievable by a fixed-rate or variable-rate block quantizer is equal to the distortion-rate function, which is defined as the infimum of an expected distortion subject to a mutual information constraint. For a stationary nonergodic source, however, the distortion-rate function cannot in general be achieved arbitrarily closely by a fixed-rate block code. We show, though, that for any stationary nonergodic source with a Polish alphabet, the distortion-rate function can be achieved arbitrarily closely by a variable-rate block code. We also show that the distortion-rate function of a stationary nonergodic source has a decomposition as the average of the distortion-rate functions of the source's stationary ergodic components, where the average is taken over points on the component distortion-rate functions having the same slope. These results extend previously known results for finite alphabets.
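For reference, the distortion-rate function referred to above is, for a stationary source with block distortion d_n on length-n words (a standard definition, not quoted from the paper),

$$ D(R) \;=\; \lim_{n\to\infty}\ \inf_{\substack{p(\hat{x}^{\,n}\mid x^{\,n}):\ \frac{1}{n} I(X^{n};\hat{X}^{n})\le R}} \ \frac{1}{n}\,\mathbb{E}\bigl[d_{n}(X^{n},\hat{X}^{n})\bigr]. $$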
