Similar Documents
20 similar documents found (search time: 62 ms)
1.
2.
3.
4.
5.
This paper extends Bennett's (1948) integral from scalar to vector quantizers, giving a simple formula that expresses the rth-power distortion of a many-point vector quantizer in terms of the number of points, point density function, inertial profile, and the distribution of the source. The inertial profile specifies the normalized moment of inertia of quantization cells as a function of location. The extension is formulated in terms of a sequence of quantizers whose point density and inertial profile approach known functions as the number of points increases. Precise conditions are given for the convergence of distortion (suitably normalized) to Bennett's integral. Previous extensions did not include the inertial profile and, consequently, provided only bounds or applied only to quantizers with congruent cells, such as lattice and optimal quantizers. The new version of Bennett's integral provides a framework for the analysis of suboptimal structured vector quantizers. It is shown how the loss in performance of such quantizers, relative to optimal unstructured ones, can be decomposed into point density and cell shape losses. As examples, these losses are computed for product quantizers and used to gain further understanding of the performance of scalar quantizers applied to stationary, memoryless sources and of transform codes applied to Gaussian sources with memory. It is shown that the shortcoming of such quantizers is that they must compromise between point density and cell shapes.
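The scalar (r = 2) form of Bennett's integral, D ≈ (1/12N²) ∫ f(x)/λ(x)² dx, can be checked numerically. The sketch below uses a standard Gaussian source and a uniform point density on [-a, a]; these are illustrative choices, not taken from the paper:

```python
import numpy as np

def integral(y, xs):
    # simple trapezoidal rule on a uniform grid
    dx = xs[1] - xs[0]
    return float(np.sum((y[:-1] + y[1:]) * dx / 2))

def bennett_mse(f, lam, xs, N):
    # scalar Bennett's integral for squared error: D ~ (1 / 12 N^2) * ∫ f / λ^2
    return integral(f(xs) / lam(xs) ** 2, xs) / (12 * N ** 2)

def uniform_quantizer_mse(f, xs, N, lo, hi):
    # granular distortion of an N-level uniform quantizer on [lo, hi]
    width = (hi - lo) / N
    cells = np.clip(np.floor((xs - lo) / width), 0, N - 1)
    xhat = lo + (cells + 0.5) * width
    return integral(f(xs) * (xs - xhat) ** 2, xs)

a, N = 4.0, 256                       # support [-a, a], N quantizer levels
xs = np.linspace(-a, a, 200001)
f = lambda x: np.exp(-x ** 2 / 2) / np.sqrt(2 * np.pi)   # N(0,1) density
lam = lambda x: np.full_like(x, 1 / (2 * a))             # uniform point density

predicted = bennett_mse(f, lam, xs, N)
actual = uniform_quantizer_mse(f, xs, N, -a, a)
```

For a many-point quantizer like this one (N = 256), the prediction and the directly computed granular distortion agree to well under a percent.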

6.
This paper presents the development and evaluation of fuzzy vector quantization algorithms. These algorithms are designed to achieve the quality of vector quantizers provided by sophisticated but computationally demanding approaches, while capturing the advantages of the k-means algorithm frequently used in practice, such as speed, simplicity, and conceptual appeal. The uncertainty typically associated with clustering tasks is formulated in this approach by allowing the assignment of each training vector to multiple clusters in the early stages of the iterative codebook design process. A training vector assignment strategy is also proposed for the transition from the fuzzy mode, where each training vector can be assigned to multiple clusters, to the crisp mode, where each training vector can be assigned to only one cluster. Such a strategy reduces the dependence of the resulting codebook on the random initial codebook selection. The resulting algorithms are used in image compression based on vector quantization. This application provides the basis for evaluating the computational efficiency of the proposed algorithms and comparing the quality of the resulting codebook design with that provided by competing techniques.
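The fuzzy-to-crisp transition can be sketched generically: early iterations use soft (fuzzy c-means style) memberships, later iterations switch to hard k-means updates. The details below (fuzziness exponent m, iteration counts, farthest-point initialization) are illustrative assumptions, not the paper's exact strategy:

```python
import numpy as np

def fuzzy_then_crisp_codebook(X, K, m=2.0, fuzzy_iters=10, crisp_iters=5):
    # Deterministic farthest-point initialization reduces dependence on a
    # random initial codebook, one of the stated goals of the approach.
    C = [X[0].astype(float)]
    for _ in range(K - 1):
        d = ((X[:, None, :] - np.array(C)[None, :, :]) ** 2).sum(-1).min(1)
        C.append(X[d.argmax()].astype(float))
    C = np.array(C)
    for _ in range(fuzzy_iters):                      # fuzzy (soft) stage
        d2 = ((X[:, None, :] - C[None, :, :]) ** 2).sum(-1) + 1e-12
        U = d2 ** (-1.0 / (m - 1.0))
        U /= U.sum(1, keepdims=True)                  # membership degrees
        W = U ** m
        C = (W.T @ X) / W.sum(0)[:, None]             # weighted centroids
    for _ in range(crisp_iters):                      # crisp (hard) stage
        labels = ((X[:, None, :] - C[None, :, :]) ** 2).sum(-1).argmin(1)
        for k in range(K):
            if np.any(labels == k):
                C[k] = X[labels == k].mean(0)
    return C, labels
```

On well-separated data the soft stage steers all centroids toward distinct clusters before the hard stage locks in the final partition.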

7.
The Hadamard transform - a tool for index assignment
We show that the channel distortion for maximum-entropy encoders, due to noise on a binary-symmetric channel, is minimized if the vector quantizer can be expressed as a linear transform of a hypercube. The index assignment problem is regarded as a problem of linearizing the vector quantizer. We define classes of index assignments with related properties, within which the best index assignment is found by sorting, not searching. Two powerful algorithms for assigning indices to the codevectors of nonredundant coding systems are presented. One algorithm finds the optimal solution in terms of linearity, whereas the other finds a very good, but suboptimal, solution in a very short time.
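The linearity criterion can be made concrete: if an index assignment makes the codebook an affine function of its index bits, the Hadamard spectrum of the codebook (viewed as a function of the index) is confined to the DC row and the power-of-two rows. A toy sketch with scalar 8-level codebooks; the specific assignments are illustrative, not from the paper:

```python
import numpy as np

def hadamard_matrix(n):
    # Sylvester construction: H[j, i] = (-1)^popcount(i & j)
    H = np.array([[1.0]])
    while H.shape[0] < n:
        H = np.block([[H, H], [H, -H]])
    return H

def linearity_spectrum(codebook):
    # Hadamard transform of the codebook as a function of its binary index.
    # For a "linear" assignment, all energy lies at index 0 (DC) and at the
    # power-of-two indices, one per index bit.
    codebook = np.asarray(codebook, float)
    n = len(codebook)
    return hadamard_matrix(n) @ codebook / n

nbc = linearity_spectrum(np.arange(8))                 # natural binary code
perm = linearity_spectrum([0, 3, 1, 2, 7, 5, 4, 6])    # scrambled assignment
```

The natural binary code on levels 0..7 is exactly affine in the bits, so every non-power-of-two coefficient vanishes; the scrambled assignment leaks energy into the higher-order rows, which is what the paper's sorting-based search penalizes.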

8.
We present a practical video coding algorithm for use at very low bit rates. For efficient coding at very low bit rates, it is important to intelligently allocate bits within a frame, and so a powerful variable-rate algorithm is required. We use vector quantization to encode the motion-compensated residue signal in an H.263-like framework. For a given complexity, it is well understood that structured vector quantizers perform better than unstructured and unconstrained vector quantizers. A combination of structured vector quantizers is used in our work to encode the video sequences. The proposed codec is a multistage residual vector quantizer, with transform vector quantizers in the initial stages. The transform-VQ captures the low-frequency information, using only a small portion of the bit budget, while the later stage residual VQ captures the high-frequency information, using the remaining bits. We used a strategy to adaptively refine only areas of high activity, using recursive decomposition and selective refinement in the later stages. An entropy constraint was used to modify the codebooks to allow better entropy coding of the indexes. We evaluate the performance of the proposed codec and compare it with the performance of the H.263-based codec. Experimental results show that the proposed codec delivered significantly better perceptual quality along with better quantitative performance.
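The multistage residual principle used by this codec can be sketched generically: each stage quantizes the residual left by the previous stage, and the reconstruction is the sum of the stage codewords. The codebooks below are toy values, not the paper's transform-VQ stages:

```python
import numpy as np

def residual_vq_encode(x, stage_codebooks):
    # Multistage (residual) VQ sketch: a coarse first stage followed by
    # finer stages that code what the earlier stages missed.
    residual, indices, recon = x.astype(float), [], 0.0
    for cb in stage_codebooks:
        i = int(((residual[None, :] - cb) ** 2).sum(-1).argmin())
        indices.append(i)
        recon = recon + cb[i]          # accumulate stage codewords
        residual = residual - cb[i]    # pass the leftover to the next stage
    return indices, recon
```

With a coarse stage spanning the signal range and a fine stage of small offsets, two cheap codebooks can reach reconstructions a single codebook of the same total size could not afford.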

9.
In this paper, we propose a novel feedforward adaptive quantization scheme called the sample-adaptive product quantizer (SAPQ). This is a structurally constrained vector quantizer that uses unions of product codebooks. SAPQ is based on a concept of adaptive quantization to the varying samples of the source and is very different from traditional adaptation techniques for nonstationary sources. SAPQ quantizes each source sample using a sequence of quantizers. Even when using scalar quantization in SAPQ, we can achieve performance comparable to vector quantization (with the complexity still close to that of scalar quantization). We also show that important lattice-based vector quantizers can be constructed using scalar quantization in SAPQ. We mathematically analyze SAPQ and propose an algorithm to implement it. We numerically study SAPQ for independent and identically distributed Gaussian and Laplacian sources. Through our numerical study, we find that SAPQ using scalar quantizers achieves typical gains of 1-3 dB in distortion over the Lloyd-Max quantizer. We also show that SAPQ can be used in conjunction with vector quantizers to further improve the gains.
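The core mechanism can be sketched as follows: each block of samples is tried against several scalar codebooks, and the codebook with least total distortion is selected per block, so the side information is one codebook index per block. This is a simplified reading of the scheme; the codebook values are toy assumptions:

```python
import numpy as np

def sapq_encode(block, codebooks):
    # Try each scalar codebook on the whole block; keep the one with least
    # total squared error. The description is (codebook index, sample indices).
    best = None
    for c_idx, cb in enumerate(codebooks):
        idx = np.abs(block[:, None] - cb[None, :]).argmin(1)
        err = float(((block - cb[idx]) ** 2).sum())
        if best is None or err < best[0]:
            best = (err, c_idx, idx)
    return best[1], best[2]

def sapq_decode(c_idx, idx, codebooks):
    return codebooks[c_idx][idx]
```

Blocks of small-amplitude samples pick the fine codebook and large-amplitude blocks pick the coarse wide one, which is how scalar-complexity encoding picks up some of the adaptivity of vector quantization.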

10.
A vector quantizer maps a k-dimensional vector into one of a finite set of output vectors or "points". Although certain lattices have been shown to have desirable properties for vector quantization applications, there are as yet no algorithms available in the quantization literature for building quantizers based on these lattices. An algorithm for designing vector quantizers based on the root lattices A_{n}, D_{n}, and E_{n} and their duals is presented. Also, a coding scheme that has general applicability to all vector quantizers is presented. A four-dimensional uniform vector quantizer is used to encode Laplacian and gamma-distributed sources at entropy rates of one and two bits/sample and is demonstrated to achieve performance that compares favorably with the rate distortion bound and other scalar and vector quantizers. Finally, an application using uniform four- and eight-dimensional vector quantizers for encoding the discrete cosine transform coefficients of an image at 0.5 bit/pel is presented, which visibly illustrates the performance advantage of vector quantization over scalar quantization.
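For the D_n family, the nearest-lattice-point step such a quantizer needs has a well-known closed form (due to Conway and Sloane): round every coordinate, and if the rounded coordinates sum to an odd number, re-round the worst coordinate the other way. A sketch:

```python
import numpy as np

def closest_point_Dn(x):
    # Nearest point in D_n = {v in Z^n : sum(v) even}.
    x = np.asarray(x, float)
    f = np.rint(x)                       # round each coordinate
    if int(f.sum()) % 2 != 0:            # parity violated: fix one coordinate
        k = int(np.abs(x - f).argmax())  # the coordinate rounded worst
        f[k] += 1.0 if x[k] > f[k] else -1.0
    return f
```

Note that `np.rint` breaks exact .5 ties by round-half-to-even, a simplification of the tie-breaking in the published algorithm; for generic inputs the result is the true nearest D_n point.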

11.
Multiple description image coding based on pyramid lattice vector quantization
Multiple description coding (MDC) is a recent approach to the packet-loss problem in image communication over error-prone channels: the image is decomposed into several descriptions that are individually decodable yet mutually correlated, and the descriptions are transmitted over different channels, improving decoded image quality when data are lost. This paper proposes a multiple description pyramid lattice vector quantization coding algorithm for images (MDPLVQ), which exploits the independence between wavelet trees and quantizes the wavelet coefficients with pyramid lattice vector quantizers using different scaling factors. The algorithm is simple to design, and its redundancy is easy to control. Experimental results demonstrate its effectiveness: its compression performance is superior to multiple description scalar quantization (MDSQ), multiple description pairwise correlating transforms (MDPCT), and multiple description zerotree coding (MDEZW).

12.
An Algorithm for Vector Quantizer Design
An efficient and intuitive algorithm is presented for the design of vector quantizers based either on a known probabilistic model or on a long training sequence of data. The basic properties of the algorithm are discussed and demonstrated by examples. Quite general distortion measures and long blocklengths are allowed, as exemplified by the design of parameter vector quantizers of ten-dimensional vectors arising in Linear Predictive Coded (LPC) speech compression with a complicated distortion measure arising in LPC analysis that does not depend only on the error vector.
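For the squared-error special case, the generalized Lloyd iteration this paper describes alternates a nearest-neighbor partition of the training sequence with a centroid update of each cell. A minimal sketch (the evenly spaced initialization is an illustrative choice, not the paper's splitting procedure):

```python
import numpy as np

def lbg(training, K, iters=20):
    # Generalized Lloyd / LBG sketch with squared-error distortion.
    # Initialize from evenly spaced training vectors, then alternate:
    #   1) assign each vector to its nearest codeword (partition step)
    #   2) replace each codeword by its cell's centroid (centroid step)
    C = training[np.linspace(0, len(training) - 1, K).astype(int)].copy()
    for _ in range(iters):
        labels = ((training[:, None, :] - C[None, :, :]) ** 2).sum(-1).argmin(1)
        for k in range(K):
            if np.any(labels == k):
                C[k] = training[labels == k].mean(0)
    return C
```

Each step can only decrease average distortion, so the iteration converges to a locally optimal codebook; general distortion measures replace the mean with the corresponding generalized centroid.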

13.
A method is presented for the joint source-channel coding optimization of a scheme based on the two-dimensional block cosine transform when the output of the encoder is to be transmitted via a memoryless binary symmetric channel. The authors' approach involves an iterative algorithm for the design of the quantizers (in the presence of channel errors) used for encoding the transform coefficients. This algorithm produces a set of locally optimum (in the mean-squared error sense) quantizers and the corresponding binary codeword assignment for the assumed transform coefficient statistics. To determine the optimum bit assignment among the transform coefficients, the authors have used an algorithm based on the steepest descent method, which, under certain convexity conditions on the performance of the channel-optimized quantizers, yields the optimal bit allocation. Simulation results for the performance of this locally optimum system over noisy channels have been obtained, and appropriate comparisons with a reference system designed for no channel errors have been made. It is shown that substantial performance improvements can be obtained by using this scheme. Furthermore, theoretically predicted results and rate distortion-theoretic bounds for an assumed two-dimensional image model are provided.

14.
The performance of optimum vector quantizers subject to a conditional entropy constraint is studied. This new class of vector quantizers was originally suggested by Chou and Lookabaugh (1990). A locally optimal design of this kind of vector quantizer can be accomplished through a generalization of the well-known entropy-constrained vector quantizer (ECVQ) algorithm. This generalization of the ECVQ algorithm to a conditional entropy constraint is called CECVQ, i.e., conditional ECVQ. Furthermore, we have extended the high-rate quantization theory to this new class of quantizers to obtain a new high-rate performance bound. The new performance bound is compared and shown to be consistent with bounds derived through conditional rate-distortion theory. A new algorithm for designing entropy-constrained vector quantizers was introduced by Garrido, Pearlman, and Finamore (see IEEE Trans. Circuits Syst. Video Technol., vol.5, no.2, p.83-95, 1995), and is named entropy-constrained pairwise nearest neighbor (ECPNN). The algorithm is basically an entropy-constrained version of the pairwise nearest neighbor (PNN) clustering algorithm of Equitz (1989). By a natural extension of the ECPNN algorithm, we develop another algorithm, called CECPNN, that designs conditional entropy-constrained vector quantizers. Through simulation results on synthetic sources, we show that CECPNN and CECVQ have very close distortion-rate performance.
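The entropy-constrained encoding rule underlying the ECVQ family replaces plain nearest-neighbor search with a Lagrangian cost: distortion plus λ times the codeword length, with length taken as -log2 of the codeword probability. A minimal sketch (codebook and probabilities are toy values):

```python
import numpy as np

def ecvq_encode(x, codebook, probs, lam):
    # Entropy-constrained encoding rule: minimize d(x, c_i) + λ * len(i),
    # where len(i) = -log2 P(i) is the ideal variable-length codeword length.
    lengths = -np.log2(probs)
    cost = ((x[None, :] - codebook) ** 2).sum(-1) + lam * lengths
    return int(cost.argmin())
```

At λ = 0 this is ordinary nearest-neighbor encoding; larger λ biases the encoder toward probable (short) codewords, trading distortion for rate. The conditional variants of the paper make `probs` depend on previously encoded indices.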

15.
Finite-state vector quantization for waveform coding
A finite-state vector quantizer is a finite-state machine used for data compression: each successive source vector is encoded into a codeword, using a minimum distortion rule, from a codebook that depends on the encoder state. The current state and the selected codeword then determine the next encoder state. A finite-state vector quantizer is capable of making better use of the memory in a source than is an ordinary memoryless vector quantizer of the same dimension or blocklength. Design techniques are introduced for finite-state vector quantizers that combine ad hoc algorithms with an algorithm for the design of memoryless vector quantizers. Finite-state vector quantizers are designed and simulated for Gauss-Markov sources and sampled speech data, and the resulting performance and storage requirements are compared with ordinary memoryless vector quantization.
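Because the next state is a function of the current state and the transmitted index alone, the decoder can track the encoder's state with no side information. A sketch with a toy two-state machine (the codebooks and transition table are illustrative, not a designed FSVQ):

```python
import numpy as np

def fsvq_encode(source, state_codebooks, next_state, init_state=0):
    # Encode each vector with the codebook of the current state; the chosen
    # index drives the state transition.
    state, indices = init_state, []
    for x in source:
        cb = state_codebooks[state]
        i = int(((x[None, :] - cb) ** 2).sum(-1).argmin())
        indices.append(i)
        state = next_state[state][i]
    return indices

def fsvq_decode(indices, state_codebooks, next_state, init_state=0):
    # The decoder replays the same state trajectory from the indices alone.
    state, recon = init_state, []
    for i in indices:
        recon.append(state_codebooks[state][i])
        state = next_state[state][i]
    return np.array(recon)
```

Here state 0 carries a low-amplitude codebook and state 1 a high-amplitude one, so the machine exploits the tendency of a source with memory to stay in the same amplitude regime.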

16.
A two-stage code is a block code in which each block of data is coded in two stages: the first stage codes the identity of a block code among a collection of codes, and the second stage codes the data using the identified code. The collection of codes may be noiseless codes, fixed-rate quantizers, or variable-rate quantizers. We take a vector quantization approach to two-stage coding, in which the first stage code can be regarded as a vector quantizer that "quantizes" the input data of length n to one of a fixed collection of block codes. We apply the generalized Lloyd algorithm to the first-stage quantizer, using induced measures of rate and distortion, to design locally optimal two-stage codes. On a source of medical images, two-stage variable-rate vector quantizers designed in this way outperform standard (one-stage) fixed-rate vector quantizers by over 9 dB. The tail of the operational distortion-rate function of the first-stage quantizer determines the optimal rate of convergence of the redundancy of a universal sequence of two-stage codes. We show that there exist two-stage universal noiseless codes, fixed-rate quantizers, and variable-rate quantizers whose per-letter rate and distortion redundancies converge to zero as (k/2)n^{-1} log n, when the universe of sources has finite dimension k. This extends the achievability part of Rissanen's theorem from universal noiseless codes to universal quantizers. Further, we show that the redundancies converge as O(n^{-1}) when the universe of sources is countable, and as O(n^{-1+ε}) when the universe of sources is infinite-dimensional, under appropriate conditions.

17.
18.
This study focuses on two issues: parametric modeling of the channel and index assignment of codevectors, to design a vector quantizer that achieves high robustness against channel errors. We first formulate the design of a robust zero-redundancy vector quantizer as a combinatorial optimization problem leading to a genetic search for a minimum-distortion index assignment. The performance is further enhanced by the use of the Fritchman (1967) channel model that more closely characterizes the statistical dependencies between error sequences. This study also presents an index assignment algorithm based on the Fritchman model with parameter values estimated using a real-coded genetic algorithm. Simulation results indicate that the global explorative properties of genetic algorithms make them very effective in estimating Fritchman model parameters, and use of this model can match index assignment to expected channel conditions.

19.
A new design procedure for shape-gain vector quantizers (SGVQs) which leads to substantially improved robustness against channel errors without increasing the computational complexity is proposed. This aim is achieved by including the channel transition probabilities in the design procedure, leading to an improved assignment of binary codewords to the coding regions as well as a change of partition and centroids. In contrast to conventional design, negative gain values are also permitted. The new design procedure is applied to adaptive transform image coding. Simulation results are compared with those obtained by the conventional design procedure. The new algorithm is particularly useful for heavily distorted or fading channels.
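The shape-gain decomposition itself can be sketched briefly: the gain (norm) and the unit-norm shape are quantized separately, and the reconstruction is their product. The codebooks here are toy values; the paper's channel-matched codeword assignment and negative gains are omitted from this sketch:

```python
import numpy as np

def sgvq_encode(x, gain_codebook, shape_codebook):
    # Shape-gain VQ: quantize ||x|| and x/||x|| with independent codebooks.
    g = np.linalg.norm(x)
    shape = x / g if g > 0 else np.zeros_like(x)
    gi = int(np.abs(gain_codebook - g).argmin())
    # for unit-norm shape codevectors, minimizing Euclidean distance is
    # equivalent to maximizing the inner product
    si = int((shape_codebook @ shape).argmax())
    return gi, si, gain_codebook[gi] * shape_codebook[si]
```

Splitting the search this way reduces encoding complexity from one search over all gain-shape pairs to two small independent searches.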

20.
Entropy-constrained tree-structured vector quantizer design
Current methods for the design of pruned or unbalanced tree-structured vector quantizers such as the generalized Breiman-Friedman-Olshen-Stone (GBFOS) algorithm proposed in 1980 are effective, but suffer from several shortcomings. We identify and clarify issues of suboptimality including greedy growing, the suboptimal encoding rule, and the need for time sharing between quantizers to achieve arbitrary rates. We then present the leaf-optimal tree design (LOTD) method which, with a modest increase in design complexity, alters and reoptimizes tree structures obtained from conventional procedures. There are two main advantages over existing methods. First, the optimal entropy-constrained nearest-neighbor rule is used for encoding at the leaves; second, explicit quantizer solutions are obtained at all rates without recourse to time sharing. We show that performance improvement is theoretically guaranteed. Simulation results for image coding demonstrate that close to 1 dB reduction of distortion for a given rate can be achieved by this technique relative to the GBFOS method.


Copyright © 北京勤云科技发展有限公司 (Beijing Qinyun Technology Development Co., Ltd.)  京ICP备09084417号