Similar Documents
20 similar documents found.
1.
Alphabet-constrained rate-distortion theory is extended to coding of sources with memory. Two different cases are considered: when only the size of the codebook is constrained and when the codevector values are also held fixed. For both cases, nth-order constrained-alphabet rate-distortion functions are defined and a convergent algorithm for their evaluation is presented. Specific simulations using AR(1) sources show that performance near the rate-distortion bound is possible using a reproduction alphabet consisting of a small number of codevectors. It is also shown that the additional constraint of holding the codevector values fixed does not degrade performance of the coder in relation to the size-only constrained case. This observation motivates the development of a fixed-codebook vector quantizer, called the alphabet- and entropy-constrained vector quantizer, the performance of which is comparable to the entropy-constrained vector quantizer. A number of examples using an AR(1) and a speech source are presented to corroborate the theory.
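The convergent evaluation algorithm is not reproduced in the abstract; for the memoryless case with the codevector values held fixed, a Blahut-style alternating minimization over the test channel is one natural reading. A minimal sketch under that assumption (the function name and parameterization are hypothetical, not from the paper):

```python
import numpy as np

def blahut_rd_fixed_alphabet(p, d, s, n_iter=200):
    """One point on the rate-distortion curve of a memoryless source
    with a *fixed* reproduction alphabet, via Blahut-style alternating
    minimization.  p: (m,) source probabilities; d: (m, k) distortions
    d(x_i, yhat_j) to the k fixed codevectors; s < 0 trades rate vs D."""
    m, k = d.shape
    q = np.full(k, 1.0 / k)              # output marginal, start uniform
    A = np.exp(s * d)                    # s < 0: large distortion -> small weight
    for _ in range(n_iter):
        c = A @ q                        # normalizers c(x_i) = sum_j q_j A_ij
        q = q * ((p / c) @ A)            # Blahut update of the output marginal
    Q = (A * q) / (A @ q)[:, None]       # induced test channel Q(yhat_j | x_i)
    D = float(p @ (Q * d).sum(axis=1))   # average distortion
    R = float(p @ (Q * np.log2(Q / q)).sum(axis=1))  # mutual information (bits)
    return R, D
```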

2.
3.
Trellis-coded vector quantization
Trellis-coded quantization is generalized to allow a vector reproduction alphabet. Three encoding structures are described, several encoder design rules are presented, and two design algorithms are developed. It is shown that for a stationary ergodic vector source, if the optimized trellis-coded vector quantization reproduction process is jointly stationary and ergodic with the source, then the quantization noise is zero-mean and of a variance equal to the difference between the source variance and the variance of the reproduction sequence. Several examples illustrate the encoder design procedure and performance.
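In symbols, with $X$ the source and $\hat{X}$ the jointly stationary and ergodic reproduction process, the stated noise property reads:

```latex
\mathbb{E}[X - \hat{X}] = 0, \qquad
\operatorname{Var}(X - \hat{X}) \;=\; \sigma_X^{2} - \sigma_{\hat{X}}^{2}.
```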

4.
5.
Gersho, A. Annals of Telecommunications, 1986, 41(9-10): 470-480
Annals of Telecommunications - In adaptive quantization, the parameters of a quantizer are updated during real-time operation based on observed information regarding the statistics of the signal...
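The abstract is truncated, but one classical instance of such backward adaptation is Jayant's one-word-memory step-size rule, in which the step size is rescaled after every sample by a multiplier indexed by the transmitted code, so the decoder can track it without side information. A minimal sketch (the multiplier values and interface are illustrative assumptions, not taken from the paper):

```python
import numpy as np

def jayant_quantizer(x, multipliers=(0.8, 0.8, 1.2, 1.6), delta0=0.1):
    """Backward-adaptive uniform quantizer: after each sample the step
    size is multiplied by a factor chosen by the magnitude of the code
    just transmitted (small codes shrink the step, large codes grow it)."""
    levels = len(multipliers)                # magnitude codes 0..levels-1
    delta = delta0
    codes, recon = [], []
    for sample in x:
        mag = min(int(abs(sample) / delta), levels - 1)
        sign = 1.0 if sample >= 0 else -1.0
        codes.append((sign, mag))
        recon.append(sign * (mag + 0.5) * delta)  # mid-tread reconstruction
        delta *= multipliers[mag]                 # adapt from the code alone
    return codes, np.array(recon)
```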

6.
We present here design techniques for trellis-coded vector quantizers with symmetric codebooks that facilitate low-complexity quantization as well as partitioning into equiprobable sets for trellis coding. The quantization performance of this coder on the independent and identically distributed (i.i.d.) Laplacian source matches that of trellis-based scalar-vector quantization (TB-SVQ) while requiring less computational complexity.

7.
Tree-structured vector quantizers (TSVQ) and their variants have recently been proposed. All use fixed M-ary tree structures, so the training samples at each node must be artificially divided into a fixed number of clusters. This paper proposes a variable-branch tree-structured vector quantizer (VBTSVQ) based on a genetic algorithm, which searches for the number of child nodes of each splitting node that yields optimal coding. Moreover, one disadvantage of TSVQ is that the codeword found by the tree search often differs from that found by a full search; that is, the codeword selected by TSVQ is sometimes not the closest to the input vector. This paper therefore proposes a multiclassification encoding method that selects multiple classified components to represent each cluster, so that the codeword encoded by VBTSVQ is usually identical to the full-search result. VBTSVQ outperforms other TSVQs in the experiments presented here.
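To make the stated drawback concrete: a greedy tree descent commits to one branch per level and so can miss the globally nearest codeword. The contrast with a full search is sketched below (the node layout is a hypothetical assumption):

```python
import numpy as np

def tree_search(node, x):
    """Greedy descent of a tree-structured codebook: at each node pick
    the child whose test vector is nearest to x.  The leaf codeword it
    returns need NOT be the globally nearest one, which is the TSVQ
    drawback the multiclassification encoding above addresses."""
    while node.children:                          # hypothetical node layout
        node = min(node.children,
                   key=lambda c: np.sum((x - c.test_vector) ** 2))
    return node.codeword

def full_search(codebook, x):
    """Exhaustive nearest-neighbor search over all leaf codewords."""
    return min(codebook, key=lambda c: np.sum((x - c) ** 2))
```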

8.
A vector generalization of trellis coded quantization (TCQ), called trellis coded vector quantization (TCVQ), and experimental results showing its performance for an i.i.d. Gaussian source are presented. For a given rate, TCVQ yields a lower distortion than TCQ at the cost of an increase in implementation complexity. In addition, TCVQ allows fractional rates, which TCQ does not.

9.
A vector quantization scheme based on the classified vector quantization (CVQ) concept, called predictive classified vector quantization (PCVQ), is presented. Unlike CVQ where the classification information has to be transmitted, PCVQ predicts it, thus saving valuable bit rate. Two classifiers, one operating in the Hadamard domain and the other in the spatial domain, were designed and tested. The classification information was predicted in the spatial domain. The PCVQ schemes achieved bit rate reductions over the CVQ ranging from 20 to 32% for two commonly used color test images while maintaining the same acceptable image quality. Bit rates of 0.70-0.93 bits per pixel (bpp) were obtained depending on the image and PCVQ scheme used.

10.
A novel fuzzy clustering algorithm for the design of channel-optimized source coding systems is presented in this letter. The algorithm, termed the fuzzy channel-optimized vector quantizer (FCOVQ) design algorithm, optimizes the vector quantizer (VQ) design using a fuzzy clustering process in which the index crossover probabilities imposed by a noisy channel are taken into account. The fuzzy clustering process effectively enhances the robustness of the performance of VQ to channel noise without reducing the quantization accuracy. Numerical results demonstrate that the FCOVQ algorithm outperforms existing VQ algorithms under noisy channel conditions for both Gauss-Markov sources and still image data.
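For reference, in channel-optimized VQ the encoder metric for assigning $\mathbf{x}$ to index $i$ weights each decoder codevector $\mathbf{c}_j$ by the channel's index-crossover probability $p(j \mid i)$; a standard form of the criterion that the fuzzy clustering here softens is

```latex
d_c(\mathbf{x}, i) \;=\; \sum_{j} p(j \mid i)\,\lVert \mathbf{x} - \mathbf{c}_j \rVert^{2}.
```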

11.
Two-dimensional trellis-coded vector quantization and its application to still-image quantization
This paper proposes a new quantization method, two-dimensional trellis-coded vector quantization (2D-TCVQ), which applies the idea of trellis-coded quantization (TCQ) on top of vector quantization (VQ) in a two-dimensional codebook space. The method first expands a small codebook into a large virtual codebook and then, following trellis-coded vector quantization (TCVQ), uses the Viterbi algorithm to search for the optimal quantization path in the enlarged two-dimensional codebook space. Enlarging the codebook reduces the minimum distortion of the first subset and thus improves quantization performance. Because 2D-TCVQ uses a small codebook, it can be applied in low-memory, low-power encoding and decoding environments. Simulation results show that, for the same codebook size, 2D-TCVQ outperforms TCVQ by about 0.5 dB. The method also has the advantages of moderate computational cost, simple decoding, and insensitivity to error propagation.
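A minimal sketch of the Viterbi path search described above; the trellis layout, branch labeling, and sub-codebook assignment are illustrative assumptions rather than the paper's design:

```python
import numpy as np

def tcvq_encode(x_seq, next_state, subsets):
    """Viterbi search for the minimum-distortion path through a trellis,
    as in TCVQ: each branch (state s, label b) carries its own
    sub-codebook, and the best codeword on a branch gives its metric.
    next_state[s][b] : successor state of s under branch label b
    subsets[s][b]    : (n_codewords, dim) sub-codebook on that branch"""
    n_states = len(next_state)
    cost = np.full(n_states, np.inf)
    cost[0] = 0.0                                 # start in state 0
    paths = {0: []}
    for x in x_seq:
        new_cost = np.full(n_states, np.inf)
        new_paths = {}
        for s in range(n_states):
            if not np.isfinite(cost[s]):
                continue
            for b, t in enumerate(next_state[s]):
                cb = subsets[s][b]
                err = np.sum((cb - x) ** 2, axis=1)   # distortion per codeword
                k = int(np.argmin(err))
                c = cost[s] + err[k]
                if c < new_cost[t]:                   # keep the survivor path
                    new_cost[t] = c
                    new_paths[t] = paths[s] + [(b, k)]
        cost, paths = new_cost, new_paths
    best = int(np.argmin(cost))
    return paths[best], cost[best]
```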

12.
We investigate high-rate quantization for various detection and reconstruction loss criteria. A new distortion measure is introduced which accounts for global loss in best attainable binary hypothesis testing performance. The distortion criterion is related to the area under the receiver-operating-characteristic (ROC) curve. Specifically, motivated by Sanov's theorem, we define a performance curve as the trajectory of the pair of optimal asymptotic Type I and Type II error rates of the most powerful Neyman-Pearson test of the hypotheses. The distortion measure is then defined as the difference between the area-under-the-curve (AUC) of the optimal pre-encoded hypothesis test and the AUC of the optimal post-encoded hypothesis test. As compared to many previously introduced distortion measures for decision making, this distortion measure has the advantage of being independent of any detection thresholds or priors on the hypotheses, which are generally difficult to specify in the code design process. A high-resolution Zador-Gersho type of analysis is applied to characterize the point density and the inertial profile associated with the optimal high-rate vector quantizer. The analysis applies to a restricted class of high-rate quantizers that have bounded cells with vanishing volumes. The optimal point density is used to specify a Lloyd-type algorithm which allocates its finest resolution to regions where the gradient of the pre-encoded likelihood ratio has greatest magnitude.
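In compact form, writing $\mathrm{ROC}_{X}$ for the optimal Neyman-Pearson performance curve computed from the raw data and $\mathrm{ROC}_{Q(X)}$ for the curve computed after encoding, the proposed distortion is the encoding-induced loss of area:

```latex
d \;=\; \operatorname{AUC}\!\bigl(\mathrm{ROC}_{X}\bigr)
      \;-\; \operatorname{AUC}\!\bigl(\mathrm{ROC}_{Q(X)}\bigr).
```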

13.
By converting the computation of the full-search vector quantization (FSVQ) algorithm into inner-product operations and exploiting the Baugh-Wooley algorithm, this paper describes a new and efficient two's-complement-based VLSI architecture for FSVQ. Owing to its regularity and modularity, the architecture can be applied efficiently in VLSI implementations of speech, image, and video coding.
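The inner-product conversion rests on expanding the squared Euclidean distance: since $\lVert\mathbf{x}\rVert^{2}$ is common to all codewords, the nearest-neighbor search reduces to inner products against precomputed codeword energies:

```latex
\arg\min_{j}\,\lVert \mathbf{x}-\mathbf{c}_j\rVert^{2}
\;=\; \arg\max_{j}\,\Bigl(\mathbf{x}\cdot\mathbf{c}_j \;-\; \tfrac{1}{2}\lVert \mathbf{c}_j\rVert^{2}\Bigr).
```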

14.
A process by which a reduced-dimensionality feature vector can be extracted from a high-dimensionality signal vector and then vector quantized with lower complexity than direct quantization of the signal vector is discussed. In this procedure, a receiver must estimate, or interpolate, the signal vector from the quantized features. The task of recovering a high-dimensional signal vector from a reduced-dimensionality feature vector can be viewed as a generalized form of interpolation or prediction. A way in which optimal nonlinear interpolation can be achieved with negligible complexity, eliminating the need for ad hoc linear or nonlinear interpolation techniques, is presented. The range of applicability of nonlinear interpolative vector quantization is illustrated with examples in which optimal nonlinear estimation from quantized data is needed for efficient signal compression.
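One concrete reading of the low-complexity optimal interpolation is a decoder lookup table whose entries are the conditional means of the full-dimensional training vectors falling in each feature-codebook cell (the MMSE estimate given the quantized feature). A minimal design sketch under that assumption (names are hypothetical):

```python
import numpy as np

def design_interpolative_decoder(signals, features, feature_codebook):
    """For each feature-codebook index, store the mean of the
    full-dimensional training vectors whose feature fell in that cell;
    decoding is then a single table lookup: signal_hat = table[index].
    Assumes every cell receives at least one training vector."""
    dists = ((features[:, None, :] - feature_codebook[None]) ** 2).sum(-1)
    idx = dists.argmin(axis=1)                 # encode each training feature
    table = np.stack([signals[idx == j].mean(axis=0)
                      for j in range(len(feature_codebook))])
    return table
```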

15.
The real-time implementation of an efficient signal compression technique, vector quantization (VQ), is of great importance to many digital signal coding applications. In this paper, we describe a new family of bit-level systolic VLSI architectures which offer an attractive solution to this problem. These architectures are based on a bit-serial, word-parallel approach, and they achieve high performance and efficiency for VQ applications across a wide range of bandwidths. Compared with their bit-parallel counterparts, these bit-serial circuits provide better alternatives for VQ implementations in terms of performance and cost.

16.
Lattice vector quantization (LVQ) solves the complexity problem of LBG-based vector quantizers, yielding very general codebooks. However, a single stage LVQ, when applied to high resolution quantization of a vector, may result in very large and unwieldy indices, making it unsuitable for applications requiring successive refinement. The goal of this work is to develop a unified framework for progressive uniform quantization of vectors without having to sacrifice the mean-squared-error advantage of lattice quantization. A successive refinement uniform vector quantization methodology is developed, where the codebooks in successive stages are all lattice codebooks, each in the shape of the Voronoi regions of the lattice at the previous stage. Such Voronoi shaped geometric lattice codebooks are named Voronoi lattice VQs (VLVQ). Measures of efficiency of successive refinement are developed based on the entropy of the indices transmitted by the VLVQs. Additionally, a constructive method for asymptotically optimal uniform quantization is developed using tree-structured subset VLVQs in conjunction with entropy coding. The methodology developed here essentially yields the optimal vector counterpart of scalar "bitplane-wise" refinement. Unfortunately it is not as trivial to implement as in the scalar case. Furthermore, the benefits of asymptotic optimality in tree-structured subset VLVQs remain elusive in practical nonasymptotic situations. Nevertheless, because scalar bitplane-wise refinement is extensively used in modern wavelet image coders, we have applied the VLVQ techniques to successively refine vectors of wavelet coefficients in the vector set-partitioning (VSPIHT) framework. The results are compared against SPIHT and the previous successive approximation wavelet vector quantization (SA-W-VQ) results of Sampson, da Silva and Ghanbari (1996).

17.
A new interband vector quantization of a human vision-based image representation is presented. The feature-specific vector quantizer (FVQ) is suited for data compression beyond second-order decorrelation. The scheme is derived from statistical investigations of natural images and the processing principles of biological vision systems. The initial stage of the coding algorithm is a hierarchical, orientation-selective, analytic bandpass decomposition, realized by even- and odd-symmetric filter pairs that are modeled after the simple cells of the visual cortex. The outputs of each even- and odd-symmetric filter pair are interpreted as the real and imaginary parts of an analytic bandpass signal, which is transformed into a local amplitude and a local phase component according to the operation of cortical complex cells. Feature-specific multidimensional vector quantization is realized by combining the amplitude/phase samples of all orientation filters of one resolution layer. The resulting vectors are suited for a classification of the local image features with respect to their intrinsic dimensionality, and enable the exploitation of higher-order statistical dependencies between the subbands. This final step is closely related to the operation of cortical hypercomplex or end-stopped cells. The codebook design is based on statistical as well as psychophysical and neurophysiological considerations, and avoids the common shortcomings of perceptually implausible mathematical error criteria. The resulting perceptual quality of compressed images is superior to that obtained with standard vector quantizers of comparable complexity.
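The amplitude/phase transform described above, for an even/odd filter pair output $(e, o)$ taken as the real and imaginary parts of the analytic bandpass signal at position $\mathbf{r}$, is:

```latex
A(\mathbf{r}) = \sqrt{e(\mathbf{r})^{2} + o(\mathbf{r})^{2}}, \qquad
\varphi(\mathbf{r}) = \operatorname{atan2}\!\bigl(o(\mathbf{r}),\, e(\mathbf{r})\bigr).
```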

18.
The performance of a lattice-based fast vector quantization (VQ) method, which yields rate-distortion performance close to that of an optimal VQ, is analyzed. The method, a special case of fine-coarse vector quantization (FCVQ), uses the cascade of a fine lattice quantizer and a coarse optimal VQ to encode a given source vector. The second stage is implemented as a lookup table, which must be stored at the encoder. The arithmetic complexity of this method is essentially that of lattice VQ. Its distortion can be made arbitrarily close to that of an optimal VQ, provided sufficient storage for the table is available. It is shown that the distortion of lattice-based FCVQ exceeds that of full-search quantization by an amount that decreases as the square of the diameter of the lattice cells, and exact formulas are given for the asymptotic constant of proportionality in terms of the properties of the lattice, the coarse codebook, and the source density. It is also shown that the excess distortion asymptotically equals that of the fine quantizer. Simulation results indicate how small the lattice cells must be for the asymptotic formulas to apply.
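The cascade reads as "round to the fine lattice, then map the lattice point to a coarse index through a stored table". A minimal sketch for a scaled integer lattice $\mathbb{Z}^k$ (the lattice choice and table layout are illustrative assumptions):

```python
import numpy as np

def build_table(lattice_pts, cell, coarse_codebook):
    """Offline: assign each fine-lattice point the index of its nearest
    coarse codeword.  This dict is the lookup table stored at the encoder."""
    table = {}
    for p in lattice_pts:
        y = np.asarray(p) * cell
        d = ((coarse_codebook - y) ** 2).sum(axis=1)
        table[tuple(p)] = int(d.argmin())
    return table

def fcvq_encode(x, cell, table):
    """Fine-coarse VQ encoding: quantize x to the fine lattice (here
    Z^k scaled by `cell`), then one table lookup gives the coarse index,
    so the arithmetic cost is essentially that of lattice VQ."""
    lattice_pt = tuple(np.rint(x / cell).astype(int))
    return table[lattice_pt]
```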

19.
Inverse error-diffusion using classified vector quantization
This correspondence extends and modifies classified vector quantization (CVQ) to solve the problem of inverse halftoning. The proposed process consists of two phases: an encoding phase and a decoding phase. The encoding procedure needs a codebook for the encoder, which transforms a halftoned image into a set of codeword indices. The decoding process requires a different codebook for the decoder, which reconstructs a gray-scale image from that set of codeword indices. Using CVQ, the reconstructed gray-scale image is stored in compressed form, and no further compression may be required. This differs from existing algorithms, which reconstruct the image in uncompressed form. The bit rate for encoding a reconstructed image is about 0.51 bits/pixel.
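A minimal sketch of the two-phase scheme as described, with paired encoder/decoder codebooks; how the two codebooks are trained and paired is assumed here for illustration, not taken from the paper:

```python
import numpy as np

def encode_halftone(blocks, halftone_codebook):
    """Phase 1: map each vectorized halftone block to the index of the
    nearest binary codeword; the index stream is the compressed image."""
    d = ((blocks[:, None, :] - halftone_codebook[None]) ** 2).sum(-1)
    return d.argmin(axis=1)

def decode_grayscale(indices, grayscale_codebook):
    """Phase 2: reconstruct gray-scale blocks from the paired decoder
    codebook, so the stored form is already compressed."""
    return grayscale_codebook[indices]
```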

20.
Using vector quantization for image processing
A review is presented of vector quantization, the mapping of pixel intensity vectors into binary vectors indexing a limited number of possible reproductions, which is a popular image compression algorithm. Compression has traditionally been done with little regard for image processing operations that may precede or follow the compression step. Recent work has used vector quantization both to simplify image processing tasks, such as enhancement, classification, halftoning, and edge detection, and to reduce computational complexity by performing those tasks simultaneously with the compression. The fundamental ideas of vector quantization are explained, and vector quantization algorithms that perform image processing are surveyed.
