Similar Documents
20 similar documents found.
1.
Constrained-storage vector quantization with a universal codebook
Many image compression techniques require the quantization of multiple vector sources with significantly different distributions. With vector quantization (VQ), these sources are optimally quantized using separate codebooks, which may collectively require an enormous memory space. Since storage is limited in most applications, a convenient way to gracefully trade between performance and storage is needed. Earlier work addressed this problem by clustering the multiple sources into a small number of source groups, where each group shares a codebook. We propose a new solution based on a size-limited universal codebook that can be viewed as the union of overlapping source codebooks. This framework allows each source codebook to consist of any desired subset of the universal code vectors and provides greater design flexibility which improves the storage-constrained performance. A key feature of this approach is that no two sources need be encoded at the same rate. An additional advantage of the proposed method is its close relation to universal, adaptive, finite-state and classified quantization. Necessary conditions for optimality of the universal codebook and the extracted source codebooks are derived. An iterative design algorithm is introduced to obtain a solution satisfying these conditions. Possible applications of the proposed technique are enumerated, and its effectiveness is illustrated for coding of images using finite-state vector quantization, multistage vector quantization, and tree-structured vector quantization.
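The core idea above, each source codebook being a subset of one shared universal codebook, can be sketched as follows. This is an illustrative encoder only, not the paper's design algorithm; the codebook values and subset choices are hypothetical.

```python
import numpy as np

def encode(vec, universal, subset):
    """Encode vec with the source codebook formed by the given subset
    of universal code-vector indices; return the universal index."""
    sub = universal[subset]                       # extract source codebook
    d = np.sum((sub - vec) ** 2, axis=1)          # squared distances
    return subset[int(np.argmin(d))]              # map back to universal index

# Hypothetical universal codebook of 6 code vectors in R^2.
universal = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0],
                      [1.0, 1.0], [2.0, 2.0], [3.0, 3.0]])
# Two overlapping source codebooks, each a subset of the universal one.
src_a = [0, 1, 2, 3]          # low-amplitude source
src_b = [3, 4, 5]             # high-amplitude source; shares index 3

print(encode(np.array([0.9, 0.1]), universal, src_a))  # -> 1
print(encode(np.array([2.4, 2.4]), universal, src_b))  # -> 4
```

Because indices refer back to the universal codebook, only one table is stored while each source still gets its own effective codebook and rate.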

2.
Side-match vector quantization (SMVQ) achieves better compression performance than vector quantization (VQ) in image coding due to its exploration of the dependence of adjacent pixels. However, SMVQ has the disadvantage of requiring excessive time during the process of coding. Therefore, this paper proposes a fast image coding algorithm using an indirect-index codebook based on SMVQ (IIC-SMVQ) to reduce the coding time. Two codebooks, named the indirect-index codebook (II-codebook) and the entire-state codebook (ES-codebook), are trained and utilized. The II-codebook is trained by using the Linde-Buzo-Gray (LBG) algorithm from side-match information, while the ES-codebook is generated from the clustered residual blocks on the basis of the II-codebook. According to the relationship between these two codebooks, the codeword in the II-codebook can be regarded as an indicator to construct a fast search path, which guides in quickly determining the state codebook from the ES-codebook to encode the to-be-encoded block. The experimental results confirm that the coding time of the proposed scheme is shorter than that of the previous SMVQ.
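The side-match step that SMVQ builds on can be sketched as below: the state codebook for a block is chosen by comparing each master codeword's border pixels with the adjacent pixels of the already-coded upper and left neighbors. This is a minimal illustration of plain side-match selection, not the paper's IIC fast-search path; the codebook contents are hypothetical.

```python
import numpy as np

def state_codebook(master, upper_blk, left_blk, m=4):
    """Rank master codewords by side-match error: compare each
    codeword's top row and left column with the adjacent pixels of
    the upper and left neighbor blocks; keep the best m."""
    side = np.concatenate([upper_blk[-1, :], left_blk[:, -1]])
    borders = np.concatenate([master[:, 0, :], master[:, :, 0]], axis=1)
    err = np.sum((borders - side) ** 2, axis=1)
    order = np.argsort(err)[:m]
    return master[order], order

# Hypothetical 4-word master codebook of constant 4x4 blocks.
master = np.stack([np.full((4, 4), v) for v in (0.0, 64.0, 128.0, 192.0)])
upper = np.full((4, 4), 130.0)
left = np.full((4, 4), 130.0)
state, idx = state_codebook(master, upper, left, m=2)
print(idx)   # the 128- and 192-valued codewords match the 130-valued sides best
```

Searching only the small state codebook instead of the full master codebook is what makes SMVQ-style coders cheaper per block.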

3.
In this paper, a novel algorithm for low-power image coding and decoding is presented and the various inherent trade-offs are described and investigated in detail. The algorithm reduces the memory requirements of vector quantization, i.e., the size of memory required for the codebook and the number of memory accesses by using small codebooks. This significantly reduces the memory-related power consumption, which is an important part of the total power budget. To compensate for the loss of quality introduced by the small codebook size, simple transformations are applied on the codewords during coding. Thus, small codebooks are extended through computations and the main coding task becomes computation-based rather than memory-based. Each image block is encoded by a codeword index and a set of transformation parameters. The algorithm leads to power savings of a factor of 10 in coding and of a factor of 3 in decoding, at least in comparison to classical full-search vector quantization. In terms of SNR, the image quality is better than or comparable to that corresponding to full-search vector quantization, depending on the size of the codebook that is used. The main disadvantage of the proposed algorithm is the decrease of the compression ratio in comparison to vector quantization. The trade-off between image quality and power consumption is dominant in this algorithm and is mainly determined by the size of the codebook.
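The "codeword index plus transformation parameters" encoding described above can be sketched with a gain/offset transform family, which is one simple way to extend a small codebook through computation. The transform choice here is an assumption for illustration, not necessarily the paper's.

```python
import numpy as np

def encode_block(block, codebook):
    """Encode a block as (codeword index, scale a, offset b) so that
    a * codeword + b approximates the block in least squares."""
    x = block.ravel()
    best = None
    for i, c in enumerate(codebook):
        c = c.ravel()
        # least-squares fit of x ~ a*c + b
        A = np.stack([c, np.ones_like(c)], axis=1)
        (a, b), *_ = np.linalg.lstsq(A, x, rcond=None)
        err = np.sum((a * c + b - x) ** 2)
        if best is None or err < best[0]:
            best = (err, i, a, b)
    return best[1], best[2], best[3]

# A tiny 2-word codebook of 2x2 blocks (hypothetical values).
codebook = [np.array([[0.0, 1.0], [2.0, 3.0]]),
            np.array([[0.0, 0.0], [1.0, 3.0]])]
block = 2 * codebook[0] + 5           # exactly representable as a*c0 + b
i, a, b = encode_block(block, codebook)
print(i, round(a, 3), round(b, 3))    # -> 0 2.0 5.0
```

One small codebook plus two scalar parameters per block trades multiplications for memory accesses, which is the power trade-off the abstract describes.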

4.
In this paper, we propose a binary-tree structure neural network model suitable for structured clustering. During and after training, the centroids of the clusters in this model always form a binary tree in the input pattern space. This model is used to design tree search vector quantization codebooks for image coding. Simulation results show that the acquired codebook not only produces better-quality images but also achieves a higher compression ratio than conventional tree search vector quantization. When source coding is applied after VQ, the new model performs better than the generalized Lloyd algorithm in terms of distortion, bits per pixel, and encoding complexity for low-detail and medium-detail images.
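Tree-search VQ, the structure this model targets, can be sketched as a descent through a binary tree of centroids, giving O(log N) encoding instead of a full search. The tree below is a hypothetical depth-2 example, not the paper's trained model.

```python
import numpy as np

class Node:
    """A tree-search VQ node: internal nodes route, leaves reconstruct."""
    def __init__(self, centroid, left=None, right=None):
        self.centroid = np.asarray(centroid, dtype=float)
        self.left, self.right = left, right

def tree_search(node, x):
    """Descend from the root, at each level choosing the child whose
    centroid is nearer to x; return the leaf centroid."""
    while node.left is not None:
        dl = np.sum((node.left.centroid - x) ** 2)
        dr = np.sum((node.right.centroid - x) ** 2)
        node = node.left if dl <= dr else node.right
    return node.centroid

# Hypothetical depth-2 tree over four scalar codewords {0, 1, 10, 11}.
root = Node(5.5,
            Node(0.5, Node(0.0), Node(1.0)),
            Node(10.5, Node(10.0), Node(11.0)))
print(tree_search(root, np.array(9.6)))   # -> 10.0
```

The cost of the fast search is that the greedy descent is not guaranteed to find the globally nearest codeword, which is why tree construction quality matters.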

5.
As linearly constrained vector quantization (LCVQ) is efficient for block-based compression of images that require low complexity decompression, it is a “de facto” standard for three-dimensional (3-D) graphics cards that use texture compression. Motivated by the lack of an efficient algorithm for designing LCVQ codebooks, the generalized Lloyd (1982) algorithm (GLA) for vector quantizer (VQ) codebook improvement and codebook design is extended to a new linearly constrained generalized Lloyd algorithm (LCGLA). This LCGLA improves VQ codebooks that are formed as linear combinations of a reduced set of base codewords. As such, it may find application wherever linearly constrained nearest neighbor (NN) techniques are used, that is, in a wide variety of signal compression and pattern recognition applications that require or assume distributions that are locally linearly constrained. In addition, several examples of linearly constrained codebooks that possess desirable properties such as good sphere packing, low-complexity implementation, fine resolution, and guaranteed convergence are presented. Fast NN search algorithms are discussed. A suggested initialization procedure halves iterations to convergence when, to reduce encoding complexity, the encoder considers the improvement of only a single codebook for each block. Experimental results for image compression show that LCGLA iterations significantly improve the PSNR of standard high-quality lossy 6:1 LCVQ compressed images.

6.
Although the continuous hidden Markov model (CHMM) technique seems to be the most flexible and complete tool for speech modelling, it is not always used for the implementation of speech recognition systems because of several problems related to training and computational complexity. Thus, other simpler types of HMMs, such as discrete (DHMM) or semicontinuous (SCHMM) models, are commonly utilised with very acceptable results. Also, the superiority of continuous models over these types of HMMs is not clear. The authors' group has previously introduced the multiple vector quantisation (MVQ) technique, the main feature of which is the use of one separate VQ codebook for each recognition unit. The MVQ technique applied to DHMM models generates a new HMM modelling (basic MVQ models) that allows incorporation into the recognition dynamics of the input-sequence information wasted by the discrete models in the VQ process. The authors propose a new variant of HMM models that arises from the idea of applying MVQ to SCHMM models. These are SCMVQ-HMM (semicontinuous multiple vector quantisation HMM) models, which use one VQ codebook per recognition unit and several quantisation candidates for each input vector. It is shown that SCMVQ modelling is formally the closest one to CHMM, although requiring even less computation than SCHMMs. After studying several implementation issues of the MVQ technique, such as which type of probability density function should be used, the authors show the superiority of SCMVQ models over other types of HMM models such as DHMMs, SCHMMs or the basic MVQs.

7.
A method for designing codebooks for vector quantization (VQ) based on minimum error visibility in a reconstructed picture is described. The method uses objective measurements to define visibility for the picture being coded. The proposed VQ is switched type, i.e., the codebook is divided into subcodebooks, each of which is related to a given subrange of error visibility. Codebook optimization is carried out on the basis of a particular definition of visible distortion of the reconstructed image. Subjective judgment of the test results, carried out at 0.5 b/pel bit rate, indicates that the proposed VQ enables low-distortion images to be reconstructed even when subcodebooks with a small number of codewords are used, thus reducing the codebook search time to about 10% of that required by a fixed VQ (both inside and outside the training set).

8.
This paper presents novel structured vector quantization (VQ) techniques characterized by the use of linear transformations for the input VQ. The first technique is called the affine transformations VQ, in which the quantized vector is formed by adding the transformed outputs of a multistage codebook rather than just adding the outputs of the stages as in regular multistage vector quantization (MSVQ). The name of the VQ technique comes from the fact that in the two-stage case, the quantized vector is obtained as the result of an affine transformation. This technique can be viewed as a generalized form of MSVQ. If the transformations are constrained to be the identity transformation, this technique becomes identical to the regular MSVQ. The transformations in the introduced technique are selected from a family of linear transformations, represented by a codebook of matrices. In order to reduce the memory required for storing the matrices, the paper discusses a second technique called scaled rotation matrices VQ, where matrices are constrained to be scaled rotation matrices. Since rotation matrices can be stored by just storing the corresponding rotation angles, this approach enables efficient storage of linear transforms. The design algorithms are based on joint optimization of the linear transformation and the stage codebooks. Experimental results based on speech spectrum quantization show that the proposed VQ techniques outperform the MSVQ of the same bit rate.

9.
黄玲 《信息技术》2004,28(6):1-3,43
This paper analyzes the statistical properties of remote sensing images and proposes a compression method for them that combines vector quantization with the wavelet transform. After the wavelet transform, the high-frequency subimages are partitioned into blocks of a fixed size; blocks with strong local correlation and small gray-level variation are compressed at a high ratio, while blocks with weak local correlation and large gray-level variation are compressed with high fidelity. Experiments show that the method achieves good compression performance and is well suited to remote sensing images.

10.
Proposes an efficient vector quantization (VQ) technique called sequential scalar quantization (SSQ). The scalar components of the vector are individually quantized in a sequence, with the quantization of each component utilizing conditional information from the quantization of previous components. Unlike conventional independent scalar quantization (ISQ), SSQ has the ability to exploit intercomponent correlation. At the same time, since quantization is performed on scalar rather than vector variables, SSQ offers a significant computational advantage over conventional VQ techniques and is easily amenable to a hardware implementation. In order to analyze the performance of SSQ, the authors appeal to asymptotic quantization theory, where the codebook size is assumed to be large. Closed-form expressions are derived for the quantizer mean squared error (MSE). These expressions are used to compare the asymptotic performance of SSQ with other VQ techniques. The authors also demonstrate the use of asymptotic theory in designing SSQ for a practical application (color image quantization), where the codebook size is typically small. Theoretical and experimental results show that SSQ far outperforms ISQ with respect to MSE while offering a considerable reduction in computation over conventional VQ at the expense of a moderate increase in MSE.
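The sequential idea above can be sketched for a 2-D vector: the first component is scalar quantized, and the second component's codebook is conditioned on that choice. The level values below are hypothetical, chosen only to show how conditioning exploits intercomponent correlation.

```python
import numpy as np

def ssq_encode(vec, q1_levels, q2_cond):
    """Sequentially quantize a 2-D vector: the first component picks a
    level from q1_levels, and the second is quantized with the
    conditional codebook attached to that choice."""
    i = int(np.argmin(np.abs(q1_levels - vec[0])))
    j = int(np.argmin(np.abs(q2_cond[i] - vec[1])))
    return i, j, np.array([q1_levels[i], q2_cond[i][j]])

# Hypothetical levels for a source whose components are correlated:
# once x1 is known to be "large", x2's codebook concentrates near it.
q1_levels = np.array([0.0, 10.0])
q2_cond = {0: np.array([0.0, 2.0]), 1: np.array([8.0, 12.0])}
i, j, rec = ssq_encode(np.array([9.0, 11.0]), q1_levels, q2_cond)
print(i, j, rec)   # -> 1 1 [10. 12.]
```

Each step is a scalar search, so the whole encoder runs in time linear in the vector dimension, which is the computational advantage over full vector search.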

11.
姜来  许文焕  纪震  张基宏 《电子学报》2006,34(9):1738-1741
This paper presents a new method for optimizing image vector quantization codebooks. Conventional vector quantization considers only the attractive influence between codewords and training vectors, which constrains the search space for the optimal solution. We propose a new learning mechanism, fuzzy reinforcement learning, which adds a repulsive factor to the traditional attractive factor and thereby greatly relaxes the constraint that the attractive factor alone imposes on the search space. Rather than avoiding local optima by injecting random perturbations, the mechanism uses the combined effect of the attractive and repulsive factors to determine the best direction of movement for each codeword fairly accurately, moving the codebook as a whole toward the global optimum. Experimental results show that the vector quantization algorithm based on fuzzy reinforcement learning consistently and markedly outperforms the fuzzy K-means algorithm, largely overcoming codebook design's tendency to fall into local minima and its sensitivity to the initial codebook.

12.
田斌  易克初  孙民贵 《电子学报》2000,28(10):12-16
This paper proposes a new vector compression coding method, projection-on-a-line coding. An input vector is approximated by its projection onto a line in the vector space; the code consists of the indices of the two reference points that determine the line and a scale factor giving the position of the projection relative to those two reference points. Since a vector quantization codebook of size N determines N(N-1)/2 lines, the method achieves high coding precision with a small codebook. Theoretical analysis and experimental results show that projection-on-a-line coding with a codebook of size N matches the precision of vector quantization with a codebook of size N^2 and clearly outperforms two-stage vector quantization built from two codebooks of size N, while its codebook generation and encoding are far less computationally complex than the latter's. It promises to be a powerful tool for high-precision compression coding of vector signals.
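The projection step described above can be sketched directly: given two codewords, the scale factor t is the orthogonal-projection ratio along the line through them. A full encoder would also search the N(N-1)/2 candidate pairs for the best (i, j); the fixed pair and codebook below are illustrative only.

```python
import numpy as np

def encode_on_line(x, codebook, i, j):
    """Project x onto the line through codewords c_i and c_j and
    return (i, j, t), where t locates the projection between them."""
    ci, cj = codebook[i], codebook[j]
    d = cj - ci
    t = float(np.dot(x - ci, d) / np.dot(d, d))
    return i, j, t

def decode_on_line(codebook, i, j, t):
    ci, cj = codebook[i], codebook[j]
    return ci + t * (cj - ci)

codebook = np.array([[0.0, 0.0], [2.0, 0.0], [0.0, 2.0]])
i, j, t = encode_on_line(np.array([1.0, 0.5]), codebook, 0, 1)
print(t, decode_on_line(codebook, i, j, t))   # -> 0.5 [1. 0.]
```

The reconstruction error is exactly the distance from x to the chosen line, which is why many candidate lines from a small codebook can rival a much larger conventional codebook.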

13.
Vector quantization (VQ) is an effective image coding technique at low bit rate. The side-match finite-state vector quantizer (SMVQ) exploits the correlations between neighboring blocks (vectors) to avoid large gray level transition across block boundaries. A new adaptive edge-based side-match finite-state classified vector quantizer (classified FSVQ) with a quadtree map has been proposed. In classified FSVQ, blocks are arranged into two main classes, edge blocks and nonedge blocks, to avoid selecting a wrong state codebook for an input block. In order to improve the image quality, edge vectors are reclassified into 16 classes. Each class uses a master codebook that is different from the codebooks of other classes. In our experiments, results are given and comparisons are made between the new scheme and ordinary SMVQ and VQ coding techniques. As is shown, the improvement over ordinary SMVQ is up to 1.16 dB at nearly the same bit rate; moreover, the improvement over ordinary VQ can be up to 2.08 dB at the same bit rate for the image Lena. Further, block boundaries and edge degradation are less visible because of the edge-vector classification. Hence, the perceptual image quality of classified FSVQ is better than that of ordinary SMVQ.

14.
Classified vector quantization of images using neural networks
As an effective image data compression technique, vector quantization has attracted growing attention. The classical LBG algorithm for designing vector quantizers is computationally complex, which limits the practicality of vector quantization. This paper discusses an edge-classified vector quantization technique implemented with neural networks. Exploiting the human visual system's sensitivity to image edges, pattern recognition techniques are applied before coding to classify image content by edge features, and each class is then vector quantized separately. Apart from feature extraction, which uses the discrete cosine transform, both the classification and the vector quantization are performed by neural networks.

15.
This paper discusses some algorithms to be used for the generation of an efficient and robust codebook for vector quantization (VQ). Some of the algorithms reduce the required codebook size by 4 or even 8 b to achieve the same level of performance as some of the popular techniques. This helps in greatly reducing the complexity of codebook generation and encoding. We also present a new adaptive tree search algorithm which improves the performance of any product VQ structure. Our results show an improvement of nearly 3 dB over the fixed rate search algorithm at a bit rate of 0.75 b/pixel.

16.
张颖  余英林  布礼文 《通信学报》1998,19(11):76-81
This paper proposes an improved vector quantization coding algorithm based on affine transformations and presents two practical structures for it. Compared with conventional vector quantization, the method reduces coding error without retraining a new codebook or increasing codebook storage, yielding a marked increase in the PSNR of reconstructed images and a considerable improvement in subjective image quality.

17.
In the present paper we study the use of vector quantization in the BTC-VQ image compression system. We propose an inverted order of proceeding in the BTC-VQ algorithm, so that the interaction of coding the bit-plane and the quantization data will be taken into consideration. The quality of the image depends radically on the codebook used in VQ. The use of frequencies in the selection of the initial codebook turns out to be superior to random selection.

18.
周文文  董恩清 《通信技术》2009,42(3):233-235
Image vector quantization (VQ) is a key component of image compression algorithms, and the decisive factor in VQ is the construction of a high-performance codebook. To improve codebook performance, this paper analyzes the Kohonen self-organizing feature map (SOFM) and proposes a discrimination-distance SOFM algorithm, applying vector quantization in the wavelet transform domain of the image. Test results show that the improved algorithm markedly reduces the computation required for codebook design while improving codebook performance.

19.
The author considers vector quantization that uses the L1 distortion measure for its implementation. A gradient-based approach for codebook design that does not require any multiplications or median computation is proposed. Convergence of this method is proved rigorously under very mild conditions. Simulation examples comparing the performance of this technique with the LBG algorithm show that the gradient-based method, in spite of its simplicity, produces codebooks with average distortions that are comparable to the LBG algorithm. The codebook design algorithm is then extended to a distortion measure that has piecewise-linear characteristics. Once again, by appropriate selection of the parameters of the distortion measure, the encoding as well as the codebook design can be implemented with zero multiplications. The author applies the techniques in predictive vector quantization of images and demonstrates the viability of multiplication-free predictive vector quantization of image data.
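A gradient-style L1 design of the kind described can be sketched as follows: the L1 distortion's gradient with respect to a codeword reduces to a sign vector, so each training vector nudges its nearest codeword by step * sign(x - c), and with a power-of-two step the update needs shifts and adds only. This is an illustrative sketch with toy data, not the paper's exact algorithm or its convergence schedule.

```python
import numpy as np

def l1_codebook_design(init_codebook, data, step=0.125, epochs=30):
    """Sign-based (multiplication-free in fixed point) codebook design
    under the L1 distortion: each training vector moves its nearest
    codeword one step in the direction sign(x - c)."""
    cb = np.array(init_codebook, dtype=float)
    for _ in range(epochs):
        for x in data:
            i = int(np.argmin(np.sum(np.abs(cb - x), axis=1)))  # L1 nearest
            cb[i] += step * np.sign(x - cb[i])
    return cb

# Two toy clusters; the codewords drift toward the cluster medians.
data = np.array([[0.0, 0.0], [0.2, -0.2], [-0.2, 0.2],
                 [5.0, 5.0], [5.2, 4.8], [4.8, 5.2]])
cb = l1_codebook_design([[1.0, 1.0], [4.0, 4.0]], data)
print(np.round(cb, 2))
```

Because the update direction is a sign vector, the trained codewords settle near coordinate-wise medians rather than means, which is the right centroid for the L1 measure.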

20.
This paper evaluates the performance of an image compression system based on wavelet-based subband decomposition and vector quantization. The images are decomposed using wavelet filters into a set of subbands with different resolutions corresponding to different frequency bands. The resulting subbands are vector quantized using the Linde-Buzo-Gray (1980) algorithm and various fuzzy algorithms for learning vector quantization (FALVQ). These algorithms perform vector quantization by updating all prototypes of a competitive neural network through an unsupervised learning process. The quality of the multiresolution codebooks designed by these algorithms is measured on the reconstructed images belonging to the training set used for multiresolution codebook design and the reconstructed images from a testing set.

