Similar Literature
20 similar records found (search time: 93 ms)
1.
周文文  董恩清 《通信技术》2009,42(3):233-235
Vector quantization (VQ) is a key stage in image compression algorithms, and the decisive factor in VQ is the construction of a high-performance codebook. To improve codebook performance, this paper analyzes the Kohonen self-organizing feature map (SOFM) and proposes a recognition-distance SOFM algorithm, applying vector quantization in the image's wavelet transform domain. Test results show that the improved algorithm markedly reduces the computational cost of codebook design while also improving codebook performance.
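As a rough illustration of the SOFM approach this entry builds on, the sketch below trains a plain one-dimensional Kohonen map and uses its weight vectors as the VQ codebook. This is a minimal sketch of the standard algorithm in Python/NumPy, not the paper's recognition-distance variant; the function name, decay schedules, and initialization are illustrative assumptions.

```python
import numpy as np

def sofm_codebook(train, codebook_size, epochs=20, lr0=0.5):
    """Train a 1-D Kohonen SOFM; its weight vectors form the VQ codebook.
    train: (N, dim) array of training vectors. Plain SOFM, for illustration."""
    rng = np.random.default_rng(0)
    codebook = rng.standard_normal((codebook_size, train.shape[1])) * 0.01
    radius0 = codebook_size / 2.0
    n_steps, step = epochs * len(train), 0
    for _ in range(epochs):
        for x in train:
            frac = step / n_steps
            lr = lr0 * (1.0 - frac)                     # decaying learning rate
            radius = max(1.0, radius0 * (1.0 - frac))   # shrinking neighborhood
            winner = np.argmin(np.sum((codebook - x) ** 2, axis=1))
            d = np.abs(np.arange(codebook_size) - winner)  # 1-D topology distance
            h = np.exp(-(d ** 2) / (2.0 * radius ** 2))    # neighborhood weights
            codebook += lr * h[:, None] * (x - codebook)   # pull toward sample
            step += 1
    return codebook
```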

2.
Classified Vector Quantization of Wavelet Images Using Spatial Vector Combination (cited 3 times, 0 self-citations)
This paper proposes a new method for classified vector quantization of wavelet images using spatial vector combination. The method fully exploits the frequency correlation and spatial constraints of the high-frequency subband coefficients to regroup them, and classifies vectors by jointly evaluating combined-vector energy and zerotree vectors. The whole image needs only a single quantization codebook, classification side information occupies few bits, and a perceptually weighted mean-squared-error criterion based on the human visual system is used in vector quantization, improving quantization gain. Simulation results show that the method is simple to implement and achieves good compression at low coding rates.

3.
Two-Dimensional Trellis-Coded Vector Quantization and Its Application to Still-Image Quantization (cited 1 time, 0 self-citations)
This paper proposes a new quantization method, two-dimensional trellis-coded vector quantization (2D-TCVQ), which applies the idea of trellis-coded quantization (TCQ) on top of vector quantization (VQ) in a two-dimensional codebook space. The method first expands a small codebook into a large virtual codebook and then, as in trellis-coded vector quantization (TCVQ), uses the Viterbi algorithm to search for the optimal quantization path in the enlarged two-dimensional codebook space. Expanding the codebook reduces the minimum distortion of the first subset, improving quantization performance. Because 2D-TCVQ uses small codebooks, it can be applied in low-storage, low-power encoding and decoding environments. Simulation results show that, at the same codebook size, 2D-TCVQ outperforms TCVQ by about 0.5 dB. The method also offers moderate computational cost, simple decoding, and insensitivity to error propagation.

4.
Image Vector Quantization Based on the Wavelet Transform (cited 9 times, 0 self-citations)
In the study of wavelet-based image vector quantization, this paper takes into account the various properties of the high-frequency wavelet coefficients and draws on existing results. Building on a comprehensive study of vector formation, initial codebook design, choice of distance measure, and the similarity between high-frequency subimages of the same orientation, an effective quantization algorithm is designed.
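Several entries above, including this one, form VQ training vectors from high-frequency wavelet subbands. Below is a minimal sketch of one common construction using PyWavelets: each detail subband of a single-level decomposition is tiled into small blocks that become training vectors. The Haar wavelet and 2x2 block size are illustrative assumptions, not the specific design of the paper.

```python
import numpy as np
import pywt

def wavelet_training_vectors(image, block=2):
    """Tile the detail subbands of a one-level 2-D DWT into
    block*block vectors for VQ codebook training."""
    _, (cH, cV, cD) = pywt.dwt2(image.astype(np.float64), "haar")
    vectors = []
    for band in (cH, cV, cD):
        h, w = band.shape
        h, w = h - h % block, w - w % block            # crop to a block multiple
        tiles = band[:h, :w].reshape(h // block, block, w // block, block)
        vectors.append(tiles.transpose(0, 2, 1, 3).reshape(-1, block * block))
    return np.vstack(vectors)
```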

5.
郑勇  何宁  朱维乐 《信号处理》2001,17(6):498-505
Based on the ideas of zerotree coding, vector classification, and trellis-coded quantization, this paper proposes a new method that groups wavelet-image coefficients into spatial vector combinations, classifies them, and then applies trellis-coded vector quantization. The method fully exploits the frequency correlation and spatial constraints of the high-frequency subband coefficients, classifies vectors by jointly evaluating combined-vector energy and zerotree vectors, requires only a single quantization codebook for the whole image, and spends few bits on classification side information. Vectors of the important classes are quantized with weighted trellis-coded vector quantization: convolutional coding expands the signal space to enlarge the Euclidean distance between quantized signals, and the Viterbi algorithm searches for the optimal quantization sequence, gaining about 0.6 dB over plain vector quantization. The method has moderate encoding complexity and simple decoding, and achieves good compression.
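Entries 3 and 5 both rely on a Viterbi search through a trellis to find the minimum-distortion quantization sequence. The sketch below shows that search for plain scalar TCQ on a hypothetical 4-state trellis whose branches each draw codewords from one of four subsets; the trellis tables and subset labeling are illustrative assumptions, and the papers' vector and weighted variants build on top of this basic search.

```python
import numpy as np

# Hypothetical 4-state trellis: from state s, input bit b moves to
# NEXT_STATE[s][b], and that branch quantizes with subset BRANCH_SUBSET[s][b].
NEXT_STATE = [[0, 2], [0, 2], [1, 3], [1, 3]]
BRANCH_SUBSET = [[0, 2], [2, 0], [1, 3], [3, 1]]

def tcq_viterbi(samples, subsets):
    """Minimum-distortion path through the trellis for scalar TCQ.
    subsets: four 1-D arrays of reproduction levels (D0..D3)."""
    INF = float("inf")
    cost = [0.0, INF, INF, INF]                        # start in state 0
    paths = [[], None, None, None]
    for x in samples:
        new_cost = [INF] * 4
        new_paths = [None] * 4
        for s in range(4):
            if cost[s] == INF:
                continue
            for b in (0, 1):
                t = NEXT_STATE[s][b]
                sub = subsets[BRANCH_SUBSET[s][b]]
                j = int(np.argmin((sub - x) ** 2))      # best level in subset
                c = cost[s] + float((sub[j] - x) ** 2)  # accumulated distortion
                if c < new_cost[t]:
                    new_cost[t] = c
                    new_paths[t] = paths[s] + [float(sub[j])]
        cost, paths = new_cost, new_paths
    best = int(np.argmin(cost))
    return np.array(paths[best]), cost[best]
```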

6.
This paper proposes an efficient DCT-domain coding method based on vector quantization for color video images. To remove the correlation among color components, the image is first converted from RGB to YUV space, and the chroma signals U and V are subsampled and averaged according to human visual system (HVS) characteristics. The luminance signal Y undergoes blockwise DCT, the block vectors in the transform domain are adaptively classified according to HVS characteristics, and code vectors are then constructed per class for a global codebook design. The proposed global codebook scheme automatically updates and replaces codebook contents according to inter-frame correlation and codeword usage frequency, adapting to scene changes. Experimental results show that, while preserving reconstruction quality, the proposed method achieves high compression efficiency and suits applications such as video conferencing and underwater video observation.
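Below is a minimal sketch of the preprocessing pipeline this entry describes: RGB-to-YUV conversion, 2x2 averaging of the chroma planes, and blockwise DCT of the luma. The BT.601-style matrix, 8x8 block size, and evenly divisible image dimensions are illustrative assumptions; the paper's HVS-based block classification and adaptive global codebook are not reproduced here.

```python
import numpy as np
from scipy.fft import dctn

RGB2YUV = np.array([[ 0.299,  0.587,  0.114],
                    [-0.147, -0.289,  0.436],
                    [ 0.615, -0.515, -0.100]])

def preprocess(rgb):
    """RGB -> YUV, average-subsample chroma 2x2, block-DCT the luma.
    Assumes height and width are multiples of 8."""
    yuv = rgb.astype(np.float64) @ RGB2YUV.T
    y, u, v = yuv[..., 0], yuv[..., 1], yuv[..., 2]
    u_sub = u.reshape(u.shape[0] // 2, 2, u.shape[1] // 2, 2).mean(axis=(1, 3))
    v_sub = v.reshape(v.shape[0] // 2, 2, v.shape[1] // 2, 2).mean(axis=(1, 3))
    h, w = y.shape
    dct_blocks = np.array([dctn(y[i:i + 8, j:j + 8], norm="ortho")
                           for i in range(0, h, 8) for j in range(0, w, 8)])
    return dct_blocks, u_sub, v_sub
```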

7.
李殷  李飞 《电视技术》2012,36(17):26-29
Since the classical LBG codebook design algorithm easily falls into local optima, quantum-behaved particle swarm optimization is applied to image VQ codebook design, yielding a quantum-behaved particle swarm codebook design algorithm (QPSO-VQ). In this algorithm, a particle represents a codebook, the peak signal-to-noise ratio (PSNR) serves as the fitness function, and codebooks are updated through the QPSO update equations. Experimental results show that, compared with the classical LBG algorithm and particle swarm VQ codebook design, QPSO-VQ has clear advantages in the PSNR of decoded images and in algorithm stability, producing codebooks with better performance.
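The fitness function in QPSO-VQ scores a candidate codebook by the PSNR of the image it reconstructs. A minimal sketch of that evaluation over a set of image block vectors follows (the function names and the 8-bit peak value are assumptions):

```python
import numpy as np

def psnr(original, reconstructed, peak=255.0):
    """Peak signal-to-noise ratio in dB."""
    mse = np.mean((original.astype(np.float64) - reconstructed) ** 2)
    return float("inf") if mse == 0 else 10.0 * np.log10(peak ** 2 / mse)

def codebook_fitness(blocks, codebook):
    """Quantize each block to its nearest codeword; PSNR is the fitness.
    blocks: (N, dim) image block vectors, codebook: (K, dim)."""
    d = ((blocks[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=2)
    recon = codebook[np.argmin(d, axis=1)]
    return psnr(blocks, recon)
```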

8.
Research on Image Vector Quantization Based on the Self-Organizing Feature Map (cited 4 times, 0 self-citations)
Starting from the basic idea of the self-organizing feature map (SOFM), this paper studies different arrangements of the Kohonen network's output nodes in one-, two-, and eight-dimensional spaces and derives the corresponding vector quantization (VQ) codebook design algorithms. The study shows that SOFM has several advantages: it can design codebooks with a regular structure in which neighboring code vectors are strongly correlated, and the arrangement of the network's output nodes considerably affects quantizer performance, so that a suitable arrangement yields a vector quantizer with good resistance to channel errors. Experiments show that the SOFM-based vector…

9.
An Exponential Fuzzy Learning Vector Quantization Algorithm for Image Coding (cited 6 times, 0 self-citations)
This paper analyzes the principles of fuzzy vector quantization (FVQ) for image coding and proposes an exponential fuzzy learning vector quantization algorithm (EFLVQ). Experimental results show that the algorithm converges quickly, and the peak signal-to-noise ratio of the designed image codebook is also slightly better than that of the FVQ algorithm.

10.
Research on Correlated Vector Quantization for Image Coding (cited 10 times, 2 self-citations)
王卫  蔡德钧 《电子学报》1995,23(4):30-34
When adjacent image blocks are coded by vector quantization (VQ), identical code addresses may occur, especially in smooth image regions. To reduce the correlation between the code addresses of adjacent blocks, this paper proposes a correlated vector quantization scheme in which a correlation codebook and an improved self-organizing feature map (ISOFM) codebook jointly encode the four neighboring blocks within a window. Compared with memoryless VQ, for the typical "Lenna" image the computation required in encoding is halved and the bit rate is reduced by 40%. During training of the Kohonen self-organizing network, edge-class vectors are…

11.
Constrained-storage vector quantization with a universal codebook (cited 1 time, 0 self-citations)
Many image compression techniques require the quantization of multiple vector sources with significantly different distributions. With vector quantization (VQ), these sources are optimally quantized using separate codebooks, which may collectively require an enormous memory space. Since storage is limited in most applications, a convenient way to gracefully trade between performance and storage is needed. Earlier work addressed this problem by clustering the multiple sources into a small number of source groups, where each group shares a codebook. We propose a new solution based on a size-limited universal codebook that can be viewed as the union of overlapping source codebooks. This framework allows each source codebook to consist of any desired subset of the universal code vectors and provides greater design flexibility which improves the storage-constrained performance. A key feature of this approach is that no two sources need be encoded at the same rate. An additional advantage of the proposed method is its close relation to universal, adaptive, finite-state and classified quantization. Necessary conditions for optimality of the universal codebook and the extracted source codebooks are derived. An iterative design algorithm is introduced to obtain a solution satisfying these conditions. Possible applications of the proposed technique are enumerated, and its effectiveness is illustrated for coding of images using finite-state vector quantization, multistage vector quantization, and tree-structured vector quantization.

12.
We introduce a universal quantization scheme based on random coding, and we analyze its performance. This scheme consists of a source-independent random codebook (typically mismatched to the source distribution), followed by optimal entropy coding that is matched to the quantized codeword distribution. A single-letter formula is derived for the rate achieved by this scheme at a given distortion, in the limit of large codebook dimension. The rate reduction due to entropy coding is quantified, and it is shown that it can be arbitrarily large. In the special case of "almost uniform" codebooks (e.g., an independent and identically distributed (i.i.d.) Gaussian codebook with large variance) and difference distortion measures, a novel connection is drawn between the compression achieved by the present scheme and the performance of "universal" entropy-coded dithered lattice quantizers. This connection generalizes the "half-a-bit" bound on the redundancy of dithered lattice quantizers. Moreover, it demonstrates a strong notion of universality where a single "almost uniform" codebook is near optimal for any source and any difference distortion measure. The proofs are based on the fact that the limiting empirical distribution of the first matching codeword in a random codebook can be precisely identified. This is done using elaborate large deviations techniques, that allow the derivation of a new "almost sure" version of the conditional limit theorem.

13.
The authors introduce an image coding method which unifies two image coding techniques: variable-length transform coding (VLTC) and image-adaptive vector quantization (IAVQ). In both VLTC and IAVQ, the image is first decomposed into a set of blocks. VLTC encodes each block in the transform domain very efficiently; however, it ignores the interblock correlation completely. IAVQ addresses the interblock correlation by using a codebook generated from a subset of the blocks to vector-quantize all blocks. Although the resulting codebook represents the input image better than a universal codebook generated from a large number of training images, it has to be transmitted separately as an overhead, therefore degrading the coding performance at high bit rates.

14.
A new on-line universal lossy data compression algorithm is presented. For finite memoryless sources with unknown statistics, its performance asymptotically approaches the fundamental rate distortion limit. The codebook is generated on the fly, and continuously adapted by simple rules. There is no separate codebook training or codebook transmission. Candidate codewords are randomly generated according to an arbitrary and possibly suboptimal distribution. Through a carefully designed "gold washing" or "information-theoretic sieve" mechanism, good codewords and only good codewords are promoted to permanent status with high probability. We also determine the rate at which our algorithm approaches the fundamental limit.

15.
Universal trellis coded quantization (cited 2 times, 0 self-citations)
A new form of trellis coded quantization based on uniform quantization thresholds and "on-the-fly" quantizer training is presented. The universal trellis coded quantization (UTCQ) technique requires neither stored codebooks nor a computationally intense codebook design algorithm. Its performance is comparable with that of fully optimized entropy-constrained trellis coded quantization (ECTCQ) for most encoding rates. The codebook and trellis geometry of UTCQ are symmetric with respect to the trellis superset. This allows sources with a symmetric probability density to be encoded with a single variable-rate code. Rate allocation and quantizer modeling procedures are given for UTCQ which allow access to continuous quantization rates. An image coding application based on adaptive wavelet coefficient subblock classification, arithmetic coding, and UTCQ is presented. The excellent performance of this coder demonstrates the efficacy of UTCQ. We also present a simple scheme to improve the perceptual performance of UTCQ for certain imagery at low bit rates. This scheme has the added advantage of being applied during image decoding, without the need to reencode the original image.

16.
We characterize the best achievable performance of lossy compression algorithms operating on arbitrary random sources, and with respect to general distortion measures. Direct and converse coding theorems are given for variable-rate codes operating at a fixed distortion level, emphasizing: (a) nonasymptotic results, (b) optimal or near-optimal redundancy bounds, and (c) results with probability one. This development is based in part on the observation that there is a precise correspondence between compression algorithms and probability measures on the reproduction alphabet. This is analogous to the Kraft inequality in lossless data compression. In the case of stationary ergodic sources our results reduce to the classical coding theorems. As an application of these general results, we examine the performance of codes based on mixture codebooks for discrete memoryless sources. A mixture codebook (or Bayesian codebook) is a random codebook generated from a mixture over some class of reproduction distributions. We demonstrate the existence of universal mixture codebooks, and show that it is possible to universally encode memoryless sources with redundancy of approximately (d/2) log n bits, where d is the dimension of the simplex of probability distributions on the reproduction alphabet.

17.
The full diversity gain provided by a multi-antenna channel can be achieved by transmit beamforming and receive combining. This requires the knowledge of channel state information (CSI) at the transmitter which is difficult to obtain in practice. Quantized beamforming where fixed codebooks known at both the transmitter and the receiver are used to quantize the CSI has been proposed to solve this problem. Most recent works focus attention on limited feedback codebook design for the uncorrelated Rayleigh fading channel. Such designs are sub-optimal when used in correlated channels. In this paper, we propose systematic codebook design for correlated channels when channel statistical information is known at the transmitter. This design is motivated by studying the performance of pure statistical beamforming in correlated channels and is implemented by maps that can rotate and scale spherical caps on the Grassmannian manifold. Based on this study, we show that even statistical beamforming is near-optimal if the transmitter covariance matrix is ill-conditioned and receiver covariance matrix is well-conditioned. This leads to a partitioning of the transmit and receive covariance spaces based on their conditioning with variable feedback requirements to achieve an operational performance level in the different partitions. When channel statistics are difficult to obtain at the transmitter, we propose a universal codebook design (also implemented by the rotation-scaling maps) that is robust to channel statistics. Numerical studies show that even few bits of feedback, when applied with our designs, lead to near perfect CSI performance in a variety of correlated channel conditions.

18.
Two universal lossy data compression schemes, one with fixed rate and the other with fixed distortion, are presented, based on the well-known Lempel-Ziv algorithm. In the case of fixed rate R, the universal lossy data compression scheme works as follows: first pick a codebook Bn consisting of all reproduction sequences of length n whose Lempel-Ziv codeword length is ⩽nR, and then use Bn to encode the entire source sequence n-block by n-block. This fixed-rate data compression scheme is universal in the sense that for any stationary, ergodic source or for any individual sequence, the sample distortion performance as n→∞ is given almost surely by the distortion rate function.

19.
This paper discusses some algorithms to be used for the generation of an efficient and robust codebook for vector quantization (VQ). Some of the algorithms reduce the required codebook size by 4 or even 8 b to achieve the same level of performance as some of the popular techniques. This helps in greatly reducing the complexity of codebook generation and encoding. We also present a new adaptive tree search algorithm which improves the performance of any product VQ structure. Our results show an improvement of nearly 3 dB over the fixed rate search algorithm at a bit rate of 0.75 b/pixel.

20.
An Efficient Fuzzy-Clustering Algorithm for Initial Codebook Generation (cited 2 times, 0 self-citations)
Codebook design is crucial in vector quantization, and most codebook design algorithms start from an initial codebook. Starting from the shortcomings of the classical LBG algorithm, this paper proposes an efficient fuzzy-clustering-based algorithm for initial codebook generation. By spreading the initial code vectors well across the input vector space and placing them, as far as possible, in regions of high input probability density, the subsequent LBG algorithm avoids getting trapped in local optima, designs a better codebook closer to the global optimum, converges faster, and needs fewer iterations. Experiments applying the algorithm to image coding show that it effectively improves vector quantization in both efficiency and quality.
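For reference, here is a minimal sketch of the LBG (generalized Lloyd) iterations that run after initial codebook generation; as the entry notes, the initial codebook largely decides which local optimum these iterations reach. The stopping rule and the handling of empty cells are illustrative choices.

```python
import numpy as np

def lbg(train, codebook, n_iter=50, eps=1e-6):
    """Refine an initial codebook by alternating nearest-neighbor
    assignment and centroid updates. train: (N, dim), codebook: (K, dim)."""
    prev = np.inf
    distortion = np.inf
    for _ in range(n_iter):
        d = ((train[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=2)
        assign = np.argmin(d, axis=1)                  # nearest-neighbor rule
        distortion = d[np.arange(len(train)), assign].mean()
        if prev - distortion < eps * distortion:       # relative improvement test
            break
        prev = distortion
        for k in range(len(codebook)):                 # centroid condition
            members = train[assign == k]
            if len(members):                           # leave empty cells as-is
                codebook[k] = members.mean(axis=0)
    return codebook, distortion
```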
