Similar Documents
19 similar documents found.
1.
This paper proposes a very-low-bit-rate image coding algorithm based on the biorthogonal wavelet transform (BWT) and fuzzy vector quantization (FVQ). By constructing cross-band vectors matched to the characteristics of the image's wavelet coefficients, the algorithm fully exploits the correlation among wavelet coefficients in different frequency bands, effectively improving coding efficiency and reconstruction quality. Following the idea of nonlinear interpolative vector quantization (NLIVQ), low-dimensional feature vectors are extracted from the high-dimensional vectors, and a new fuzzy vector quantization method, the progressively constructed fuzzy clustering (PCFC) algorithm, is proposed for quantizing the feature vectors, greatly improving quantization speed and codebook quality. Experimental results show that the algorithm still produces high-quality reconstructed images with PSNR > 30 dB at a bit rate of 0.172 bpp.
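The cross-band vector idea can be illustrated with a short sketch. This is a minimal illustration under assumptions of my own, not the paper's exact construction: for each coarse-scale location it groups one coefficient from each level-2 detail subband with the corresponding 2x2 child blocks from the level-1 subbands; pywt supplies the biorthogonal transform, and the function name, wavelet choice and vector layout are illustrative.

```python
import numpy as np
import pywt

def cross_band_vectors(img, wavelet="bior4.4", levels=2):
    # Assumes image dimensions divisible by 4 so "periodization" halves sizes exactly.
    # coeffs = [LL2, (LH2, HL2, HH2), (LH1, HL1, HH1)]
    coeffs = pywt.wavedec2(img, wavelet, mode="periodization", level=levels)
    coarse = coeffs[1]   # level-2 (coarse) detail subbands
    fine = coeffs[2]     # level-1 (fine) detail subbands
    h2, w2 = coarse[0].shape
    vectors = []
    for i in range(h2):
        for j in range(w2):
            v = [band[i, j] for band in coarse]        # one coefficient per coarse subband
            for band in fine:                          # 2x2 child block per fine subband
                v.extend(band[2 * i:2 * i + 2, 2 * j:2 * j + 2].ravel())
            vectors.append(v)
    return np.asarray(vectors)    # shape (h2 * w2, 3 + 3 * 4) = (h2 * w2, 15)
```

An NLIVQ-style feature vector would then be a low-dimensional projection of each 15-D cross-band vector (for instance its three coarse components), and only those feature vectors would be clustered and quantized.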

2.
This paper proposes a very-low-bit-rate image coding algorithm based on the biorthogonal wavelet transform (BWT) and fuzzy vector quantization (FVQ). By constructing cross-band vectors matched to the characteristics of the image's wavelet coefficients, the algorithm fully exploits the correlation among wavelet coefficients in different frequency bands, effectively improving coding efficiency and reconstruction quality.

3.
方涛  郭达志 《电子学报》1998,26(4):12-14,23
The wavelet transform of an image provides localized space-frequency information, and vector quantization in the wavelet domain has become a widely used data compression technique. After the wavelet transform, the wavelet components of the different subbands exhibit correlation and spatial constraints; moreover, the human visual system is more sensitive to high-frequency components in the horizontal and vertical directions than in the diagonal direction. Taking both into account, this paper proposes a spatially constrained vector quantization method that improves coding efficiency and the quality of the reconstructed image at the same time.

4.
Image compression coding using the wavelet transform and constraint matrices   Cited by: 2 (self: 0, others: 2)
何立  王延平 《电子学报》1995,23(4):20-23
This paper combines the wavelet transform with vector quantization for image compression coding, analyzing the structure of the wavelet-transformed image data and the correlation among the wavelet components. This correlation exists because the wavelet transform has good space-frequency localization. A structural constraint matrix is constructed to describe the correlation, and the conventional vector quantization algorithm is improved on this basis; the improved algorithm reduces both the computational load and the bit rate.

5.
郑勇  何宁  朱维乐 《信号处理》2001,17(6):498-505
Based on the ideas of zero-tree coding, vector classification and trellis-coded quantization, this paper proposes a new method that applies trellis-coded vector quantization to wavelet images after spatial vector combination and classification. The method fully exploits the frequency correlation and spatial constraints of the high-frequency subband coefficients, classifying vectors jointly by combined-vector energy and a zero-tree criterion; the whole image needs only a single quantization codebook, and the classification side information occupies few bits. Weighted trellis-coded vector quantization is applied to the important class of vectors: convolutional coding expands the signal space to increase the Euclidean distance between quantized signals, and the Viterbi algorithm searches for the optimal quantization sequence, gaining about 0.6 dB over plain vector quantization. The method has moderate encoding complexity and simple decoding, and achieves very good compression performance.
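The Viterbi search used in trellis-coded quantization can be sketched for the scalar case; the vector case replaces scalar levels with codevectors and squared errors with vector distortions. The 4-state trellis, subset labelling and function names below are my own illustrative choices, not the authors' design.

```python
import numpy as np

# Illustrative 4-state trellis: NEXT[s][b] is the next state and SUBSET[s][b]
# the codebook subset used when input bit b is taken from state s.
NEXT = [[0, 2], [0, 2], [1, 3], [1, 3]]
SUBSET = [[0, 2], [2, 0], [1, 3], [3, 1]]

def tcq_encode(x, levels):
    """Viterbi search for the minimum-distortion path (scalar TCQ sketch).
    `levels` is a 1-D codebook (at least 4 entries) split into 4 subsets by index mod 4.
    Each sample costs 1 path bit plus the bits for the index inside its subset."""
    levels = np.asarray(levels, dtype=float)
    subsets = [levels[i::4] for i in range(4)]
    INF = float("inf")
    cost = [0.0, INF, INF, INF]              # start the trellis in state 0
    back = []                                 # per step, per state: (prev_state, bit, level)
    for xt in x:
        new_cost, new_back = [INF] * 4, [None] * 4
        for s in range(4):
            if cost[s] == INF:
                continue
            for b in (0, 1):
                sub = subsets[SUBSET[s][b]]
                k = int(np.argmin(np.abs(sub - xt)))     # best level in this branch's subset
                c = cost[s] + (xt - sub[k]) ** 2
                ns = NEXT[s][b]
                if c < new_cost[ns]:
                    new_cost[ns], new_back[ns] = c, (s, b, sub[k])
        cost = new_cost
        back.append(new_back)
    s = int(np.argmin(cost))                  # trace back the best surviving path
    bits, recon = [], []
    for step in reversed(back):
        ps, b, lv = step[s]
        bits.append(b)
        recon.append(lv)
        s = ps
    return bits[::-1], recon[::-1]
```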

6.
A segmentation-based wavelet coding method is proposed. The method segments an image into two regions, a stationary region (background) and a region containing edge information (foreground), and then codes each region with the coder best suited to it. The background region is wavelet-transformed and coded with the SPIHT algorithm, while the foreground region is coded with predictive vector quantization (PVQ). Experimental results show that, compared with SPIHT at the same compression ratio, the method improves both the objective (mean squared error, MSE) and subjective (visual) quality of the reconstructed image.

7.
An image compression algorithm based on the wavelet transform and vector quantization   Cited by: 1 (self: 0, others: 1)
The wavelet transform and vector quantization are both important image compression methods. Exploiting the characteristics of the wavelet coefficients, the image is wavelet-decomposed; the low-frequency component, where most of the energy is concentrated, is scalar-quantized, and the residual produced by the scalar quantization is then combined with the high-frequency components to form vectors, which are vector-quantized. Experimental results show that the algorithm effectively improves the quality of the reconstructed image and achieves a high signal-to-noise ratio.
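The two-stage quantization described here can be sketched roughly as follows. This is a simplified single-level variant with invented parameter values; the codebook is assumed to be trained beforehand (e.g. with LBG, see the sketch after entry 14), and entropy coding of the indices is omitted.

```python
import numpy as np
import pywt

def encode(img, codebook, step=8.0, wavelet="bior4.4"):
    LL, (LH, HL, HH) = pywt.dwt2(img, wavelet, mode="periodization")
    ll_idx = np.round(LL / step).astype(int)        # uniform scalar quantization of LL
    residual = LL - ll_idx * step                   # SQ residual, kept for the VQ stage
    # one 4-D vector per LL position: [residual, LH, HL, HH]
    vectors = np.stack([residual, LH, HL, HH], axis=-1).reshape(-1, 4)
    # nearest-codevector search over the pre-trained codebook (shape (K, 4))
    d = ((vectors[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=2)
    vq_idx = d.argmin(axis=1)
    return ll_idx, vq_idx

def decode(ll_idx, vq_idx, codebook, shape, step=8.0, wavelet="bior4.4"):
    rec = codebook[vq_idx].reshape(*shape, 4)       # shape = LL-band shape
    LL = ll_idx * step + rec[..., 0]                # add back the quantized residual
    return pywt.idwt2((LL, (rec[..., 1], rec[..., 2], rec[..., 3])),
                      wavelet, mode="periodization")
```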

8.
崔宝侠  段勇 《电视技术》2003,(9):12-13,20
An effective image compression method is proposed: the image is decomposed at multiple resolutions by the wavelet transform, and the wavelet coefficients are coded by vector quantization (VQ). The codebook is designed with a genetic algorithm (GA) combined with fuzzy c-means (FCM) clustering, which effectively overcomes FCM's tendency to get trapped in local optima and its sensitivity to initial values. Experimental results show that the algorithm considerably improves the quality of the reconstructed image.
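The FCM part of this codebook design can be sketched as below; the genetic-algorithm wrapper the authors add to escape poor local optima is deliberately omitted, and all names and defaults are illustrative.

```python
import numpy as np

def fcm_codebook(train_vectors, c, m=2.0, iters=50, seed=0):
    X = np.asarray(train_vectors, dtype=float)         # (N, dim) training vectors
    rng = np.random.default_rng(seed)
    U = rng.random((len(X), c))
    U /= U.sum(axis=1, keepdims=True)                  # fuzzy memberships, rows sum to 1
    for _ in range(iters):
        W = U ** m
        centers = (W.T @ X) / W.sum(axis=0)[:, None]   # weighted centroids = codevectors
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2) + 1e-12
        U = 1.0 / (d ** (2.0 / (m - 1.0)))             # standard FCM membership update
        U /= U.sum(axis=1, keepdims=True)
    return centers                                     # (c, dim) codebook
```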

9.
This paper proposes a video coding algorithm based on the biorthogonal wavelet transform and lattice vector quantization. In this scheme, the wavelet transform decomposes each image into multiresolution subband images, multiresolution motion estimation provides inter-frame prediction of the subband images, and lattice vector quantization codes the subband prediction residuals, yielding a new moving-picture coding algorithm with good performance.

10.
Classified vector quantization of wavelet images using spatial vector combination   Cited by: 3 (self: 0, others: 3)
This paper proposes a new method that applies classified vector quantization to wavelet images using spatial vector combination. The method fully exploits the frequency correlation and spatial constraints of the high-frequency subband coefficients to regroup them, classifying vectors jointly by combined-vector energy and a zero-tree criterion; the whole image needs only a single quantization codebook, and the classification side information occupies few bits. Vector quantization uses a weighted mean-squared-error criterion based on the characteristics of human vision, which increases the quantization gain. Simulation results show that the method is simple to implement and achieves very good compression at fairly low coding rates.

11.
A wavelet image coding method combined with face detection   Cited by: 9 (self: 2, others: 7)
Building on face detection, this paper proposes a wavelet image coding method combined with vector quantization (VQ). The method fully exploits the characteristics of human vision, so that the reconstructed image retains good subjective quality even at high compression ratios.

12.
In this paper, we propose an image coding scheme that uses variable-blocksize vector quantization (VBVQ) to compress the wavelet coefficients of an image. The scheme is capable of finding an optimal quadtree segmentation of the wavelet coefficients for VBVQ subject to a given bit budget, such that the total distortion of the quantized wavelet coefficients is minimal. Our simulation results show that the proposed coding scheme achieves higher PSNR than other wavelet/VQ or subband/VQ coding schemes.

13.
The generalization of gain adaptation to vector quantization (VQ) is explored in this paper and a comprehensive examination of alternative techniques is presented. We introduce a class of adaptive vector quantizers that can dynamically adjust the "gain" or amplitude scale of code vectors according to the input signal level. The encoder uses a gain estimator to determine a suitable normalization of each input vector prior to VQ encoding. The normalized vectors have reduced dynamic range and can then be more efficiently coded. At the receiver, the VQ decoder output is multiplied by the estimated gain. Both forward and backward adaptation are considered and several different gain estimators are compared and evaluated. Gain-adaptive VQ can be used alone for "vector PCM" coding (i.e., direct waveform VQ) or as a building block in other vector coding schemes. The design algorithm for generating the appropriate gain-normalized VQ codebook is introduced. When applied to speech coding, gain-adaptive VQ achieves significant performance improvement over fixed VQ with a negligible increase in complexity.
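A minimal forward-adaptive sketch of the idea described above: each input vector is normalized by a quantized gain before the codebook search, and the decoder rescales by the same gain. The paper also studies backward adaptation and several gain estimators, none of which is shown here; the RMS estimator and parameter names are assumptions.

```python
import numpy as np

def gvq_encode(x, codebook, gain_levels):
    x = np.asarray(x, dtype=float)
    g = np.sqrt(np.mean(x ** 2)) + 1e-12               # RMS gain estimate
    g_idx = int(np.argmin(np.abs(gain_levels - g)))    # forward adaptation: quantize the gain
    y = x / gain_levels[g_idx]                         # normalized vector, reduced dynamic range
    c_idx = int(((codebook - y) ** 2).sum(axis=1).argmin())
    return g_idx, c_idx                                 # both indices are transmitted

def gvq_decode(g_idx, c_idx, codebook, gain_levels):
    return gain_levels[g_idx] * codebook[c_idx]         # rescale the decoded codevector
```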

14.
Classified vector quantization coding of images using neural networks   Cited by: 3 (self: 0, others: 3)
Vector quantization, as an effective image data compression technique, has been receiving increasing attention. The classic algorithm for designing vector quantizers, the LBG algorithm, is computationally complex, which limits the practicality of vector quantization. This paper discusses a vector quantization technique based on edge-feature classification and implemented with neural networks. Exploiting the sensitivity of the human visual system to image edges, pattern recognition is applied before coding to classify the image content by its edge features, and each class is then vector-quantized separately. Apart from the feature extraction, which uses the discrete cosine transform, both the image classification and the vector quantization are carried out by neural networks.
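For reference, a minimal sketch of the LBG (generalized Lloyd) splitting algorithm that the entry cites as the classic but computation-heavy design method; variable names and the splitting perturbation are illustrative, and the paper's neural-network classification stage is not shown.

```python
import numpy as np

def lbg(train_vectors, codebook_size, iters=20, eps=1e-3):
    # Assumes codebook_size is a power of two (splitting doubles the codebook each round).
    X = np.asarray(train_vectors, dtype=float)
    codebook = np.array([X.mean(axis=0)])               # start from the global centroid
    while len(codebook) < codebook_size:
        codebook = np.vstack([codebook * (1 + eps),     # split every codevector in two
                              codebook * (1 - eps)])
        for _ in range(iters):                          # Lloyd iterations
            d = ((X[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=2)
            nearest = d.argmin(axis=1)
            for k in range(len(codebook)):              # move codevectors to cell centroids
                members = X[nearest == k]
                if len(members):
                    codebook[k] = members.mean(axis=0)
    return codebook
```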

15.
VQ encoding requires expensive computation to search for the codevector closest to the input vector. This paper presents several fast encoding algorithms based on multiple triangle inequalities and the wavelet transform to overcome this problem. The multiple triangle inequalities confine the search range using the intersection of search areas generated from several control vectors; a systematic way of designing the control vectors is also presented. The wavelet transform, combined with partial distance elimination, is used to reduce the computational complexity of the distance calculation between vectors. The proposed algorithms provide the same coding quality as the full-search method, and the experimental results indicate that the new algorithms perform more efficiently than existing algorithms.
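The pruning idea can be sketched as follows. The control vectors here are picked naively rather than by the paper's systematic design, the wavelet-domain transformation of the vectors is omitted, and all names are illustrative; like the paper's methods, the search below still returns the exact full-search result.

```python
import numpy as np

def fast_search(x, codebook, controls, code_to_ctrl):
    """code_to_ctrl[i, j] = ||codebook[i] - controls[j]||, precomputed offline."""
    x_to_ctrl = np.linalg.norm(controls - x, axis=1)
    best_i, best_d2 = 0, float(((codebook[0] - x) ** 2).sum())
    for i in range(1, len(codebook)):
        # triangle inequality: ||x - y_i|| >= max_j | ||x - c_j|| - ||y_i - c_j|| |
        lower = np.max(np.abs(x_to_ctrl - code_to_ctrl[i]))
        if lower * lower >= best_d2:
            continue                                    # this codevector cannot beat the best
        d2 = 0.0
        for k in range(len(x)):                         # partial distance elimination
            d2 += (x[k] - codebook[i, k]) ** 2
            if d2 >= best_d2:
                break                                   # stop summing once over budget
        if d2 < best_d2:
            best_i, best_d2 = i, d2
    return best_i

# offline setup (example): pick a few control vectors and precompute the table
# controls = codebook[:4]
# code_to_ctrl = np.linalg.norm(codebook[:, None, :] - controls[None, :, :], axis=2)
```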

16.
In this paper, we propose a joint source channel coding (JSCC) scheme for the transmission of still images in wireless communication applications. Our test channel is the ionospheric channel, which presents characteristics similar to those found on mobile radio channels, such as fading, multipath and Doppler effect. Because the method is based on a wavelet transform, a self-organising map (SOM) vector quantization (VQ) optimally mapped onto a QAM digital modulation, and an unequal error protection (UEP) strategy, it is particularly well adapted to low bit-rate applications. The compression process consists in applying the SOM VQ to the discrete wavelet transform coefficients and computing several codebooks depending on the sub-images preserved. UEP is achieved with a correcting code applied to the most significant data. The JSCC consists of an optimal mapping of the VQ codebook vectors onto a high spectral efficiency digital modulation, which preserves the topological organization of the codebook along the transmission chain while keeping the system complexity low. The method, applied here to grey-level images, can be used for colour images as well. Transmission tests on several images have shown the robustness of the method even at high bit error rates (BER > 10⁻²). To assess the image quality after transmission, we use a PSNR% (peak signal-to-noise ratio) parameter, defined as the difference between the PSNR after compression at the transmitter and the PSNR after reception at the receiver. This parameter clearly shows that 95% of the PSNR is preserved when the BER is below 10⁻².

17.
A hybrid coding system that uses a combination of set partition in hierarchical trees (SPIHT) and vector quantisation (VQ) for image compression is presented. Here, the wavelet coefficients of the input image are rearranged to form the wavelet trees that are composed of the corresponding wavelet coefficients from all the subbands of the same orientation. A simple tree classifier has been proposed to group wavelet trees into two classes based on the amplitude distribution. Each class of wavelet trees is encoded using an appropriate procedure, specifically either SPIHT or VQ. Experimental results show that advantages obtained by combining the superior coding performance of VQ and efficient cross-subband prediction of SPIHT are appreciable for the compression task, especially for natural images with large portions of textures. For example, the proposed hybrid coding outperforms SPIHT by 0.38 dB in PSNR at 0.5 bpp for the Bridge image, and by 0.74 dB at 0.5 bpp for the Mandrill image.

18.
A hybrid BTC-VQ-DCT (block truncation coding, vector quantization, and discrete cosine transform) image coding algorithm is presented. The algorithm combines the simple computation and edge preservation properties of BTC and the high fidelity and high-compression ratio of adaptive DCT with the high-compression ratio and good subjective performance of VQ, and can be implemented with significantly lower coding delays than either VQ or DCT alone. The bit-map generated by BTC is decomposed into a set of vectors which are vector quantized. Since the space of the BTC bit-map is much smaller than that of the original 8-b image, a lookup-table-based VQ encoder has been designed to 'fast encode' the bit-map. Adaptive DCT coding using residual error feedback is implemented to encode the high-mean and low-mean subimages. The overall computational complexity of BTC-VQ-DCT coding is much less than either DCT and VQ, while the fidelity performance is competitive. The algorithm has strong edge-preserving ability because of the implementation of BTC as a precompress decimation. The total compression ratio is about 10:1.
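A minimal sketch of the BTC stage only: each block is reduced to a bit-map plus a high mean and a low mean. The lookup-table VQ of the bit-maps and the adaptive DCT coding of the mean subimages described in the abstract are not shown, and block size and names are illustrative.

```python
import numpy as np

def btc_block(block):
    mean = block.mean()
    bitmap = block >= mean                                       # 1 = pixel at or above the block mean
    high = block[bitmap].mean() if bitmap.any() else mean        # "high mean"
    low = block[~bitmap].mean() if (~bitmap).any() else mean     # "low mean"
    return bitmap, high, low

def btc_decode_block(bitmap, high, low):
    return np.where(bitmap, high, low)                           # two-level reconstruction

# usage on 4x4 blocks of a grayscale image `img` (dimensions divisible by 4 assumed):
# for r in range(0, img.shape[0], 4):
#     for c in range(0, img.shape[1], 4):
#         bm, hi, lo = btc_block(img[r:r+4, c:c+4].astype(float))
```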

19.
A correlation-inheriting vector quantisation (VQ) image coding algorithm is presented to re-encode the output indices of VQ after analysing the correlation inheritance of the indices' neighbourhood. Simulation results indicate that this algorithm can compact the VQ index to achieve an ~21:1 compression ratio on average. In accordance with this new algorithm, an efficient very large scale integration architecture is also derived that, after synthesis, achieves a system clock rate of 110 MHz using a 0.35 µm complementary metal-oxide-semiconductor standard library.

