Similar Documents
20 similar documents found.
1.
A successive approximation vector quantizer for wavelet transform image coding (Total citations: 13; self-citations: 0; by others: 13)
A coding method for wavelet coefficients of images using vector quantization, called successive approximation vector quantization (SA-W-VQ), is proposed. In this method, each vector is coded by a series of vectors of decreasing magnitudes until a certain distortion level is reached. The successive approximation using vectors is analyzed, and conditions for convergence are derived. It is shown that lattice codebooks are an efficient tool for meeting these conditions without the need for very large codebooks. Regular lattices offer the extra advantage of fast encoding algorithms. In SA-W-VQ, distortion equalization of the wavelet coefficients can be achieved together with a high compression ratio and precise bit-rate control. The performance of SA-W-VQ for still image coding is compared against some of the most successful image coding systems reported in the literature. The comparison shows that SA-W-VQ performs remarkably well at several bit rates and on various test images.
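A minimal numpy sketch of the successive-approximation idea (not the authors' exact SA-W-VQ algorithm): a vector is coded as a sum of codevectors with geometrically shrinking scales until the residual is small. The codebook, shrink factor `alpha`, and stopping rule are illustrative assumptions; convergence in general requires the codebook conditions derived in the paper.

```python
import numpy as np

def sa_vq_encode(x, codebook, alpha=0.5, tol=1e-3, max_stages=32):
    """Approximate x as sum(scale_i * codebook[index_i]) with shrinking scales."""
    residual = x.astype(float).copy()
    scale = np.linalg.norm(residual)          # initial magnitude
    indices, scales = [], []
    for _ in range(max_stages):
        if np.linalg.norm(residual) <= tol or scale == 0:
            break
        best = int(np.argmax(codebook @ residual))  # codevector best aligned with residual
        residual -= scale * codebook[best]
        indices.append(best)
        scales.append(scale)
        scale *= alpha                        # magnitudes shrink geometrically
    return indices, scales

# toy usage with a random codebook of unit vectors (rows)
rng = np.random.default_rng(0)
cb = rng.normal(size=(64, 4))
cb /= np.linalg.norm(cb, axis=1, keepdims=True)
idx, sc = sa_vq_encode(rng.normal(size=4), cb)
xhat = sum(s * cb[i] for s, i in zip(sc, idx))
```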

2.
This paper presents a new lossy coding scheme based on the 3D wavelet transform and lattice vector quantization for volumetric medical images. The main contribution of this work is the design of a new codebook enclosing a multidimensional dead zone during the quantization step, which better accounts for correlations between neighboring voxels. Furthermore, we present an efficient rate–distortion model that simplifies the bit-allocation procedure for our intra-band scheme. Our algorithm has been evaluated on several CT and MR image volumes. At high compression ratios, we show that it can outperform the best existing methods in terms of the rate–distortion trade-off. In addition, our method better preserves details and thus produces reconstructed images that are less blurred than those of the well-known 3D SPIHT algorithm, which stands as a reference.
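The abstract does not specify the dead-zone geometry, so the following sketch only shows the general idea on the cubic lattice Z^n rather than the authors' codebook: any coefficient vector whose norm falls inside an assumed dead-zone radius collapses to the zero vector.

```python
import numpy as np

def deadzone_lattice_quantize(v, step, dz_radius):
    """Map vector v to the scaled integer lattice step*Z^n, except that
    vectors inside the dead zone (||v|| < dz_radius) collapse to zero,
    mimicking a multidimensional dead zone."""
    if np.linalg.norm(v) < dz_radius:
        return np.zeros_like(v, dtype=float)
    return step * np.round(v / step)

print(deadzone_lattice_quantize(np.array([0.3, -0.2, 0.1]), step=1.0, dz_radius=1.0))
print(deadzone_lattice_quantize(np.array([3.4, -2.2, 5.1]), step=1.0, dz_radius=1.0))
```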

3.
Lattice-based gray-level watermarking (Total citations: 1; self-citations: 0; by others: 1)
Li Xiaoqiang, Acta Electronica Sinica (《电子学报》), 2006, 34(B12): 2438–2442
A new gray-level watermarking algorithm is proposed. First, the gray-level watermark is preprocessed with quantization to compress the watermark data. The host image is then decomposed with a wavelet transform, and the watermark embedding and extraction algorithms are constructed in the wavelet domain using lattice vector quantization; extraction does not require the original image. To strengthen security, a chaotic sequence is used as a key to modulate the watermark sequence. Experimental results show that, compared with similar algorithms, the proposed one improves watermark robustness while keeping good perceptual quality in the watermarked image.
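The abstract does not give the embedding rule; a common way to realize quantization-based embedding with chaotic key modulation is quantization index modulation (QIM), sketched below. The quantization step, the logistic-map seed, and the XOR modulation are all illustrative assumptions, not the paper's exact scheme.

```python
import numpy as np

def logistic_sequence(seed, n, mu=3.99):
    """Chaotic logistic-map key stream, thresholded to bits."""
    x, out = seed, []
    for _ in range(n):
        x = mu * x * (1 - x)
        out.append(1 if x > 0.5 else 0)
    return np.array(out, dtype=np.uint8)

def qim_embed(coeff, bit, step=8.0):
    """Embed one bit in a wavelet coefficient by snapping it to an even
    (bit 0) or odd (bit 1) multiple of step/2."""
    q = np.round(coeff / step) * step
    return q if bit == 0 else q + step / 2.0

def qim_extract(coeff, step=8.0):
    return int(abs(coeff - np.round(coeff / step) * step) > step / 4.0)

# modulate the watermark bits with the chaotic key before embedding
wm = np.array([1, 0, 1, 1], dtype=np.uint8)
key = logistic_sequence(0.3141, len(wm))
modulated = wm ^ key          # XOR with the chaotic sequence
```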

4.
This paper evaluates the performance of an image compression system based on wavelet subband decomposition and vector quantization. The images are decomposed using wavelet filters into a set of subbands with different resolutions corresponding to different frequency bands. The resulting subbands are vector quantized using the Linde-Buzo-Gray (1980) algorithm and various fuzzy algorithms for learning vector quantization (FALVQ). These algorithms perform vector quantization by updating all prototypes of a competitive neural network through an unsupervised learning process. The quality of the multiresolution codebooks designed by these algorithms is measured on reconstructed images from the training set used for codebook design and on reconstructed images from a separate testing set.
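A minimal sketch of the unsupervised competitive update underlying such codebook-design algorithms; the fuzzy membership weighting of the losing prototypes, which is what distinguishes FALVQ, is omitted, and `lr` is an assumed learning rate.

```python
import numpy as np

def competitive_update(prototypes, x, lr=0.05):
    """One competitive-learning step: the winning prototype (nearest to
    training vector x) moves toward x. prototypes must be a float array."""
    winner = int(np.argmin(np.linalg.norm(prototypes - x, axis=1)))
    prototypes[winner] += lr * (x - prototypes[winner])
    return winner
```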

5.
Thanks to its good joint space-frequency localization, the wavelet transform plays an important role in the image coding standards JPEG2000 and MPEG-4. This paper applies an orthogonal wavelet basis to the image, reorganizes the wavelet coefficients into wavelet blocks, and then proposes an algorithm that constructs a wavelet-block quantization matrix yielding an optimal bit allocation. The algorithm gathers the statistics of the wavelet-coefficient distribution in a new way and, combined with properties of the human visual system, uses a dynamic strategy to produce optimal wavelet-block quantization matrices over a wide range of bit rates.
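To make the mechanics concrete, here is a hedged sketch of per-subband quantization after a wavelet decomposition, using PyWavelets; the steps in `q` stand in for the paper's optimized quantization matrix and are purely illustrative.

```python
import numpy as np
import pywt

# One-level decomposition, then uniform quantization of each subband with
# an illustrative per-subband step (the paper optimizes these steps).
img = np.random.default_rng(1).normal(size=(64, 64))
cA, (cH, cV, cD) = pywt.dwt2(img, 'db4')
q = {'A': 4.0, 'H': 8.0, 'V': 8.0, 'D': 16.0}   # assumed steps
quantized = {name: np.round(band / q[name])
             for name, band in zip('AHVD', (cA, cH, cV, cD))}
```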

6.
We propose a new scheme of designing a vector quantizer for image compression. First, a set of codevectors is generated using the self-organizing feature map algorithm. Then, the set of blocks associated with each codevector is modeled by a cubic surface for better perceptual fidelity of the reconstructed images. Mean-removed vectors from a set of training images are used for the construction of a generic codebook. Further, Huffman coding of the indices generated by the encoder and difference-coded mean values of the blocks are used to achieve a better compression ratio. We propose two indices for quantitative assessment of the psychovisual quality (blocking effect) of the reconstructed image. Our experiments on several training and test images demonstrate that the proposed scheme can produce reconstructed images of good quality while achieving compression at low bit rates. Index terms: cubic surface fitting, generic codebook, image compression, self-organizing feature map, vector quantization.
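A small sketch of the mean-removed block formation step; the SOFM training, cubic-surface modeling, and Huffman stages are omitted, and the block size `b=4` is an assumption.

```python
import numpy as np

def mean_removed_blocks(img, b=4):
    """Split an image into b x b blocks and separate each block into its
    mean (difference-coded in the paper) and a mean-removed residual
    vector (the part that is vector quantized)."""
    h, w = img.shape
    hb, wb = h // b, w // b
    blocks = (img[:hb * b, :wb * b]
              .reshape(hb, b, wb, b).swapaxes(1, 2).reshape(-1, b * b))
    means = blocks.mean(axis=1, keepdims=True)
    return blocks - means, means.ravel()

residuals, means = mean_removed_blocks(np.random.default_rng(3).normal(size=(32, 32)))
```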

7.
This paper presents a novel image coding scheme using M-channel linear-phase perfect-reconstruction filterbanks (LPPRFBs) in the embedded zerotree wavelet (EZW) framework introduced by Shapiro (1993). The innovation here is to replace the EZW's dyadic wavelet transform by M-channel uniform-band maximally decimated LPPRFBs, which offer finer frequency-spectrum partitioning and higher energy compaction. The transform stage can now be implemented as a block transform, which supports parallel processing and facilitates region-of-interest coding/decoding. For hardware implementation, the transform boasts efficient lattice structures, which employ a minimal number of delay elements and are robust under quantization of the lattice coefficients. The resulting compression algorithm also retains all the attractive properties of the EZW coder and its variations, such as progressive image transmission, embedded quantization, exact bit-rate control, and idempotency. Despite its simplicity, our new coder outperforms some of the best image coders published previously in the literature for almost all test images (especially natural, hard-to-code ones) at almost all bit rates.

8.
This paper presents a contrast-based quantization strategy for use in lossy wavelet image compression that attempts to preserve visual quality at any bit rate. Based on the results of recent psychophysical experiments using near-threshold and suprathreshold wavelet subband quantization distortions presented against natural-image backgrounds, subbands are quantized such that the distortions in the reconstructed image exhibit root-mean-squared contrasts selected based on image, subband, and display characteristics, and on a measure of total visual distortion, so as to preserve the visual system's ability to integrate edge structure across scale space. Within a single, unified framework, the proposed contrast-based strategy yields images which are competitive in visual quality with results from current visually lossless approaches at high bit rates, and which demonstrate improved visual quality over current visually lossy approaches at low bit rates. This strategy operates in the context of both nonembedded and embedded quantization, the latter of which yields a highly scalable codestream that attempts to maintain visual quality at all bit rates; a specific application of the proposed algorithm to JPEG-2000 is presented.
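As a rough illustration only, one simplified reading of "root-mean-squared contrast" of a reconstruction-error subband is the RMS error normalized by mean display luminance; the paper's exact definition also folds in subband and display characteristics.

```python
import numpy as np

def rms_contrast(error_band, mean_luminance):
    """RMS contrast of a reconstruction-error subband, normalized by the
    mean display luminance (an assumed, simplified definition)."""
    e = np.asarray(error_band, dtype=float)
    return np.sqrt(np.mean(e ** 2)) / mean_luminance
```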

9.
A number of algorithms have been developed for lossy image compression. Among the existing techniques, block-based schemes are widely used because of their tractability, even for complex coding schemes. Fixed block-size coding, the simplest implementation of block-based schemes, suffers from the nonstationary nature of images: formidable blocking artifacts appear at low bit rates. To suppress this degradation, variable block-size coding is utilized; however, the allowable range of sizes is still limited because of complexity issues. By adaptively representing each region by its feature, the input to the coder is transformed into fixed-size (8×8) blocks. This lowers the cross-correlation among regions. Each input feature is also classified into the proper group so that vector quantization can maximize its strength in a manner compatible with human visual sensitivity. The bit rate is minimized with a new bit-allocation algorithm. Simulation results show PSNR performance similar to that of the conventional discrete cosine transform combined with classified vector quantization.

10.
Embedded image coding using zerotrees of wavelet coefficients (Total citations: 20; self-citations: 0; by others: 20)
The embedded zerotree wavelet algorithm (EZW) is a simple, yet remarkably effective, image compression algorithm, having the property that the bits in the bit stream are generated in order of importance, yielding a fully embedded code. The embedded code represents a sequence of binary decisions that distinguish an image from the “null” image. Using an embedded coding algorithm, an encoder can terminate the encoding at any point, thereby allowing a target rate or target distortion metric to be met exactly. Also, given a bit stream, the decoder can cease decoding at any point in the bit stream and still produce exactly the same image that would have been encoded at the bit rate corresponding to the truncated bit stream. In addition to producing a fully embedded bit stream, the EZW consistently produces compression results that are competitive with virtually all known compression algorithms on standard test images. Yet this performance is achieved with a technique that requires absolutely no training, no pre-stored tables or codebooks, and no prior knowledge of the image source. The EZW algorithm is based on four key concepts: (1) a discrete wavelet transform or hierarchical subband decomposition, (2) prediction of the absence of significant information across scales by exploiting the self-similarity inherent in images, (3) entropy-coded successive-approximation quantization, and (4) universal lossless data compression, which is achieved via adaptive arithmetic coding.
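A toy sketch of EZW's successive-approximation threshold schedule and dominant-pass significance test; the zerotree symbols, subordinate passes, and arithmetic coding are omitted, and the coefficient values are made up for illustration.

```python
import numpy as np

def ezw_thresholds(coeffs):
    """Successive-approximation thresholds T, T/2, T/4, ... as in EZW,
    starting from the largest power of two not exceeding max|coeff|."""
    T = 2 ** int(np.floor(np.log2(np.max(np.abs(coeffs)))))
    while T >= 1:
        yield T
        T //= 2

# dominant pass at each threshold: which coefficients become significant
c = np.array([34, -31, 23, 10, -7, 3, 2, -1])
sig = set()
for T in ezw_thresholds(c):
    newly = [i for i in range(len(c)) if i not in sig and abs(c[i]) >= T]
    sig.update(newly)
    print(f"T={T}: newly significant {newly}")
```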

11.
An embedded image compression algorithm based on adaptive wavelet transform (Total citations: 3; self-citations: 1; by others: 2)
Remote-sensing, fingerprint, and seismic images have rich, complex textures and weak local correlation. For such images, this paper proposes an efficient compression and coding algorithm built on an adaptive wavelet transform, a carefully chosen coefficient scanning order, and classified quantization of the wavelet coefficients. Simulation results show that, at the same compression ratio, the reconstruction quality of the proposed algorithm is clearly better than that of the SPIHT algorithm, especially for textured images such as the standard image Barbara.

12.
This paper proposes a significance-coding algorithm based on a hierarchical model. Without changing the JPEG2000 context model, the algorithm codes hierarchical neighborhoods according to the spatial clustering regions of significant coefficients. Experimental results show that the bitstream produced by the new algorithm has better autocorrelation than the bitstreams produced by JPEG2000 or by Hilbert-curve scanning; its average bit rate improves on JPEG2000's stripe scan and on Hilbert-curve scanning by 1.06% and 0.57%, respectively, and on the rate obtained by JPEG2000's context-quantization optimization by 1.08%.

13.
Pyramidal lattice vector quantization for multiscale image coding (Total citations: 12; self-citations: 0; by others: 12)
This paper introduces a new image coding scheme using lattice vector quantization. The proposed method involves two steps: a biorthogonal wavelet transform of the image, and lattice vector quantization of the wavelet coefficients. In order to obtain a compromise between minimum distortion and bit rate, the lattice must be truncated and scaled suitably. To meet this goal, we need to know how many lattice points lie within the truncated area. We investigate the case of Laplacian sources, where the surfaces of equal probability are spheres for the L1 metric (pyramids), for arbitrary lattices. We give explicit generating functions for the codebook sizes of the most useful lattices, such as Z^n, D_n, E_8, and Λ_16.
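For the simplest lattice, Z^n, the pyramid population has a classical closed form: the number of points with L1 norm exactly r is a sum over the count k of nonzero coordinates. The sketch below computes it and checks it by brute force; it covers only Z^n, not the D_n, E_8, or Λ_16 generating functions the paper derives.

```python
from math import comb
from itertools import product

def z_lattice_pyramid_count(n, r):
    """Number of points of Z^n with L1 norm exactly r (r >= 1): choose k
    nonzero coordinates (C(n,k)), pick their signs (2^k), and compose r
    into k positive parts (C(r-1, k-1))."""
    return sum(2**k * comb(n, k) * comb(r - 1, k - 1)
               for k in range(1, min(n, r) + 1))

# brute-force check for a small case
n, r = 3, 4
brute = sum(1 for p in product(range(-r, r + 1), repeat=n)
            if sum(map(abs, p)) == r)
assert brute == z_lattice_pyramid_count(n, r)
```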

14.
Improved integer wavelet image coding in the zerotree framework (Total citations: 2; self-citations: 0; by others: 2)
The integer wavelet transform (IWT) has many advantages, but after an image undergoes the IWT, its energy compaction is much worse than with the first-generation wavelet transform, which hurts embedded zerotree wavelet (EZW) encoding. This paper therefore proposes a new algorithm with two improvements. First, an "integer-square quantization-threshold selection" scheme is adopted: because the dynamic range of subband coefficient magnitudes after the IWT is small and the energy compaction is poor, the squares of the positive integers starting from 1 are taken as quantization thresholds, and an adjustable threshold system is introduced that selects thresholds according to the importance of different image regions, thereby increasing the number of zerotrees. Second, a new zerotree coding approach based on an index table and run-length coding is proposed, which simplifies encoding and decoding. Experiments show that the algorithm tightly integrates the integer wavelet transform with zerotree coding, improving both compression quality and efficiency.
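A minimal sketch of the two ingredients named above: a one-level integer Haar (S-transform) step, which is exactly invertible in integer arithmetic, and the integer-square threshold sequence 1, 4, 9, ... The adjustable per-region thresholds and the index-table coding are omitted, and even-length input is assumed.

```python
import numpy as np

def integer_haar(x):
    """One-level integer Haar (S-transform): integer-to-integer and
    exactly invertible. Assumes even-length input."""
    a, b = x[0::2].astype(np.int64), x[1::2].astype(np.int64)
    h = a - b                 # detail coefficients
    l = b + (h >> 1)          # approximation: floor of the pairwise mean
    return l, h

def integer_haar_inverse(l, h):
    b = l - (h >> 1)
    a = b + h
    out = np.empty(l.size + h.size, dtype=np.int64)
    out[0::2], out[1::2] = a, b
    return out

# thresholds taken as squares of positive integers, per the abstract
thresholds = [k * k for k in range(1, 8)]   # 1, 4, 9, 16, ...

x = np.array([7, 3, -2, 5, 10, 10], dtype=np.int64)
l, h = integer_haar(x)
assert np.array_equal(integer_haar_inverse(l, h), x)
```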

15.
This paper proposes a very-low-bit-rate image coding algorithm based on the biorthogonal wavelet transform (BWT) and fuzzy vector quantization (FVQ). By constructing cross-band vectors that match the characteristics of the image's wavelet-transform coefficients, the algorithm fully exploits the correlation among wavelet coefficients in different frequency bands, effectively improving coding efficiency and reconstruction quality. Following the idea of nonlinear interpolative vector quantization (NLIVQ), low-dimensional feature vectors are extracted from high-dimensional vectors, and a new fuzzy vector quantization method, progressively constructed fuzzy clustering (PCFC), is proposed for quantizing the feature vectors, greatly improving quantization speed and codebook quality. Experimental results show that the algorithm still achieves high-quality reconstructed images with PSNR > 30 dB at a bit rate of 0.172 bpp.

16.
The paper describes a lossy image codec that uses a noncausal (or bilateral) prediction model coupled with vector quantization. The noncausal prediction model is an alternative to the causal (or unilateral) model that is commonly used in differential pulse code modulation (DPCM) and other codecs with a predictive component. We show how to obtain a recursive implementation of the noncausal image model without compromising its optimality and how to apply this in coding in much the same way as a causal predictor. We report experimental compression results that demonstrate the superiority of using a noncausal model based predictor over using traditional causal predictors. The codec is shown to produce high-quality compressed images at low bit rates such as 0.375 b/pixel. This quality is contrasted with the degraded images that are produced at the same bit rates by codecs using causal predictors or standard discrete cosine transform/Joint Photographic Experts Group-based (DCT/JPEG-based) algorithms.
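A hedged sketch of what a noncausal (bilateral) predictor looks like: each interior pixel is predicted from all four neighbors, unlike a causal DPCM predictor that may only use already-decoded pixels. The equal 0.25 weights are an assumption; the paper derives an optimal recursive implementation instead.

```python
import numpy as np

def bilateral_predict(img):
    """Predict each interior pixel from its four neighbors
    (above, below, left, right) with equal assumed weights."""
    p = np.zeros_like(img, dtype=float)
    p[1:-1, 1:-1] = 0.25 * (img[:-2, 1:-1] + img[2:, 1:-1] +
                            img[1:-1, :-2] + img[1:-1, 2:])
    return p

img = np.random.default_rng(2).normal(size=(16, 16))
residual = img - bilateral_predict(img)   # residual to be vector quantized
```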

17.
A reference-frame compression algorithm for use in video decoders is proposed. Using the wavelet transform, scalar quantization, and bit allocation, it compresses reference frames at a fixed compression ratio; the structure is simple and easy to implement, and the fixed ratio allows convenient random access to the compressed data. Experiments show that the algorithm maintains good image quality while reducing the memory cost of storing reference frames.
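The abstract does not detail the bit allocation, so here is the classical high-rate allocation across subbands that such fixed-rate designs often start from: each subband gets the average rate plus half the log2 ratio of its variance to the geometric mean of all variances. Clipping negative allocations to zero is a simplification; a real allocator would redistribute the surplus.

```python
import numpy as np

def allocate_bits(variances, avg_rate):
    """Classical high-rate bit allocation: b_k = avg_rate
    + 0.5*log2(var_k / geometric_mean(var)). Variances must be > 0."""
    v = np.asarray(variances, dtype=float)
    geo_mean = np.exp(np.mean(np.log(v)))
    bits = avg_rate + 0.5 * np.log2(v / geo_mean)
    return np.clip(bits, 0.0, None)

print(allocate_bits([100.0, 10.0, 1.0], avg_rate=2.0))  # -> [3.66, 2.0, 0.34]
```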

18.
To use wavelet packets for lossy data compression, the following issues must be addressed: quantization of the wavelet subbands, allocation of bits to each subband, and best-basis selection. We present an algorithm for wavelet packets that systematically identifies all bit allocations/best-basis selections on the lower convex hull of the rate-distortion curve. We demonstrate the algorithm on tree-structured vector quantizers used to code image subbands from the wavelet packet decomposition.
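A compact sketch of extracting the lower convex hull of candidate (rate, distortion) operating points, the object the paper's algorithm enumerates; this is just Andrew's monotone chain restricted to the lower half, and the joint best-basis/bit-allocation search itself is omitted.

```python
def rd_lower_hull(points):
    """Operating points on the lower convex hull of the R-D plane."""
    def cross(o, a, b):
        return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])
    hull = []
    for p in sorted(set(points)):
        # pop points that would make a non-left turn (lie above the hull)
        while len(hull) >= 2 and cross(hull[-2], hull[-1], p) <= 0:
            hull.pop()
        hull.append(p)
    return hull

# toy operating points (rate, distortion); only hull points are kept
ops = [(1, 10), (2, 6), (3, 5), (4, 2), (2.5, 9)]
print(rd_lower_hull(ops))   # -> [(1, 10), (2, 6), (4, 2)]
```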

19.
To maximize rate-distortion performance while remaining faithful to the JPEG syntax, the joint optimization of the Huffman tables, quantization step sizes, and DCT indices of a JPEG encoder is investigated. Given Huffman tables and quantization step sizes, an efficient graph-based algorithm is first proposed to find the optimal DCT indices in the form of run-size pairs. Based on this graph-based algorithm, an iterative algorithm is then presented to jointly optimize run-length coding, Huffman coding, and quantization table selection. The proposed iterative algorithm not only results in a compressed bitstream completely compatible with existing JPEG and MPEG decoders, but is also computationally efficient. Furthermore, when tested over standard test images, it achieves the best JPEG compression results, to the extent that its own JPEG compression performance even exceeds the quoted PSNR results of some state-of-the-art wavelet-based image coders such as Shapiro's embedded zerotree wavelet algorithm at the common bit rates under comparison. Both the graph-based algorithm and the iterative algorithm can be applied to application areas such as web image acceleration, digital camera image compression, MPEG frame optimization, and transcoding.
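For reference, this is how quantized DCT indices are turned into the (run, size) pairs that the graph-based algorithm optimizes over; it is the standard JPEG run-length convention, not the paper's optimization itself.

```python
def run_size_pairs(zigzag):
    """Turn a zigzagged block of quantized DCT indices into JPEG-style
    (run, size) pairs: run = zeros preceding a nonzero coefficient,
    size = bit length of its magnitude; (0, 0) marks end-of-block."""
    pairs, run = [], 0
    for c in zigzag[1:]:               # skip the DC index
        if c == 0:
            run += 1
        else:
            while run > 15:            # JPEG's ZRL escape for runs > 15
                pairs.append((15, 0))
                run -= 16
            pairs.append((run, abs(c).bit_length()))
            run = 0
    pairs.append((0, 0))               # EOB
    return pairs

print(run_size_pairs([12, 5, 0, 0, -3, 0, 1] + [0] * 57))
# -> [(0, 3), (2, 2), (1, 1), (0, 0)]
```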

20.
Lattice-quantization-based wavelet image coding (Total citations: 2; self-citations: 0; by others: 2)
Applying the wavelet transform to image compression has shown attractive prospects. This paper, for the first time, combines lattice quantization with run-length coding: composite entropy coding handles the nonzero lattice points while run-length coding handles the zero lattice points, achieving high coding efficiency. Another advantage of the method is that the output rate of the encoder can be controlled flexibly. Moreover, since lattice quantization admits fast algorithms, the computational complexity of the whole encoder is very low. Its overall performance is superior not only to DCT coding but also to common lattice quantization algorithms.
