Similar Documents
20 similar documents retrieved.
1.
The VQ reversible data embedding technique allows the original VQ coding to be completely restored after the embedded data are extracted. In this paper, we propose a new reversible scheme based on locally adaptive coding for VQ-compressed images. A fractal Hilbert curve replaces the traditional scan order of the VQ index table: the index table is pre-processed to create the curve, and following the curve while processing the index table yields better compression rates during data embedding. In addition, whereas the scheme of Chang et al. compresses the input VQ index value only when the to-be-embedded bit b is 0, our method performs compression in both cases, b = 0 and b = 1. Experimental results show that the proposed method achieves the best compression rate and the highest embedding capacity among comparable reversible VQ embedding methods.
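The Hilbert-curve traversal this scheme relies on is easy to illustrate. Below is a minimal sketch in Python (function names and the use of NumPy are my own; the paper's locally adaptive coding and embedding steps are omitted) that generates the order in which a 2^k x 2^k VQ index table is visited along a Hilbert curve:

```python
import numpy as np

def hilbert_d2xy(order, d):
    """Convert a distance d along a Hilbert curve covering a
    2**order x 2**order grid into (x, y) coordinates."""
    x = y = 0
    t = d
    s = 1
    while s < (1 << order):
        rx = 1 & (t // 2)
        ry = 1 & (t ^ rx)
        if ry == 0:                      # rotate the quadrant if needed
            if rx == 1:
                x, y = s - 1 - x, s - 1 - y
            x, y = y, x
        x += s * rx
        y += s * ry
        t //= 4
        s *= 2
    return x, y

def hilbert_scan(index_table):
    """Flatten a 2^k x 2^k VQ index table in Hilbert-curve order."""
    n = index_table.shape[0]
    order = n.bit_length() - 1
    out = []
    for d in range(n * n):
        x, y = hilbert_d2xy(order, d)
        out.append(index_table[y, x])
    return out

table = np.arange(16).reshape(4, 4)      # toy 4x4 index table
print(hilbert_scan(table))               # indices in Hilbert order
```

Because consecutive positions on the curve are always spatially adjacent, neighboring (and often equal or similar) VQ indices tend to arrive consecutively, which is exactly what a locally adaptive coder rewards.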

2.
Down-scaling for better transform compression
The most popular lossy image compression method used on the Internet is the JPEG standard. JPEG's good compression performance and low computational and memory complexity make it an attractive method for natural image compression. Nevertheless, at the low bit rates that imply lower quality, JPEG introduces disturbing artifacts. It is known that, at low bit rates, a down-sampled image, when JPEG compressed, visually beats the high-resolution image compressed via JPEG to the same number of bits. Motivated by this observation, we show how down-sampling an image to a low resolution, applying JPEG at that resolution, and then interpolating the result back to the original resolution can improve the overall PSNR of the compression process. We give an analytical model and a numerical analysis of the down-sampling, compression, and up-sampling process that make the possible quality/compression trade-offs explicit. We show that the image auto-correlation provides a good estimate for the down-sampling factor that achieves optimal performance. Given a specific bit budget, we determine the down-sampling factor necessary to obtain the best possible recovered image in terms of PSNR.
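The down-sample / compress / interpolate pipeline is straightforward to reproduce. Here is a rough sketch using Pillow; the resampling filters, quality settings, and the test file name are assumptions for illustration, not the authors' analytical model:

```python
import io
import numpy as np
from PIL import Image

def psnr(a, b):
    mse = np.mean((np.asarray(a, float) - np.asarray(b, float)) ** 2)
    return 10 * np.log10(255.0 ** 2 / mse)

def jpeg_bytes(img, quality):
    buf = io.BytesIO()
    img.save(buf, format="JPEG", quality=quality)
    return buf.getvalue()

def downscaled_jpeg(img, factor, quality):
    """Down-sample, JPEG-compress at the lower resolution, then
    interpolate the result back to the original size."""
    small = img.resize((img.width // factor, img.height // factor),
                       Image.LANCZOS)
    data = jpeg_bytes(small, quality)
    restored = Image.open(io.BytesIO(data)).resize(img.size, Image.BICUBIC)
    return restored, len(data)

img = Image.open("test.png").convert("L")   # any grayscale test image
direct = jpeg_bytes(img, quality=10)        # low-bit-rate direct JPEG
recon, nbytes = downscaled_jpeg(img, factor=2, quality=50)
print("direct JPEG: %d bytes, PSNR %.2f dB"
      % (len(direct), psnr(img, Image.open(io.BytesIO(direct)))))
print("down-scaled: %d bytes, PSNR %.2f dB" % (nbytes, psnr(img, recon)))
```

At comparable byte counts, the down-scaled variant typically wins at very low rates, which is the trade-off the paper models analytically.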

3.
Reversible data hiding is a technique that not only protects the hidden secrets but also recovers the cover media without any distortion after the secret data have been extracted. In this paper, a new reversible data hiding technique for VQ indices, which are compressed streams, is introduced; it is based on a mapping function and histogram analysis of the transformed VQ indices, and it enhances the performance of earlier reversible data hiding schemes based on VQ indices. As a result, the proposed scheme achieves high embedding capacity and data compression simultaneously. Moreover, the original VQ-compressed image can be perfectly reconstructed after secret data extraction. To assess performance, a variety of test images is used in the experiments. As the results show, our scheme is superior to previous schemes in terms of compression rate and embedding rate while maintaining reversibility.

4.
A new method of digital image compression called binary tree overlapping (BTO) is described. With this method, an image is divided into bit planes that are transformed into a binary tree representation in which identical initial portions of different lines in a bit plane are transmitted as one path in the tree. To increase compression, distortion can be introduced by treating two lines that differ in fewer than a preset number of bit positions as identical. A time-efficient implementation of BTO based on a communication technique called content-induced transaction overlap is outlined. With this technique, only simple logical operations are needed, and the compression time is proportional to the number of bits in the compressed image. Simulation studies indicate that BTO produces good-quality images at a compression ratio of about three. In terms of implementation and compression efficiency, BTO is comparable to first-order DPCM.
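The prefix-sharing idea can be made concrete with a small sketch: split an image into bit planes and insert each line of a plane into a binary trie, so that identical initial portions of different lines occupy one shared path. The trie node count hints at the achievable compression; the near-match tolerance and the actual transmission format described above are omitted (Python with NumPy, my own structure):

```python
import numpy as np

def bit_planes(img):
    """Split an 8-bit grayscale image into 8 binary planes, MSB first."""
    return [(img >> b) & 1 for b in range(7, -1, -1)]

def trie_nodes(plane):
    """Insert each line of a bit plane into a binary trie and count the
    nodes created: lines sharing an initial run of bits share one path,
    which is the source of BTO's compression."""
    root, count = {}, 0
    for row in plane:
        node = root
        for bit in row:
            key = int(bit)
            if key not in node:
                node[key] = {}
                count += 1
            node = node[key]
    return count

x = np.arange(64, dtype=np.uint8)
img = np.add.outer(x, x)                     # smooth diagonal ramp
for i, plane in enumerate(bit_planes(img)):
    print("plane %d: %4d trie nodes vs %4d raw bits"
          % (i, trie_nodes(plane), plane.size))
```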

5.
A novel approach for near-lossless compression of Color Filter Array (CFA) data in a wireless endoscopy capsule is proposed in this paper. The compression method is based on pre-processing and vector quantization. First, the raw CFA data are low-pass filtered and rearranged during pre-processing. Then, pairs of pixels are vector quantized into 9-bit macros by applying block partitioning and index mapping in succession. Finally, these macros are entropy coded with the Joint Photographic Experts Group-Lossless standard (JPEG-LS). The complex codeword search of vector quantization (VQ) is avoided by a predefined partition rule, which suits hardware implementation. By controlling the pre-processor and the VQ scheme, either high-quality compression (unfiltered case) or high-ratio compression (filtered case) can be realized, with average Peak Signal-to-Noise Ratio (PSNR) above 43 dB and 37 dB, respectively. Compared with the state-of-the-art method and our previously proposed method, this approach is superior in both compression performance and flexibility.

6.
Integer wavelet transform for embedded lossy to lossless image compression
The use of the discrete wavelet transform (DWT) for embedded lossy image compression is now well established. One possible implementation of the DWT is the lifting scheme (LS). Because perfect reconstruction is guaranteed by the structure of the LS, nonlinear transforms can be used, allowing efficient lossless compression as well. The integer wavelet transform (IWT) is one of them. It is an interesting alternative to the DWT because its rate-distortion performance is similar and the differences can be predicted. This topic is investigated in a theoretical framework. A model of the degradation caused by using the IWT instead of the DWT for lossy compression is presented: the rounding operations are modeled as additive noise, and the noise is then propagated through the LS structure to measure its impact on the reconstructed pixels. This methodology is verified by simulations with random noise as input, and it accurately predicts the results obtained on images compressed by the well-known EZW algorithm. Experiments are also performed to measure the differences in bit rate and visual quality. This allows a better understanding of the impact of the IWT when applied to lossy image compression.
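A concrete instance of such a reversible transform is the CDF 5/3 IWT used in JPEG 2000. The sketch below shows one 1-D level with its predict and update lifting steps; the floor-rounding in each step is precisely the additive-noise source the paper models. Periodic boundary extension (via `np.roll`) is used here for brevity, where JPEG 2000 uses symmetric extension:

```python
import numpy as np

def iwt53_forward(x):
    """One level of the reversible 5/3 integer lifting transform on an
    even-length 1-D signal: predict (high-pass), then update (low-pass)."""
    s, d = x[0::2].astype(np.int64), x[1::2].astype(np.int64)
    d = d - ((s + np.roll(s, -1)) >> 1)      # predict with floor rounding
    s = s + ((d + np.roll(d, 1) + 2) >> 2)   # update with floor rounding
    return s, d

def iwt53_inverse(s, d):
    """Exactly undo the lifting steps in reverse order."""
    s = s - ((d + np.roll(d, 1) + 2) >> 2)
    d = d + ((s + np.roll(s, -1)) >> 1)
    x = np.empty(s.size + d.size, dtype=np.int64)
    x[0::2], x[1::2] = s, d
    return x

x = np.random.randint(0, 256, 16)
s, d = iwt53_forward(x)
assert np.array_equal(iwt53_inverse(s, d), x)   # perfect reconstruction
print("low-pass: ", s)
print("high-pass:", d)
```

Because every lifting step is an exact integer operation, the inverse recovers the input bit for bit, which is what makes lossless coding possible on top of the same structure.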

7.
Research on color image compression based on color difference
姚军财 《液晶与显示》2012,27(3):391-395
Based on the characteristics of the DCT-domain coefficients of an image, combined with the human eye's threshold for discriminating color differences, a color image compression scheme based on color difference is proposed. The scheme first converts the color image into the device-independent CIELAB color space and performs the DCT and color-difference computation; the coefficients are then selectively quantized according to the eye's color-difference discrimination threshold and encoded and compressed with the Huffman algorithm; decompression follows the inverse process. Simulation results show that the reconstructed decompressed color image is almost identical to the source image: the PSNR of the decompressed image of each luminance component exceeds 30 dB, and the human eye can hardly distinguish the difference. All five parameters used to assess coding quality reach good values, and while decompressed image quality is preserved, the compression ratio of the color image can reach 107.1678, fully meeting the requirements of color image compression. The results show that the proposed color-difference-based scheme is a feasible and effective color image compression technique.
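A rough sketch of the first stages of such a scheme follows, assuming scikit-image for the CIELAB conversion and SciPy for the blockwise DCT. The fixed threshold stands in for the paper's just-noticeable color-difference threshold, and the Huffman coding stage is omitted:

```python
import numpy as np
from scipy.fft import dctn, idctn
from skimage.color import rgb2lab, lab2rgb   # device-independent CIELAB

def compress_channel(chan, block=8, thresh=2.0):
    """Blockwise DCT; zero out coefficients below a visibility threshold.
    The paper derives the threshold from the human eye's color-difference
    discrimination limit instead of using a fixed value."""
    out = np.zeros_like(chan)
    h, w = chan.shape
    for i in range(0, h - h % block, block):
        for j in range(0, w - w % block, block):
            c = dctn(chan[i:i+block, j:j+block], norm="ortho")
            c[np.abs(c) < thresh] = 0.0
            out[i:i+block, j:j+block] = idctn(c, norm="ortho")
    return out

rgb = np.random.rand(64, 64, 3)              # placeholder for a real image
lab = rgb2lab(rgb)
for k in range(3):                           # L*, a*, b* channels
    lab[:, :, k] = compress_channel(lab[:, :, k])
recon = lab2rgb(lab)
print("max channel error:", float(np.abs(recon - rgb).max()))
```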

8.
Data compression and decompression are widely applied in modern communication and data transmission. However, decompressing corrupted losslessly compressed files is still a challenge. This paper presents an effective method for decompressing corrupted LZSS files, achieved by exploiting source prior information together with a heuristic method. We propose to use compression coding rules and grammar rules as the source prior information. Based on this prior information, we establish a mathematical model to detect error bits and estimate their rough range. For error correction, a heuristic method is developed to determine the exact positions of the error bits and correct them. The experimental results demonstrate that the proposed FTD method achieves a correction rate of 96.45% for corrupted LZSS files that are successfully decompressed. More importantly, the proposed method is a general model applicable to decompressing various types of losslessly compressed files whose original files are natural-language texts.
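A toy decoder illustrates, in much simplified form, how LZSS coding rules can expose corruption: a back-reference pointing before the start of the decoded output is impossible under the rules, so it localizes an error. The token layout here is invented for the sketch; the paper's grammar-based detection and heuristic correction are not reproduced:

```python
def lzss_decode(tokens):
    """Decode a toy LZSS stream of ('lit', byte) and
    ('ref', offset, length) tokens, flagging rule violations."""
    out = bytearray()
    for pos, tok in enumerate(tokens):
        if tok[0] == 'lit':
            out.append(tok[1])
        else:
            _, offset, length = tok
            if offset <= 0 or offset > len(out):
                # the kind of inconsistency used to detect corrupted bits
                raise ValueError("invalid back-reference at token %d" % pos)
            for _ in range(length):
                out.append(out[-offset])
    return bytes(out)

tokens = [('lit', ord('a')), ('lit', ord('b')), ('ref', 2, 4)]
print(lzss_decode(tokens))                      # b'ababab'
corrupted = [('lit', ord('a')), ('ref', 9, 3)]  # offset beyond the output
try:
    lzss_decode(corrupted)
except ValueError as e:
    print("detected:", e)
```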

9.
Achieving high embedding capacity and a low compression rate with a reversible data hiding method in the vector quantization (VQ) compressed domain is a technically challenging problem. This paper proposes a novel reversible steganographic scheme for VQ-compressed images based on a locally adaptive data compression method. The proposed method embeds n secret bits into each VQ index of the index table in Hilbert-curve scan order. The experimental results show that the proposed method achieves average embedding rates of 0.99, 1.68, 2.28, and 3.04 bits per index (bpi) and average compression rates of 0.45, 0.46, 0.5, and 0.56 bits per pixel (bpp) for n = 1, 2, 3, and 4, respectively. These results indicate that the proposed scheme is superior to Chang et al.'s scheme 1 [19], Yang and Lin's scheme [21], and Chang et al.'s scheme 2 [24].

10.
This correspondence discusses a progressive vector quantization (VQ) compression approach, which decomposes image data into a number of levels using full-search VQ. The final level is losslessly compressed, enabling lossless reconstruction. The computational difficulties are addressed by implementation on a massively parallel SIMD machine. We demonstrate progressive VQ on multispectral imagery obtained from the advanced very high resolution radiometer (AVHRR) and other earth-observation image data, and investigate the trade-offs in selecting the number of decomposition levels and the codebook training method.
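The multi-level idea can be sketched as residual quantization: each level quantizes the residual left by the previous one, and keeping the final residual losslessly makes the whole decomposition lossless. In this rough sketch, scikit-learn's KMeans stands in for the paper's full-search codebook training:

```python
import numpy as np
from sklearn.cluster import KMeans

def progressive_vq(blocks, levels=3, codebook_size=16):
    """Multi-stage residual VQ: stage i quantizes the residual of stage
    i-1; the last residual is stored losslessly."""
    residual = blocks.astype(float)
    stages, indices = [], []
    for _ in range(levels):
        km = KMeans(n_clusters=codebook_size, n_init=4).fit(residual)
        idx = km.predict(residual)
        stages.append(km.cluster_centers_)
        indices.append(idx)
        residual = residual - km.cluster_centers_[idx]
    return stages, indices, residual         # residual coded losslessly

rng = np.random.default_rng(0)
blocks = rng.normal(size=(500, 16))          # 4x4 blocks as 16-vectors
stages, indices, final_res = progressive_vq(blocks)
recon = sum(cb[i] for cb, i in zip(stages, indices)) + final_res
print("lossless:", np.allclose(recon, blocks))
```

Decoding any prefix of the levels gives a progressively better lossy image; adding the stored residual gives exact reconstruction.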

11.
黄玲 《信息技术》2004,28(6):1-3,43
The statistical properties of remote sensing images are analyzed, and a compression method combining vector quantization with the wavelet transform, suited to remote sensing imagery, is proposed. After the wavelet transform, the high-frequency sub-images are partitioned into blocks of a fixed size; blocks with strong local correlation and small gray-level variation are compressed at a high ratio, while blocks with weak local correlation and large gray-level variation are compressed with high fidelity. Experiments show that the method achieves good compression performance and is well suited to remote sensing image compression.

12.
Data compression and decompression are widely used in modern communications and data transmission, but decompressing corrupted lossless compressed files remains a challenge. For the LZSS lossless data compression algorithm widely used in general-purpose coding, this paper proposes an effective method that can repair bit errors and decompress corrupted LZSS files, and gives its theoretical basis. By exploiting the residual redundancy left by the encoder to carry check information, the method can repair errors in LZSS-compressed data without losing any compression performance. It requires no extra bits and changes neither the coding rules nor the data format, so it is fully compatible with the standard algorithm: data compressed by the error-resilient LZSS scheme can still be decompressed by a standard LZSS decoder. Experimental results verify the effectiveness and practicality of the proposed algorithm.

13.
Classified Vector Quantization of Images
Vector quantization (VQ) provides many attractive features for image coding at high compression ratios. However, initial studies of image coding with VQ revealed several difficulties, most notably edge degradation and high computational complexity. We address these two problems and propose a new coding method, classified vector quantization (CVQ), which is based on a composite source model. Blocks with distinct perceptual features, such as edges, are generated from different subsources, i.e., belong to different classes. In CVQ, a classifier determines the class of each block, and the block is then coded with a vector quantizer designed specifically for that class. CVQ obtains better perceptual quality with significantly lower complexity than ordinary VQ, and achieves visual quality comparable to that produced by existing coders of similar complexity at rates in the range 0.6-1.0 bits/pixel.
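The classify-then-quantize structure can be sketched as follows, assuming a crude intensity-range classifier with two classes and scikit-learn's KMeans for codebook design; the paper's perceptual edge classifier is more elaborate:

```python
import numpy as np
from sklearn.cluster import KMeans

def classify(vec, span=40.0):
    """Crude stand-in classifier: blocks with a large intensity range
    are treated as edge blocks, the rest as smooth blocks."""
    return int(vec.max() - vec.min() > span)

def cvq_train(vectors, sizes=(8, 32)):
    """One codebook per class; the edge class gets more codewords so
    edges survive quantization -- the central idea of CVQ."""
    labels = np.array([classify(v) for v in vectors])
    return {cls: KMeans(n_clusters=size, n_init=4)
                 .fit(vectors[labels == cls])
            for cls, size in enumerate(sizes)}

def cvq_encode(vectors, books):
    codes = []
    for v in vectors:
        cls = classify(v)
        codes.append((cls, int(books[cls].predict(v[None])[0])))
    return codes

rng = np.random.default_rng(1)
smooth = rng.normal(128, 5, size=(300, 16))          # low-contrast blocks
edges = (np.hstack([np.zeros((100, 8)), np.full((100, 8), 255.0)])
         + rng.normal(0, 3, size=(100, 16)))         # step-edge blocks
vectors = np.vstack([smooth, edges])
books = cvq_train(vectors)
print(cvq_encode(vectors[:3], books))   # (class, codeword index) pairs
```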

14.
For data-hiding communication, a multi-level JPEG-2000 wavelet decomposition is performed and adaptive quantization index modulation (AQIM) is introduced into the embedded block coding according to the LL-HH subband coefficients at different levels; combined with channel coding techniques, this effectively realizes information hiding and secure transmission of fingerprint node data. Experiments show that at low bit rates, compared with QIM-based hiding, the system noticeably improves the PSNR of the recovered fingerprint image while reducing the system BER, indicating that the recovered images have better perceptual quality and stronger robustness.
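Plain (non-adaptive) QIM is easy to sketch: the hidden bit selects one of two interleaved quantization lattices for a coefficient, and the decoder picks the nearer lattice. The adaptive step-size selection that turns this into AQIM, and the JPEG-2000 embedding context, are omitted (step size and values below are arbitrary):

```python
import numpy as np

def qim_embed(coeff, bit, step=8.0):
    """Quantize the coefficient onto the lattice selected by the bit;
    the two lattices are offset from each other by step/2."""
    offset = bit * step / 2.0
    return np.round((coeff - offset) / step) * step + offset

def qim_extract(coeff, step=8.0):
    """Decode by choosing the lattice closest to the received value."""
    d0 = abs(coeff - qim_embed(coeff, 0, step))
    d1 = abs(coeff - qim_embed(coeff, 1, step))
    return int(d1 < d0)

coeffs = [13.2, -7.9, 42.5, 0.4]      # e.g. wavelet subband coefficients
bits = [1, 0, 1, 1]
marked = [qim_embed(c, b) for c, b in zip(coeffs, bits)]
print("recovered:", [qim_extract(c) for c in marked])   # matches bits
```

AQIM scales `step` with local coefficient statistics, so strong subbands carry larger, more robust steps while weak ones stay imperceptibly distorted.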

15.
Given the bandwidth restrictions of airborne Synthetic Aperture Radar (SAR) human-in-the-loop applications, the acquired SAR images must be lossily compressed to resolve the conflict between image volume and response time. In this letter, a framework for SAR image compression is described. The SAR image is decomposed into two components, a structural and a textural one. A target-region mask is used to retain the important target information by allocating more bits to it during compression, while fewer bits are allocated to the background region. The results show that, at the same bit rate, images compressed with the proposed algorithm have better visual quality than those compressed with the JPEG2000 algorithm.

16.
Due to bandwidth and storage limitations, medical images must be compressed before transmission and storage. However, compression reduces image fidelity, especially at low bit rates: the reconstructed images suffer from blocking artifacts, and image quality is severely degraded under high compression ratios. In this paper, we present a strategy that increases the compression ratio with low computational burden and excellent decoded quality. We regard the discrete cosine transform as a bandpass filter that decomposes a sub-block into equal-sized bands. After a band-gathering operation, a strong similarity among the bands is found; exploiting this similarity greatly reduces the bit rate of compression. Meanwhile, the characteristics of the original image are not sacrificed, so misdiagnosis can be avoided. Simulations on different kinds of medical images demonstrate that the proposed method outperforms other existing transform coding schemes, such as JPEG, in terms of bit rate and quality. For angiogram images, the peak signal-to-noise ratio gain is 13.5 dB at the same bit rate of 0.15 bits per pixel compared with JPEG compression. For the other kinds of medical images the benefit is less pronounced, but the gain is still 4-8 dB at high compression ratios. Two doctors were invited to verify the decoded image quality; the diagnoses of all test images were correct when the compression ratios were below 20.
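The band-gathering operation can be sketched directly: coefficient (u, v) from every 8x8 DCT block is collected into one small "band image", and the similarity between gathered bands can then be measured. This rough sketch (SciPy's DCT, a smoothed random test image, magnitude correlation as the similarity measure) only illustrates the decomposition; the coding that exploits the similarity is omitted:

```python
import numpy as np
from scipy.fft import dctn
from scipy.ndimage import gaussian_filter

def dct_bands(img, block=8):
    """Gather coefficient (u, v) of every block x block DCT into one
    small band image of shape (H/block, W/block)."""
    h, w = (d - d % block for d in img.shape)
    bands = np.empty((block, block, h // block, w // block))
    for i in range(0, h, block):
        for j in range(0, w, block):
            c = dctn(img[i:i+block, j:j+block], norm="ortho")
            bands[:, :, i // block, j // block] = c
    return bands

img = gaussian_filter(np.random.default_rng(2).normal(size=(64, 64)), 4)
bands = dct_bands(img)
b1 = np.abs(bands[0, 1].ravel())     # first horizontal AC band
b2 = np.abs(bands[0, 2].ravel())     # next horizontal AC band
print("inter-band magnitude correlation: %.3f" % np.corrcoef(b1, b2)[0, 1])
```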

17.
Ultrasound imaging is widely used because it is non-invasive, low-cost, and real-time. Ultrasound systems require many acquisition channels and high sampling rates to improve reconstruction quality, which makes imaging time-consuming and the system complex. Compressed sensing (CS) algorithms can reconstruct the original signal from relatively few measurements under sub-Nyquist sampling. To address the high sampling rates and large data volumes these systems face, this paper applies the DWT-IRLS algorithm from CS theory to ultrasound imaging: the ultrasound data are sparsified with a discrete wavelet transform (DWT) basis, the high- and low-frequency coefficients are sampled and measured, the measured coefficients are reconstructed with iteratively reweighted least squares (IRLS), and finally the transform-domain coefficients are inverse-DWT transformed to obtain the reconstructed image. Experimental analysis shows that images reconstructed from 50% of the original data are already close to stable; in terms of mean squared error and peak signal-to-noise ratio, the DWT-IRLS algorithm yields higher imaging quality and clearer detail than reconstruction algorithms such as DWT-OMP, DWT-CoSaMP, and DCT-IRLS.
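The IRLS reconstruction stage can be sketched independently of the ultrasound specifics. Below is a minimal, approximate l1-minimization IRLS for an underdetermined system; the DWT sparsifying step and the actual measurement design are omitted, and the matrix sizes and smoothing constant are arbitrary choices for the demo:

```python
import numpy as np

def irls_sparse(A, y, iters=100, eps=1e-6):
    """Iteratively reweighted least squares for sparse recovery subject
    to A x = y: each iteration solves a weighted least-squares problem
    whose weights come from the previous estimate (weights ~ |x|
    approximate the l1 objective)."""
    x = np.linalg.lstsq(A, y, rcond=None)[0]   # minimum-energy start
    for _ in range(iters):
        u = np.abs(x) + eps                    # current weights
        Au = A * u                             # A @ diag(u)
        x = u * (A.T @ np.linalg.solve(Au @ A.T, y))
    return x

rng = np.random.default_rng(3)
n, m, k = 128, 48, 5            # signal length, measurements, sparsity
x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = rng.normal(size=k)
A = rng.normal(size=(m, n)) / np.sqrt(m)
x_hat = irls_sparse(A, A @ x_true)
print("recovery error: %.2e" % np.linalg.norm(x_hat - x_true))
```

In the paper's pipeline, x would be the wavelet-coefficient vector of the ultrasound data, so the reconstructed coefficients are inverse-DWT transformed to obtain the image.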

18.
A feature-correction two-stage vector quantization (FC2VQ) algorithm was previously developed to compress gray-scale photo identification (ID) pictures. In this work, the algorithm is extended to color images. Three options are compared, applying the FC2VQ algorithm in the RGB, YCbCr, and Karhunen-Loeve transform (KLT) color spaces, respectively. The RGB-FC2VQ algorithm is found to yield better image quality than KLT-FC2VQ or YCbCr-FC2VQ at similar bit rates. With the RGB-FC2VQ algorithm, a 128x128 24-bit color ID image (49,152 bytes) can be compressed down to about 500 bytes with satisfactory quality. When the codeword indices are further compressed losslessly with a first-order Huffman coder, the size is reduced to about 450 bytes.

19.
Multispectral image compression algorithm based on an improved K-L transform
The discrete wavelet transform (DWT) is combined with the Karhunen-Loeve transform (KLT) to concentrate the image energy into a few coefficients for better compression. First, a fast 9/7 2-D DWT is applied to each spectral band of the multispectral image to remove most of the spatial redundancy; then an improved KLT is applied to the wavelet coefficients of all bands to remove the spectral redundancy and the remaining spatial redundancy; finally, the resulting coefficients are entropy coded to obtain the compressed stream. Experimental results show that at bit rates from 0.25 to 2.0 bit/pixel the average signal-to-noise ratio (SNR) exceeds 41 dB while the computation time is reduced, improving the performance of the multispectral image compression algorithm.
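The spectral KLT stage can be sketched on its own (the per-band 9/7 DWT and the paper's improved KLT variant are omitted): each pixel's spectral vector is projected onto the eigenvectors of the spectral covariance, concentrating energy into the first few eigen-images:

```python
import numpy as np

def klt_bands(cube):
    """Karhunen-Loeve transform across the spectral axis of an
    (H, W, bands) cube; returns transformed cube, basis, and mean."""
    h, w, b = cube.shape
    X = cube.reshape(-1, b)
    mean = X.mean(axis=0)
    cov = np.cov(X - mean, rowvar=False)
    vals, vecs = np.linalg.eigh(cov)
    order = np.argsort(vals)[::-1]           # strongest component first
    Y = (X - mean) @ vecs[:, order]
    return Y.reshape(h, w, b), vecs[:, order], mean

rng = np.random.default_rng(4)
base = rng.normal(size=(32, 32, 1))                 # shared structure
cube = base + 0.05 * rng.normal(size=(32, 32, 6))   # 6 correlated bands
Y, basis, mean = klt_bands(cube)
energy = (Y.reshape(-1, 6) ** 2).sum(axis=0)
print("energy share per eigen-band:", np.round(energy / energy.sum(), 3))
```

Nearly all the energy lands in the first eigen-band, so the later eigen-images can be coded with very few bits, which is where the spectral redundancy removal comes from.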

20.
汪洋  卢焕章 《信号处理》2006,22(5):630-634
This paper proposes a compression method for infrared images containing small targets, based on a classified vector quantizer. The image is first divided into a background region and a region of interest using two features of each sub-block, the mean gray level and the texture energy; codebooks are then designed separately for the two classes of blocks, with relatively many codewords describing the region of interest and relatively few describing the background. This achieves a high compression ratio while preserving the region-of-interest information well, and the encoding computation drops substantially. The paper gives a theoretical analysis of how the classified vector quantizer reduces the encoding computation. Experimental results show that, for the same codebook size, the algorithm preserves the small-target information in infrared images better than direct vector quantization and speeds up encoding.
