Similar Documents
20 similar documents found.
1.
Error-resilient pyramid vector quantization for image compression
Pyramid vector quantization (PVQ) uses the lattice points of a pyramidal shape in multidimensional space as the quantizer codebook. It is a fixed-rate quantization technique that can be used for the compression of Laplacian-like sources arising from transform and subband image coding, where its performance approaches that of the optimal entropy-coded scalar quantizer without requiring variable-length codes. In this paper, we investigate the use of PVQ for compressed image transmission over noisy channels, where the fixed-rate quantization reduces the susceptibility to bit-error corruption. We propose a new method of deriving the indices of the lattice points of the multidimensional pyramid and describe how these techniques can also improve the channel noise immunity of general symmetric lattice quantizers. Our new indexing scheme improves channel robustness by up to 3 dB over previous indexing methods, and can be performed at similar computational cost. The final fixed-rate coding algorithm surpasses the performance of typical Joint Photographic Experts Group (JPEG) implementations and exhibits much greater error resilience.
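As a rough illustration of the generic PVQ idea described above (not the paper's error-resilient indexing scheme), the sketch below enumerates the pyramid codebook size via the standard recursion and projects a vector onto the pyramid with a simple greedy correction; the function names and the correction heuristic are assumptions.

```python
import numpy as np
from functools import lru_cache

@lru_cache(maxsize=None)
def pvq_size(dim: int, k: int) -> int:
    """Number of integer points y with len(y) == dim and sum(|y_i|) == k
    (the pyramid codebook size N(L, K))."""
    if k == 0:
        return 1
    if dim == 0:
        return 0
    return pvq_size(dim - 1, k) + pvq_size(dim, k - 1) + pvq_size(dim - 1, k - 1)

def pvq_quantize(x: np.ndarray, k: int) -> np.ndarray:
    """Heuristically map x to a pyramid point with sum(|y_i|) == k:
    scale to L1 norm k, round, then greedily fix the norm."""
    s = float(np.sum(np.abs(x)))
    y = np.zeros(x.shape, dtype=int)
    if s == 0.0:
        y[0] = k
        return y
    scaled = x * (k / s)
    y = np.rint(scaled).astype(int)
    sgn = np.where(x >= 0, 1, -1)
    while True:
        delta = int(np.sum(np.abs(y))) - k
        if delta == 0:
            return y
        resid = (scaled - y) * sgn            # undershoot along the sign of x
        if delta < 0:                         # norm too small: grow the most undershot component
            i = int(np.argmax(resid))
            y[i] += sgn[i]
        else:                                 # norm too large: shrink the most overshot nonzero component
            masked = np.where(np.abs(y) > 0, resid, np.inf)
            i = int(np.argmin(masked))
            y[i] -= sgn[i]

# Example: a 4-dimensional pyramid with K = 5 has pvq_size(4, 5) = 360 code vectors,
# so every quantized vector can be indexed with a fixed ceil(log2(360)) = 9 bits.
```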

2.
A real-time, low-power video encoder design for pyramid vector quantization (PVQ) is presented. The quantizer is estimated to dissipate only 2.1 mW for real-time video compression of 256 × 256-pixel images at 30 frames per second in standard 0.8-micron CMOS technology with a 1.5 V supply. Applied to subband-decomposed images, the quantizer performs better than JPEG on average. We achieve this high level of power efficiency, with image quality exceeding that of variable-rate codes, through algorithmic and architectural reformulation. The PVQ encoder is well suited for wireless, portable communication applications.

3.
To improve image compression ratio and quality, this paper proposes a method for constructing an adaptive quantization table that combines the contrast-sensitivity characteristics of the human visual system with the spectral features of the image in the transform domain. The adaptive table replaces the quantization table in JPEG, and compression experiments following the JPEG coding algorithm are carried out on three different color images, with standard JPEG compression as the baseline. The results show that, at the same compression ratio, the SSIM and PSNR of the three decompressed color images increase by 1.67% and 4.96% on average, respectively, indicating that the proposed adaptive quantization based on human visual characteristics is an effective and practical quantization method.
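One simple way to realize such an HVS-driven table, sketched here under assumed viewing parameters (the paper's exact construction is not reproduced), is to rescale the standard JPEG luminance table by a contrast-sensitivity weighting of each DCT frequency:

```python
import numpy as np

# Standard JPEG luminance quantization table (ITU-T T.81 Annex K), used as the baseline.
JPEG_LUMA_Q = np.array([
    [16, 11, 10, 16,  24,  40,  51,  61],
    [12, 12, 14, 19,  26,  58,  60,  55],
    [14, 13, 16, 24,  40,  57,  69,  56],
    [14, 17, 22, 29,  51,  87,  80,  62],
    [18, 22, 37, 56,  68, 109, 103,  77],
    [24, 35, 55, 64,  81, 104, 113,  92],
    [49, 64, 78, 87, 103, 121, 120, 101],
    [72, 92, 95, 98, 112, 100, 103,  99]])

def csf(f_cpd: np.ndarray) -> np.ndarray:
    """Mannos-Sakrison contrast sensitivity function, f in cycles/degree."""
    return 2.6 * (0.0192 + 0.114 * f_cpd) * np.exp(-(0.114 * f_cpd) ** 1.1)

def adaptive_table(pixels_per_degree: float = 32.0, strength: float = 1.0) -> np.ndarray:
    """Rescale the baseline table so frequencies the eye is less sensitive to are
    quantized more coarsely. `pixels_per_degree` encodes assumed viewing conditions;
    `strength` blends between the baseline (0) and full CSF weighting (1)."""
    u, v = np.meshgrid(np.arange(8), np.arange(8), indexing="ij")
    f = np.sqrt(u ** 2 + v ** 2) / 16.0 * pixels_per_degree  # radial frequency of DCT basis (u, v)
    sens = csf(np.maximum(f, 8.0))       # treat the CSF as low-pass: clamp below its ~8 cpd peak
    weight = sens / sens.max()           # 1 at the most visible frequencies
    table = JPEG_LUMA_Q / (weight ** strength)
    return np.clip(np.round(table), 1, 255).astype(int)
```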

4.
The JPEG image compression standard is very sensitive to errors. Even though it contains error resilience features, it cannot easily cope with errors induced by the computer soft faults prevalent in remote-sensing applications. Hence, new fault tolerance detection methods are developed to sense soft errors in major parts of the system while also protecting data across the boundaries where data flow from one subsystem to another. The design goal is to guarantee that no compressed or decompressed data contain undetected computer-induced errors. Detection methods are expressed at the algorithm level so that a wide range of hardware and software implementation techniques can be covered by the fault tolerance procedures while still maintaining the JPEG output format. The major subsystems addressed are the discrete cosine transform (DCT), quantizer, entropy coding, and packet assembly. Each error detection method is determined by the data representations within the subsystem or across the boundaries; they range from real-number parities in the DCT to bit-level residue codes in the quantizer and cyclic redundancy check parities for entropy coding and packet assembly. The simulation results verify detection performance even across boundaries, and also examine the effects of roundoff noise on detecting computer-induced errors in the processing steps.

5.
黄方军, 万晨. 《信号处理》, 2021, 37(12): 2251-2260.
JPEG (Joint Photographic Experts Group) is currently the most widely used image format on the Internet. Known cases show that many tampering operations are performed on JPEG images: the typical workflow is to decompress the JPEG file, tamper with it in the spatial domain, and then recompress and save the modified picture as JPEG, so the tampered image may undergo two or even more rounds of JPEG compression. Detection of JPEG recompression can therefore serve as important evidence for judging whether an image has been tampered with, and the analysis and forensics of JPEG images are of great significance. This paper reviews recent work on JPEG recompression detection from two perspectives, namely whether the quantization table stays the same or differs between compressions, and introduces representative methods in this area. Finally, we analyze open problems in JPEG recompression detection and discuss future research directions.

6.
To maximize rate distortion performance while remaining faithful to the JPEG syntax, the joint optimization of the Huffman tables, quantization step sizes, and DCT indices of a JPEG encoder is investigated. Given Huffman tables and quantization step sizes, an efficient graph-based algorithm is first proposed to find the optimal DCT indices in the form of run-size pairs. Based on this graph-based algorithm, an iterative algorithm is then presented to jointly optimize run-length coding, Huffman coding, and quantization table selection. The proposed iterative algorithm not only results in a compressed bitstream completely compatible with existing JPEG and MPEG decoders, but is also computationally efficient. Furthermore, when tested over standard test images, it achieves the best JPEG compression results, to the extent that its own JPEG compression performance even exceeds the quoted PSNR results of some state-of-the-art wavelet-based image coders such as Shapiro's embedded zerotree wavelet algorithm at the common bit rates under comparison. Both the graph-based algorithm and the iterative algorithm can be applied to application areas such as web image acceleration, digital camera image compression, MPEG frame optimization, and transcoding.

7.
In this paper, we introduce a novel technique for adaptive scalar quantization. Adaptivity is useful in applications, including image compression, where the statistics of the source are either not known a priori or will change over time. Our algorithm uses previously quantized samples to estimate the distribution of the source, and does not require that side information be sent in order to adapt to changing source statistics. Our quantization scheme is thus backward adaptive. We propose that an adaptive quantizer can be separated into two building blocks, namely, model estimation and quantizer design. The model estimation produces an estimate of the changing source probability density function, which is then used to redesign the quantizer using standard techniques. We introduce nonparametric estimation techniques that only assume smoothness of the input distribution. We discuss the various sources of error in our estimation and argue that, for a wide class of sources with a smooth probability density function (pdf), we provide a good approximation to a "universal" quantizer, with the approximation becoming better as the rate increases. We study the performance of our scheme and show how the loss due to adaptivity is minimal in typical scenarios. In particular, we provide examples and show how our technique can achieve signal-to-noise ratios within 0.05 dB of the optimal Lloyd-Max quantizer for a memoryless source, while achieving over 1.5 dB gain over a fixed quantizer for a bimodal source.
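The split into model estimation and quantizer design can be illustrated with a plain Lloyd-Max redesign driven by an estimated sample set; this is a generic sketch with assumed names, not the authors' nonparametric estimator.

```python
import numpy as np

def lloyd_max(samples: np.ndarray, levels: int, iters: int = 50):
    """Classic Lloyd-Max design: alternate nearest-boundary partitioning with
    centroid updates, driven by an empirical sample set that stands in for the
    estimated source pdf."""
    reps = np.quantile(samples, np.linspace(0.02, 0.98, levels))   # initial codebook
    for _ in range(iters):
        bounds = (reps[:-1] + reps[1:]) / 2            # decision boundaries = cell midpoints
        cells = np.digitize(samples, bounds)           # assign each sample to a cell
        for j in range(levels):
            members = samples[cells == j]
            if members.size:
                reps[j] = members.mean()               # centroid (conditional mean) update
    return reps, (reps[:-1] + reps[1:]) / 2

def quantize(x: np.ndarray, reps: np.ndarray, bounds: np.ndarray) -> np.ndarray:
    """Map each input to its representative level."""
    return reps[np.digitize(x, bounds)]

# Backward adaptation: encoder and decoder both keep a buffer of past decoded
# values, smooth its histogram to estimate the pdf (the paper uses a nonparametric
# estimate), and periodically re-run the design step -- no side information is sent.
```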

8.
Image and video compression has become an increasingly important and active area, and many techniques have been developed. Any compression technique can be modeled as a three-stage process. The first stage is a signal processing stage in which an image or video signal is converted into a different domain; usually there is little or no loss of information here. The second stage is quantization, where the loss of information occurs. The third stage is lossless coding, which generates the compressed bit stream. The purpose of the signal processing stage is to convert the image or video signal into a form in which quantization achieves better performance than it would without this stage. Because the quantization stage is where most of the compression is achieved and where information is lost, it is naturally the central stage of any compression technique. Since either scalar quantization or vector quantization may be used in the second stage, the operation in the first stage should be correspondingly scalar-based or vector-based so that the two stages match and the compression performance can be optimized. In this paper, we summarize the most recent research results on vector-based signal processing and quantization techniques that have shown high compression performance.

9.
JPEG images compressed at low and medium bit rates, i.e., at high compression ratios, suffer from severe blocking artifacts and quantization noise. To address this, a reconstruction-and-resampling method for optimizing standard JPEG-compressed images is proposed. The method first denoises the JPEG-compressed image with the block-matching and 3-D filtering algorithm (BM3D) to remove blocking artifacts and quantization noise, which improves the mapping accuracy of the subsequent super-resolution reconstruction. It then performs sparse-representation-based super-resolution reconstruction on the denoised image using an external dictionary, restoring some high-frequency information, and finally applies bicubic down-sampling to the reconstructed high-resolution image to obtain an optimized image of the same size as the original. Experimental results show that the method effectively improves the quality of JPEG-compressed images at low and medium bit rates, and also brings some improvement for images compressed at high bit rates.

10.
This paper proposes an image reconstruction method based on JPEG sequences. Building on existing single-frame image restoration techniques and following the idea of super-resolution reconstruction, the method combines projection onto convex sets (POCS) theory with the iterative back-projection (IBP) algorithm to reduce quantization error in the frequency domain and restore the discrete cosine coefficients. It also exploits maximum a posteriori (MAP) estimation and the characteristics of the corresponding optimization algorithm to remove Gaussian noise while preserving edges and details. Experimental results show that the method suppresses the blocking and ringing artifacts caused by high-ratio compression while recovering image details well, effectively improving image sharpness.

11.
A new method for estimating the original quantization step size of double-compressed JPEG images
This paper proposes a method for estimating the original quantization step size of double-compressed JPEG images. Three cases are considered according to the relation between the two quantization step sizes. When the original step size is larger than the second step size, a new method that computes the estimate directly from the coefficient histogram is proposed. To resolve the ambiguity that arises when the original step size is a factor of the second step size, as well as the multi-valued solutions in the Fourier spectrum analysis, an approximation of the uncompressed image obtained by rescaling with a factor of 0.98 is used. The method can also provide an estimate when the second step size is a multiple of the first, and it reuses the results of the spectrum analysis to reduce computational complexity. Experimental results show that the method achieves high estimation accuracy.
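For intuition only, the toy estimator below scores candidate first-pass steps by the periodic comb that double quantization leaves in the histogram of one DCT frequency's de-quantized coefficients; the paper's three-case analysis and Fourier refinement are not reproduced, and all names here are assumptions.

```python
import numpy as np

def estimate_first_step(dequantized: np.ndarray, max_q1: int = 64) -> int:
    """dequantized: de-quantized DCT coefficients of one frequency (i.e. already
    multiplied back by the second-pass step q2). Double quantization concentrates
    mass near multiples of the first-pass step q1, so score each candidate q1 by
    the contrast between its comb of histogram bins and the remaining bins."""
    vals = np.abs(np.asarray(dequantized, dtype=int))
    vals = vals[vals > 0]                               # the zero bin carries no period information
    hist = np.bincount(vals, minlength=8 * max_q1).astype(float)
    best_q1, best_score = 1, -np.inf
    for q1 in range(2, max_q1 + 1):
        comb_idx = np.arange(q1, hist.size, q1)         # bins at multiples of q1
        on = hist[comb_idx].mean()                      # average mass on the comb
        off = np.delete(hist[1:], comb_idx - 1).mean()  # average mass everywhere else
        if on - off > best_score:
            best_q1, best_score = q1, on - off
    return best_q1
```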

12.
Down-scaling for better transform compression
The most popular lossy image compression method used on the Internet is the JPEG standard. JPEG's good compression performance and low computational and memory complexity make it an attractive method for natural image compression. Nevertheless, as we go to low bit rates that imply lower quality, JPEG introduces disturbing artifacts. It is known that, at low bit rates, a down-sampled image, when JPEG compressed, visually beats the high resolution image compressed via JPEG to be represented by the same number of bits. Motivated by this idea, we show how down-sampling an image to a low resolution, then using JPEG at the lower resolution, and subsequently interpolating the result to the original resolution can improve the overall PSNR performance of the compression process. We give an analytical model and a numerical analysis of the down-sampling, compression and up-sampling process, that makes explicit the possible quality/compression trade-offs. We show that the image auto-correlation can provide a good estimate for establishing the down-sampling factor that achieves optimal performance. Given a specific budget of bits, we determine the down-sampling factor necessary to get the best possible recovered image in terms of PSNR.
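A minimal sketch of this pipeline with Pillow is shown below; the fixed `factor` and `quality` stand in for the paper's auto-correlation-based choice of down-sampling factor and its bit-budget analysis.

```python
from PIL import Image

def downscale_then_jpeg(path_in: str, path_out: str,
                        factor: float = 0.5, quality: int = 30) -> Image.Image:
    """Encode at reduced resolution, then interpolate back to the original size.
    At low bit rates this often gives a higher PSNR than full-resolution JPEG
    at the same file size."""
    img = Image.open(path_in).convert("RGB")
    small = img.resize((max(1, round(img.width * factor)),
                        max(1, round(img.height * factor))), Image.LANCZOS)
    small.save(path_out, format="JPEG", quality=quality)    # this file is the actual bit budget
    decoded = Image.open(path_out)
    return decoded.resize(img.size, Image.BICUBIC)           # up-sample for viewing / PSNR measurement
```

Sweeping `factor` and `quality` under a fixed file-size budget and measuring PSNR against the original reproduces, empirically, the quality/compression trade-off the paper models analytically.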

13.
A fast DCT compression method for remote-sensing images with adaptive layered quantization
Based on the spectral characteristics of remote-sensing images, a fast-DCT image compression algorithm with adaptive layered quantization is proposed. After applying a fast DCT to the original image, the JPEG quantization table is adaptively modified according to the spectral characteristics of the image, and the new table is then used to quantize the DCT coefficients layer by layer. Compression experiments on real remote-sensing images show that, at the same compression ratio, the proposed method is faster than standard JPEG, improves PSNR by 1-2 dB, and supports embedded-bitstream image compression.

14.
In the financial domain there are more and more electronically scanned cash (banknote) images, which need to be compressed at high compression ratios. This paper proposes a compression algorithm specific to cash images. First, based on the characteristics of cash images and standard JPEG (Joint Photographic Experts Group) compression, image re-ordering techniques are analyzed and selected blocks are replaced with single-color blocks. A quantization table suited to cash images is then derived. Experimental results show that the method is effective.

15.
Classification and compression play important roles in communicating digital information. Their combination is useful in many applications, including the detection of abnormalities in compressed medical images. In view of the similarities of compression and low-level classification, it is not surprising that there are many similar methods for their design. Because some of these methods are useful for designing vector quantizers, it seems natural that vector quantization (VQ) is explored for the combined goal. We investigate several VQ-based algorithms that seek to minimize both the distortion of compressed images and errors in classifying their pixel blocks. These algorithms are investigated with both full search and tree-structured codes. We emphasize a nonparametric technique that minimizes both error measures simultaneously by incorporating a Bayes risk component into the distortion measure used for the design and encoding. We introduce a tree-structured posterior estimator to produce the class posterior probabilities required for the Bayes risk computation in this design. For two different image sources, we demonstrate that this system provides superior classification while maintaining compression close or superior to that of several other VQ-based designs, including Kohonen's (1992) "learning vector quantizer" and a sequential quantizer/classifier design.

16.
High dynamic range (HDR) images require more bits per color channel than traditional images, which poses problems for storage and transmission. Color-space quantization has been extensively studied to obtain bit encodings for each pixel, but it still yields prohibitively large files. This paper explores the possibility of further compressing HDR images that have been quantized in color space. The compression schemes presented in this paper extend existing lossless image compression standards to encode HDR images. They separate HDR images in their bit-encoding formats into images in the grayscale or RGB domain, which can be compressed directly by existing lossless compression standards such as JPEG, JPEG 2000 and JPEG-LS. The efficacy of the compression schemes is illustrated by extensive results from encoding a series of synthetic and natural HDR images. Significant bit savings of up to 53% are observed compared with the original HDR formats and the HD Photo compressed versions. This benefits the storage and transmission of HDR images.

17.
Adaptive threshold modulation for error diffusion halftoning
Grayscale digital image halftoning quantizes each pixel to one bit. In error diffusion halftoning, the quantization error at each pixel is filtered and fed back to the input in order to diffuse the quantization error among the neighboring grayscale pixels. Error diffusion introduces nonlinear distortion (directional artifacts), linear distortion (sharpening), and additive noise. Threshold modulation, which alters the quantizer input, has been previously used to reduce either directional artifacts or linear distortion. This paper presents an adaptive threshold modulation framework to improve halftone quality by optimizing error diffusion parameters in the least squares sense. The framework models the quantizer implicitly, so a wide variety of quantizers may be used. Based on the framework, we derive adaptive algorithms to optimize 1) edge enhancement halftoning and 2) green noise halftoning. In edge enhancement halftoning, we minimize linear distortion by controlling the sharpening control parameter. We may also break up directional artifacts by replacing the thresholding quantizer with a deterministic bit flipping (DBF) quantizer. For green noise halftoning, we optimize the hysteresis coefficients.
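The mechanism of threshold modulation can be seen in a plain Floyd-Steinberg loop, where a term proportional to the input shifts the threshold and thereby controls sharpening; this is a generic illustration with an assumed `sharpness` parameter, not the paper's least-squares adaptation.

```python
import numpy as np

def error_diffusion(gray: np.ndarray, sharpness: float = 0.0) -> np.ndarray:
    """Floyd-Steinberg error diffusion with a simple threshold-modulation term:
    the threshold is shifted by an amount proportional to the input, which
    controls edge sharpening (sharpness = 0 gives plain Floyd-Steinberg;
    negative values counteract the sharpening error diffusion itself introduces)."""
    img = gray.astype(float) / 255.0
    h, w = img.shape
    out = np.zeros((h, w))
    work = img.copy()                        # running image with diffused error added
    for y in range(h):
        for x in range(w):
            u = work[y, x]
            threshold = 0.5 - sharpness * (img[y, x] - 0.5)   # input-dependent threshold
            out[y, x] = 1.0 if u >= threshold else 0.0
            err = u - out[y, x]
            # Distribute the quantization error to unprocessed neighbours (Floyd-Steinberg weights).
            if x + 1 < w:
                work[y, x + 1] += err * 7 / 16
            if y + 1 < h:
                if x > 0:
                    work[y + 1, x - 1] += err * 3 / 16
                work[y + 1, x] += err * 5 / 16
                if x + 1 < w:
                    work[y + 1, x + 1] += err * 1 / 16
    return (out * 255).astype(np.uint8)
```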

18.
In this paper, the effects of quantization noise feedback on the entropy of Laplacian pyramids are investigated. This technique makes it possible for the maximum absolute reconstruction error to be easily and strongly upper-bounded (near-lossless coding), and therefore allows reversible compression. The entropy-minimizing optimum quantizer is obtained by modeling the first-order distributions of the differential signals as Laplacian densities, and by deriving a model for the equivalent memoryless entropy. A novel approach, based on an enhanced Laplacian pyramid, is proposed for the compression, either lossless or lossy, of gray-scale images. Major details are prioritized through a content-driven decision rule embedded in a uniform threshold quantizer with noise feedback. Lossless coding shows improvements over reversible Joint Photographic Experts Group (JPEG) coding and the reduced-difference pyramid schemes, while lossy coding outperforms JPEG, with a significant peak signal-to-noise ratio (PSNR) gain. Also, subjective quality is higher even at very low bit rates, due to the absence of the annoying impairments typical of JPEG. Moreover, image versions having resolution and SNR that are both progressively increasing are made available at the receiving end from the earliest retrieval stage on, as intermediate steps of the decoding procedure, without any additional cost.
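The effect of quantization noise feedback can be sketched with a closed-loop pyramid in which each detail band is formed against the up-sampled quantized reconstruction of the coarser level; OpenCV's pyrDown/pyrUp stand in for the paper's filters, and a plain uniform quantizer stands in for its content-driven threshold quantizer.

```python
import numpy as np
import cv2

def closed_loop_pyramid(img: np.ndarray, levels: int, step: float):
    """Laplacian pyramid with quantization noise feedback: each detail band is the
    difference between the current level and the up-sampled *quantized* reconstruction
    of the next-coarser level, so quantization errors do not accumulate across levels.
    Assumes image sides are divisible by 2**levels."""
    gaussians = [img.astype(np.float32)]
    for _ in range(levels):
        gaussians.append(cv2.pyrDown(gaussians[-1]))
    recon = np.round(gaussians[-1] / step) * step            # coarsest band, quantized directly
    bands = [recon]
    for level in range(levels - 1, -1, -1):
        predicted = cv2.pyrUp(recon, dstsize=(gaussians[level].shape[1],
                                              gaussians[level].shape[0]))
        detail = gaussians[level] - predicted                # residual against the quantized prediction
        q_detail = np.round(detail / step) * step            # uniform quantizer with step `step`
        bands.append(q_detail)
        recon = predicted + q_detail                         # feedback: reused for the next finer level
    # The final reconstruction error per pixel is bounded by step/2 (near-lossless).
    return bands, recon
```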

19.
Research on a still-image compression algorithm optimized for structural information
JPEG2000 is a new-generation, wavelet-based still-image compression standard that offers many advantages over previous standards. However, JPEG2000 uses MSE as its image distortion measure, and MSE correlates poorly with subjective human ratings, which greatly limits JPEG2000's compression performance. Within the JPEG2000 framework, this paper proposes a still-image compression algorithm (SJPEG2000) that uses structural similarity as the distortion measure. The algorithm truncates the codestream according to each coefficient's contribution to the structural information of the image, so that the compressed image preserves as much of the original structural information as possible. Experimental results show that images compressed with this algorithm retain structural information well, their subjective quality is improved, and their structural similarity values are also somewhat higher than with the original JPEG2000.
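The core substitution, ranking candidate truncations by structural similarity rather than MSE, can be expressed with scikit-image's SSIM; the `rank_truncations` helper below is a hypothetical sketch, not part of any JPEG2000 tooling.

```python
import numpy as np
from skimage.metrics import structural_similarity

def rank_truncations(reference: np.ndarray, candidates: list) -> list:
    """Order candidate decodings (e.g. decodings at different codestream
    truncation points) by SSIM against the reference instead of by MSE/PSNR."""
    scores = [structural_similarity(reference, c, data_range=255) for c in candidates]
    return sorted(range(len(candidates)), key=lambda i: scores[i], reverse=True)
```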

20.
Due to the wide diffusion of the JPEG coding standard, the image forensic community has devoted significant attention over the years to the development of double JPEG (DJPEG) compression detectors. The ability to detect whether an image has been compressed twice provides paramount information toward image authenticity assessment. Given the traction recently gained by convolutional neural networks (CNNs) in many computer vision tasks, in this paper we propose to use CNNs for aligned and non-aligned double JPEG compression detection. In particular, we explore the capability of CNNs to capture DJPEG artifacts directly from images. Results show that the proposed CNN-based detectors achieve good performance even with small images (i.e., 64 × 64), outperforming state-of-the-art solutions, especially in the non-aligned case. Besides, good results are also achieved in the commonly recognized challenging case in which the first quality factor is larger than the second one.
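A minimal PyTorch classifier of the kind described, taking 64 × 64 patches and predicting single versus double compression, might look as follows; the layer sizes and names are assumptions, not the architecture from the paper.

```python
import torch
import torch.nn as nn

class DJPEGNet(nn.Module):
    """Small CNN for binary single- vs. double-JPEG classification of 64x64 patches."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),    # 64 -> 32
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),   # 32 -> 16
            nn.Conv2d(64, 128, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),  # 16 -> 8
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(128 * 8 * 8, 256), nn.ReLU(), nn.Dropout(0.5),
            nn.Linear(256, 2),                     # logits: single vs. double compression
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, 1, 64, 64) luminance patches fed directly to the network,
        # letting the convolutional layers pick up DJPEG artifacts from the pixels.
        return self.classifier(self.features(x))
```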
