Similar Documents (20 results)
1.
To maximize rate-distortion performance while remaining faithful to the JPEG syntax, the joint optimization of the Huffman tables, quantization step sizes, and DCT indices of a JPEG encoder is investigated. Given Huffman tables and quantization step sizes, an efficient graph-based algorithm is first proposed to find the optimal DCT indices in the form of run-size pairs. Based on this graph-based algorithm, an iterative algorithm is then presented to jointly optimize run-length coding, Huffman coding, and quantization table selection. The proposed iterative algorithm not only results in a compressed bitstream completely compatible with existing JPEG and MPEG decoders, but is also computationally efficient. Furthermore, when tested over standard test images, it achieves the best JPEG compression results, to the extent that its JPEG compression performance even exceeds the quoted PSNR results of some state-of-the-art wavelet-based image coders, such as Shapiro's embedded zerotree wavelet algorithm, at the common bit rates under comparison. Both the graph-based algorithm and the iterative algorithm can be applied to areas such as web image acceleration, digital camera image compression, MPEG frame optimization, and transcoding.
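The run-size pair representation searched by the graph-based algorithm can be illustrated with a minimal sketch (JPEG's ZRL code for zero runs longer than 15 is omitted for brevity):

```python
# Convert a zigzag-ordered list of quantized AC coefficients into JPEG-style
# (run, size) pairs: "run" counts the zeros preceding a nonzero coefficient,
# "size" is the bit length of its magnitude. (0, 0, 0) marks end-of-block.
def run_size_pairs(ac_coeffs):
    pairs = []
    run = 0
    for c in ac_coeffs:
        if c == 0:
            run += 1
        else:
            pairs.append((run, abs(c).bit_length(), c))
            run = 0
    if run > 0:
        pairs.append((0, 0, 0))  # EOB: trailing zeros collapse to one symbol
    return pairs
```

The optimization in the abstract chooses among such sequences (and the indices they encode) to minimize a rate-distortion cost.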

2.
Fast DCT compression of remote-sensing images with adaptive layered quantization
Based on the spectral characteristics of remote-sensing images, a fast DCT compression algorithm with adaptive layered quantization is proposed. After a fast DCT of the original image, the JPEG quantization table is adaptively modified according to the image's spectral characteristics, and the new table is then used to quantize the DCT coefficients layer by layer. Compression experiments on real remote-sensing images show that, at the same compression ratio, the proposed method is faster than standard JPEG, raises PSNR by 1-2 dB, and supports embedded-bitstream compression.
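A minimal sketch of the table-adaptation step (the scaling rule below is an illustrative assumption, not the paper's exact rule derived from remote-sensing spectra):

```python
import numpy as np

# Scale each entry of a base JPEG quantization table according to how much
# spectral energy the image carries at that DCT frequency: bands with little
# energy tolerate coarser quantization steps.
def adapt_table(base_table, band_energy, strength=0.5):
    e = band_energy / band_energy.max()          # normalized energy in [0, 1]
    scale = 1.0 + strength * (1.0 - e)           # low energy -> larger step
    return np.clip(np.round(base_table * scale), 1, 255).astype(int)
```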

3.
An efficient algorithm for optimizing DCT quantization
We describe the RD-OPT algorithm for DCT quantization optimization, which can be used as an efficient tool for near-optimal rate control in DCT-based compression techniques, such as JPEG and MPEG. RD-OPT measures DCT coefficient statistics for the given image data to construct rate/distortion-specific quantization tables with nearly optimal tradeoffs.
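The per-frequency statistics RD-OPT collects can be sketched as follows: for one DCT coefficient position, each candidate step q yields a distortion and an entropy-based rate estimate, from which near-optimal tables are assembled (a simplified sketch of the idea, not the published algorithm):

```python
import math
from collections import Counter

# For one DCT frequency, evaluate a candidate quantization step q:
# rate is the empirical entropy of the quantized indices (bits/coefficient),
# distortion is the mean squared reconstruction error.
def rate_distortion_for_step(coeffs, q):
    indices = [round(c / q) for c in coeffs]
    mse = sum((c - i * q) ** 2 for c, i in zip(coeffs, indices)) / len(coeffs)
    n = len(indices)
    rate = -sum((k / n) * math.log2(k / n) for k in Counter(indices).values())
    return rate, mse
```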

4.
Image sharpening in the JPEG domain
We present a new technique for sharpening compressed images in the discrete-cosine-transform domain. For images compressed using the JPEG standard, image sharpening is achieved by suitably scaling each element of the encoding quantization table to enhance the high-frequency characteristics of the image. The modified version of the encoding table is then transmitted in lieu of the original. Experimental results with scanned images show improved text and image quality with no additional computation cost and without affecting compressibility.
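A hedged sketch of the transmitted-table trick: coefficients quantized with the encoding table are dequantized by the decoder with a table whose high-frequency entries are scaled up, so reconstruction amplifies high frequencies. The linear-in-frequency scaling law here is an illustrative assumption, not the paper's exact rule.

```python
import numpy as np

# Produce the table transmitted in lieu of the original: entries grow with
# DCT frequency (u + v), so dequantization boosts high-frequency detail.
def sharpened_decode_table(enc_table, gain=1.3):
    u, v = np.meshgrid(np.arange(8), np.arange(8), indexing="ij")
    radial = (u + v) / 14.0                      # 0 at DC, 1 at highest band
    boost = 1.0 + (gain - 1.0) * radial
    return np.maximum(np.round(enc_table * boost), 1).astype(int)
```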

5.
On spatial quantization of color images
Image quantization and digital halftoning, two fundamental image processing problems, are generally performed sequentially and, in most cases, independent of each other. Color reduction with a pixel-wise defined distortion measure and the halftoning process with its local averaging neighborhood typically optimize different quality criteria or, frequently, follow a heuristic approach without reference to any quantitative quality measure. In this paper, we propose a new model to simultaneously quantize and halftone color images. The method is based on a rigorous cost-function approach which optimizes a quality criterion derived from a simplified model of human perception. It incorporates spatial and contextual information into the quantization and thus overcomes the artificial separation of quantization and halftoning. Optimization is performed by an efficient multiscale procedure which substantially alleviates the computational burden. The quality criterion and the optimization algorithms are evaluated on a representative set of artificial and real-world images showing a significant image quality improvement compared to standard color reduction approaches. Applying the developed cost function, we also suggest a new distortion measure for evaluating the overall quality of color reduction schemes.

6.
Adaptive dither-modulation image watermarking algorithms
The quantization step size is one of the most critical factors governing the performance of quantization-based watermarking. This paper selects the dither-modulation step size adaptively from the JPEG quantization table, yielding a new adaptive quantization watermarking algorithm, and then combines the JPEG quantization table with Watson's perceptual model to obtain a second one. Experimental results show that both algorithms are robust against noise and common image-processing operations. The study further shows that both the JPEG quantization table and the Watson model can mitigate the noise sensitivity of quantization watermarking, and that the algorithm combining the two performs best.
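The dither-modulation embedding underlying both algorithms can be sketched generically (standard quantization index modulation; `delta` stands in for the step size that the paper selects adaptively from the JPEG table):

```python
# Embed one bit per coefficient by quantizing with one of two quantizers of
# step `delta`, offset by opposite dithers; detect by finding which dithered
# lattice the received value lies closer to.
def qim_embed(x, bit, delta):
    d = delta / 4.0 if bit else -delta / 4.0
    return delta * round((x - d) / delta) + d

def qim_detect(y, delta):
    d0, d1 = -delta / 4.0, delta / 4.0
    r0 = abs(y - (delta * round((y - d0) / delta) + d0))
    r1 = abs(y - (delta * round((y - d1) / delta) + d1))
    return 1 if r1 < r0 else 0
```

Perturbations smaller than delta/4 leave the embedded bit recoverable, which is why larger (perceptually tolerable) steps improve robustness.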

7.
为了提高图像的压缩比和压缩质量,结合人眼对比度敏感视觉特性和图像变换域频谱特征,该文提出一种自适应量化表的构建方法。并将该表代替JPEG中的量化表,且按照JPEG的编码算法对3幅不同的彩色图像进行了压缩仿真实验验证,同时与JPEG压缩作对比分析。实验结果表明:与JPEG压缩方法相比,在相同的压缩比下,采用自适应量化压缩后,3幅解压彩色图像的SSIM和PSNR值分别平均提高了1.67%和4.96%。表明该文提出的结合人眼视觉特性的自适应量化是一种较好的、有实用价值的量化方法。  相似文献   

8.
Error diffusion halftoning is a popular method of producing frequency modulated (FM) halftones for printing and display. FM halftoning fixes the dot size (e.g., to one pixel in conventional error diffusion) and varies the dot frequency according to the intensity of the original grayscale image. We generalize error diffusion to produce FM halftones with user-controlled dot size and shape by using block quantization and block filtering. As a key application, we show how block-error diffusion may be applied to embed information in hardcopy using dot shape modulation. We enable the encoding and subsequent decoding of information embedded in the hardcopy version of continuous-tone base images. The encoding-decoding process is modeled by robust data transmission through a noisy print-scan channel that is explicitly modeled. We refer to the encoded printed version as an image barcode due to its high information capacity that differentiates it from common hardcopy watermarks. The encoding/halftoning strategy is based on a modified version of block-error diffusion. Encoder stability, image quality versus information capacity tradeoffs, and decoding issues with and without explicit knowledge of the base image are discussed.
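The conventional one-pixel-dot case that block-error diffusion generalizes can be sketched with the classic Floyd-Steinberg kernel:

```python
import numpy as np

# Scalar error diffusion: threshold each pixel, then push the quantization
# error onto unprocessed neighbors with the Floyd-Steinberg weights
# (7/16 right, 3/16 down-left, 5/16 down, 1/16 down-right).
def error_diffuse(gray):
    img = gray.astype(float).copy()
    h, w = img.shape
    out = np.zeros_like(img)
    for y in range(h):
        for x in range(w):
            old = img[y, x]
            new = 255.0 if old >= 128.0 else 0.0
            out[y, x] = new
            err = old - new
            if x + 1 < w:
                img[y, x + 1] += err * 7 / 16
            if y + 1 < h:
                if x > 0:
                    img[y + 1, x - 1] += err * 3 / 16
                img[y + 1, x] += err * 5 / 16
                if x + 1 < w:
                    img[y + 1, x + 1] += err * 1 / 16
    return out
```

Block-error diffusion replaces the scalar quantizer and scalar error filter here with block (vector) counterparts, which is what makes dot size and shape controllable.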

9.
本文在讨论JPEG标准压缩过程的基础上,提出了自适应优化匹配量化表的JPEG图像压缩方法。该方法根据每个图像子块不同的频率特性确定不同的量化表。本文通过一系列的图像实际压缩实验,证明了该方法的有效性和可行性。  相似文献   

10.
In this paper, we propose a systematic procedure to design a quantization table based on the human visual system model for the baseline JPEG coder. By incorporating the human visual system model with a uniform quantizer, a perceptual quantization table is derived. The quantization table can be easily adapted to the specified resolution for viewing and printing. Experimental results indicate that the derived HVS-based quantization table can achieve better performance in rate-distortion sense than the JPEG default quantization table.
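The derivation can be sketched with an assumed contrast-sensitivity model (the exponential falloff below is a placeholder, not the paper's model): with a uniform quantizer, setting each step inversely to the sensitivity at that frequency keeps the quantization error near the visibility threshold everywhere.

```python
import numpy as np

# Derive an 8x8 quantization table from a hypothetical DCT-domain
# sensitivity surface: less visually sensitive bands get larger steps.
def hvs_table(base_step=8.0, falloff=0.25):
    u, v = np.meshgrid(np.arange(8), np.arange(8), indexing="ij")
    sensitivity = np.exp(-falloff * (u + v))     # decays with frequency
    return np.clip(np.round(base_step / sensitivity), 1, 255).astype(int)
```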

11.
Medical images are widely used in the diagnosis of diseases. These imaging modalities include computerised tomography (CT), magnetic resonance imaging (MRI), ultrasonic (US) imaging, X-radiographs, etc. However, medical images have large storage requirements when high resolution is demanded; therefore, they need to be compressed to reduce the data size so as to achieve a low bit rate for transmission or storage, while maintaining image information. The Joint Photographic Experts Group (JPEG) developed an image compression tool that is one of the most widely used products for image compression. One of the factors influencing the performance of JPEG compression is the quantisation table. The bit rate and the decoded quality are determined simultaneously by the quantisation table, and therefore, the table has a strong influence on the whole compression performance. The author aims to provide a design procedure to seek sets of better quantisation parameters to raise the compression performance to achieve a lower bit rate while preserving high decoded quality. A genetic algorithm (GA) was employed to promote higher compression performance for medical images. The goal was to develop a design procedure to find quantisation tables that contribute to better compression efficiency in terms of bit rate and decoded quality. Simulations were carried out on different kinds of medical images. Resulting experimental data demonstrate that the GA-based search procedures can generate better performance than JPEG 2000 and JPEG even though the training images have different features. Additionally, if existing published quantisation tables are put into the crossover pool in the proposed GA-based system, it can improve the performance by yielding better quantisation tables.
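The GA search loop described above can be sketched in miniature (the fitness function here is a stand-in that scores distance to a hypothetical target table; a real system would score each candidate by measured bit rate and decoded quality):

```python
import random

TARGET = [16] * 64  # hypothetical optimum used only by the toy fitness

def fitness(table):
    return -sum((t - g) ** 2 for t, g in zip(table, TARGET))

# Elitist GA over 64-entry quantization tables: keep the best half,
# refill with one-point crossover plus occasional point mutation.
def evolve(pop_size=20, generations=60, seed=0):
    rng = random.Random(seed)
    pop = [[rng.randint(1, 64) for _ in range(64)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]
        children = []
        while len(children) < pop_size - len(parents):
            a, b = rng.sample(parents, 2)
            cut = rng.randrange(64)
            child = a[:cut] + b[cut:]
            if rng.random() < 0.3:
                child[rng.randrange(64)] = rng.randint(1, 64)
            children.append(child)
        pop = parents + children
    return max(pop, key=fitness)
```

Seeding the crossover pool with published tables, as the abstract suggests, amounts to replacing part of the random initial population.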

12.
童博, 刘晓东, 蔡兵, 陈彦丽. 《中国激光》2007, 34(s1): 342-345
A JPEG-based laser image scanning technique is proposed. The high compression ratio and ubiquity of the JPEG format overcome the heavy storage cost of the BMP format used in earlier laser image scanning and broaden the technique's range of applications. The JPEG file is decoded into a temporary file of sequentially stored pixel data that serves as the image data to be output. Before output, the image data are digitally halftoned; a multilevel error-diffusion algorithm preserves more of the original image information and makes the output image more faithful. Implementing this JPEG-based laser image scanning on a DSP system speeds up decoding and halftoning, reduces cost, and increases practicality.

13.
Inverse error-diffusion using classified vector quantization
This correspondence extends and modifies classified vector quantization (CVQ) to solve the problem of inverse halftoning. The proposed process consists of two phases: the encoding phase and decoding phase. The encoding procedure needs a codebook for the encoder which transforms a halftoned image to a set of codeword-indices. The decoding process also requires a different codebook for the decoder which reconstructs a gray-scale image from a set of codeword-indices. Using CVQ, the reconstructed gray-scale image is stored in compressed form and no further compression may be required. This is different from existing algorithms, which reconstruct the gray-scale image in uncompressed form. The bit rate of encoding a reconstructed image is about 0.51 b/pixel.

14.
With its good joint space-frequency localization, the wavelet transform plays an important role in the image coding standards JPEG2000 and MPEG-4. This paper applies an orthogonal wavelet basis to transform the image, reorganizes the wavelet coefficients into wavelet blocks, and proposes an algorithm for constructing a wavelet-block quantization matrix that yields an optimal bit allocation. The algorithm gathers wavelet-coefficient statistics in a new way and, drawing on characteristics of the human visual system, uses a dynamic strategy to produce optimal wavelet-block quantization matrices over a wide range of bit rates.

15.
Image compression is indispensable in medical applications where inherently large volumes of digitized images are presented. JPEG 2000 has recently been proposed as a new image compression standard. The present recommendations on the choice of JPEG 2000 encoder options were based on nontask-based metrics of image quality applied to nonmedical images. We used the performance of a model observer [non-prewhitening matched filter with an eye filter (NPWE)] in a visual detection task of varying signals [signal known exactly but variable (SKEV)] in X-ray coronary angiograms to optimize JPEG 2000 encoder options through a genetic algorithm procedure. We also obtained the performance of other model observers (Hotelling, Laguerre-Gauss Hotelling, channelized-Hotelling) and human observers to evaluate the validity of the NPWE optimized JPEG 2000 encoder settings. Compared to the default JPEG 2000 encoder settings, the NPWE-optimized encoder settings improved the detection performance of humans and the other three model observers for an SKEV task. In addition, the performance also was improved for a more clinically realistic task where the signal varied from image to image but was not known a priori to observers [signal known statistically (SKS)]. The highest performance improvement for humans was at a high compression ratio (e.g., 30:1) which resulted in approximately a 75% improvement for both the SKEV and SKS tasks.

16.
Image compression systems that exploit the properties of the human visual system have been studied extensively over the past few decades. For the JPEG2000 image compression standard, all previous methods that aim to optimize perceptual quality have considered the irreversible pipeline of the standard. In this work, we propose an approach for the reversible pipeline of the JPEG2000 standard. We introduce a new methodology to measure visibility of quantization errors when reversible color and wavelet transforms are employed. Incorporation of the visibility thresholds using this methodology into a JPEG2000 encoder enables creation of scalable codestreams that can provide both near-threshold and numerically lossless representations, which is desirable in applications where restoration of original image samples is required. Most importantly, this is the first work that quantifies the bitrate penalty incurred by the reversible transforms in near-threshold image compression compared to the irreversible transforms.

17.
Striving to maximize baseline (Joint Photographic Experts Group, JPEG) image quality without compromising compatibility of current JPEG decoders, we develop an image-adaptive JPEG encoding algorithm that jointly optimizes quantizer selection, coefficient "thresholding", and Huffman coding within a rate-distortion (R-D) framework. Practically speaking, our algorithm unifies two previous approaches to image-adaptive JPEG encoding: R-D optimized quantizer selection and R-D optimal thresholding. Conceptually speaking, our algorithm is a logical consequence of entropy-constrained vector quantization (ECVQ) design principles in the severely constrained instance of JPEG-compatible encoding. We explore both viewpoints: the practical, to concretely derive our algorithm, and the conceptual, to justify the claim that our algorithm approaches the best performance that a JPEG encoder can achieve. This performance includes significant objective peak signal-to-noise ratio (PSNR) improvement over previous work and at high rates gives results comparable to state-of-the-art image coders. For example, coding the Lena image at 1.0 b/pixel, our JPEG encoder achieves a PSNR performance of 39.6 dB that slightly exceeds the quoted PSNR results of Shapiro's wavelet-based zero-tree coder. Using a visually based distortion metric, we can achieve noticeable subjective improvement as well. Furthermore, our algorithm may be applied to other systems that use run-length encoding, including intraframe MPEG and subband or wavelet coding.
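The coefficient-thresholding decision in the R-D framework reduces to a per-coefficient Lagrangian comparison (a sketch; `bit_cost` is a stand-in for the actual Huffman code lengths):

```python
# Zero out a quantized coefficient whenever the bits saved, weighted by the
# Lagrange multiplier `lam`, outweigh the squared error added by dropping it.
def threshold_coefficients(coeffs, q, bit_cost, lam):
    kept = []
    for c in coeffs:
        idx = round(c / q)
        d_keep = (c - idx * q) ** 2        # error if coefficient is kept
        d_drop = c ** 2                    # error if coefficient is zeroed
        if d_keep + lam * bit_cost(idx) <= d_drop + lam * bit_cost(0):
            kept.append(idx)
        else:
            kept.append(0)
    return kept
```

At lam = 0 nothing is dropped; as lam grows, coefficients whose rate cost exceeds their distortion benefit are zeroed.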

18.
A real-time, low-power video encoder design for pyramid vector quantization (PVQ) is presented. The quantizer is estimated to dissipate only 2.1 mW for real-time video compression of images of 256 × 256 pixels at 30 frames per second in standard 0.8-micron CMOS technology with a 1.5 V supply. Applying this quantizer to subband decomposed images, the quantizer performs better than JPEG on average. We achieve this high level of power efficiency with image quality exceeding that of variable rate codes through algorithmic and architectural reformulation. The PVQ encoder is well-suited for wireless, portable communication applications.

19.
Sometimes image processing units inherit images in raster bitmap format only, so that processing must be carried out without knowledge of past operations that may compromise image quality (e.g., compression). To carry out further processing, it is useful not only to know whether the image has been previously JPEG compressed, but to learn what quantization table was used. This is the case, for example, if one wants to remove JPEG artifacts or re-compress to JPEG. In this paper, a fast and efficient method is provided to determine whether an image has been previously JPEG compressed. After detecting a compression signature, we estimate compression parameters. Specifically, we developed a method for the maximum likelihood estimation of JPEG quantization steps. The quantizer estimation method is very robust, so only sporadically is an estimated quantizer step size off, and when it is, it is off by only one value.
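The estimation idea can be sketched in a simplified, noise-free form: DCT coefficients of a previously compressed image sit at integer multiples of the quantization step, so a spacing estimate such as a gcd recovers it (the paper's maximum-likelihood formulation additionally handles the rounding noise a real image introduces):

```python
import math

# Estimate a quantization step from coefficient values that are (ideally)
# integer multiples of it; returns None if all coefficients are zero.
def estimate_step(dct_values):
    nonzero = [abs(round(v)) for v in dct_values if round(v) != 0]
    if not nonzero:
        return None
    q = 0
    for v in nonzero:
        q = math.gcd(q, v)
    return q
```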

20.
黄方军, 万晨. 《信号处理》2021, 37(12): 2251-2260
JPEG (Joint Photographic Experts Group) is the most widely used image format on the Internet. Documented cases show that many tampering operations target JPEG images: the JPEG file is first decompressed, modified in the spatial domain, and then saved again as JPEG, so the tampered image may undergo two or even more JPEG compressions. Detecting JPEG recompression can therefore serve as important evidence that an image has been tampered with, making the analysis and forensics of JPEG images highly significant. This paper reviews the recent literature on JPEG recompression detection from two angles, recompression with an unchanged quantization table and recompression with inconsistent quantization tables, and introduces representative methods in each category. Finally, open problems in JPEG recompression detection are analyzed and future research directions are discussed.
