Similar Documents
1.
In a standard transform coding scheme for images or video, the decoder can be implemented by a table-lookup technique without the explicit use of an inverse transformation. In this new decoding method, each received code index of a transform coefficient addresses a particular codebook to fetch a component code vector that resembles the basis vector of the linear transformation. The output image is then reconstructed by summing a small number of nonzero component code vectors. With a set of well-designed codebooks, this new decoder can exploit the correlation among the quantized transform coefficients to achieve better rate-distortion performance than the conventional decoding method. An iterative algorithm for designing a set of locally optimal codebooks from a training set of images is presented. We demonstrate that this new idea can be applied to decode improved-quality pictures from the bitstream generated by a standard encoding scheme for still images or video, while the complexity is low enough to justify practical implementation.
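As a rough illustration of the table-lookup idea, the numpy sketch below reconstructs a block by summing the component code vectors addressed by the nonzero indices; the random codebooks here are toy stand-ins for the trained, locally optimal ones described above.

```python
import numpy as np

def tablelookup_decode(indices, codebooks):
    """Reconstruct one image block by summing component code vectors.

    indices   : 1-D array of received quantizer indices, one per coefficient.
    codebooks : list of 2-D arrays; codebooks[k][q] is the component
                vector (flattened block) fetched for index q of coefficient k.
    """
    block = np.zeros_like(codebooks[0][0], dtype=np.float64)
    for k, q in enumerate(indices):
        if q != 0:                      # only nonzero indices contribute
            block += codebooks[k][q]    # table fetch instead of inverse transform
    return block

# Toy usage with random codebooks (4 entries per coefficient, 8x8 blocks).
rng = np.random.default_rng(0)
codebooks = [rng.standard_normal((4, 64)) for _ in range(64)]
indices = np.zeros(64, dtype=int)
indices[:3] = [1, 2, 1]
print(tablelookup_decode(indices, codebooks).shape)   # (64,)
```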

2.
We propose two new image compression-decompression methods that reproduce images with better visual fidelity, less blocking artifacts, and better PSNR, particularly at low bit rates, than those processed by the JPEG Baseline method at the same bit rates. The additional computational cost is small, i.e., linearly proportional to the number of pixels in an input image. The first method, the "full mode" polyharmonic local cosine transform (PHLCT), modifies the encoder and decoder parts of the JPEG Baseline method. The goal of the full mode PHLCT is to reduce the code size in the encoding part and reduce the blocking artifacts in the decoder part. The second one, the "partial mode" PHLCT (or PPHLCT for short), modifies only the decoder part, and consequently accepts JPEG files, yet decompresses them with higher quality and less blocking artifacts. The key idea behind these algorithms is a decomposition of each image block into a polyharmonic component and a residual. The polyharmonic component in this paper is an approximate solution to Poisson's equation with the Neumann boundary condition, which means that it is a smooth predictor of the original image block using only the image gradient information across the block boundary. Thus, the residual, obtained by removing the polyharmonic component from the original image block, has approximately zero gradient across the block boundary, which gives rise to fast-decaying DCT coefficients and, in turn, more efficient compression-decompression algorithms for the same bit rates. We show that the polyharmonic component of each block can be estimated solely from the first column and row of the DCT coefficient matrix of that block and those of its adjacent blocks, and can predict the original image data better than some previously proposed AC prediction methods. Our numerical experiments objectively and subjectively demonstrate the superiority of PHLCT over the JPEG Baseline method and the improvement of JPEG-compressed images when decompressed by PPHLCT.
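The sketch below illustrates only the decomposition idea, block = smooth component + residual with fast-decaying residual DCT coefficients; for simplicity it substitutes a least-squares plane fit for the paper's Poisson/Neumann polyharmonic solver, so it is a toy analogue rather than PHLCT itself. It uses scipy's dctn for the 2-D DCT.

```python
import numpy as np
from scipy.fft import dctn

def plane_predictor(block):
    """Least-squares plane fit: a crude stand-in for the polyharmonic
    (Poisson/Neumann) smooth component used by PHLCT."""
    n = block.shape[0]
    y, x = np.mgrid[0:n, 0:n]
    A = np.column_stack([x.ravel(), y.ravel(), np.ones(n * n)])
    coef, *_ = np.linalg.lstsq(A, block.ravel(), rcond=None)
    return (A @ coef).reshape(n, n)

n = 8
y, x = np.mgrid[0:n, 0:n]
block = 100 + 5 * x + 3 * y + np.sin(x)          # smooth block with a ramp
residual = block - plane_predictor(block)        # ramp and offset removed

# The residual's DCT energy concentrates in fewer large coefficients.
print(np.sum(np.abs(dctn(block, norm='ortho')) > 1))
print(np.sum(np.abs(dctn(residual, norm='ortho')) > 1))
```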

3.
This paper discusses an encoding method for the JPEG2000 still image compression standard that exploits an image significance measure. The technical core of the JPEG2000 encoder is the discrete wavelet transform together with EBCOT, which consists of two coding engines, T1 and T2. T2 is mainly responsible for organizing the codestream, and it may do so flexibly: users can arrange the codestream to meet special requirements, as long as its format conforms to JPEG2000. In this paper, the significance of each wavelet-domain coefficient is computed from the coefficient itself and the corresponding spatial-domain pixel values, and the codestream is then organized by significance while remaining fully JPEG2000-compliant. The codestream produced by the proposed algorithm can be decoded by a standard JPEG2000 decoder.

4.
It has been well established that state-of-the-art wavelet image coders outperform block transform image coders in the rate-distortion (R-D) sense by a wide margin. Wavelet-based JPEG2000 is emerging as the new high-performance international standard for still image compression. An often-asked question is: how much of the coding improvement is due to the transform and how much is due to the encoding strategy? Current block transform coders such as JPEG suffer from poor context modeling and fail to take full advantage of correlation in both the spatial and frequency domains. This paper presents a simple, fast, and efficient adaptive block transform image coding algorithm based on a combination of prefiltering, postfiltering, and high-order space-frequency context modeling of block transform coefficients. Despite the simplicity constraints, coding results show that the proposed coder achieves competitive R-D performance compared to the best wavelet coders in the literature.

5.
Arguably, the most important and defining feature of the JPEG2000 image compression standard is its R-D optimized code stream of multiple progressive layers. This code stream is an interleaving of many scalable code streams of different sample blocks. In this paper, we reexamine the R-D optimality of JPEG2000 scalable code streams under an expected multirate distortion measure (EMRD), which is defined to be the average distortion weighted by a probability distribution of operational rates in a given range, rather than for one or a few fixed rates. We prove that the JPEG2000 code stream constructed by embedded block coding with optimal truncation is almost optimal in the EMRD sense for a uniform rate distribution function, even if the individual scalable code streams have nonconvex operational R-D curves. We also develop algorithms to optimize the JPEG2000 code stream for exponential and Laplacian rate distribution functions while maintaining compatibility with the JPEG2000 standard. Both our analytical and experimental results lend strong support to JPEG2000 as a near-optimal scalable image codec in a fairly general setting.
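For a uniform rate distribution, the EMRD reduces to the plain average of the operational distortion curve over the rate range. A small numpy sketch with hypothetical R-D sample points:

```python
import numpy as np

def emrd_uniform(rates, distortions, r_lo, r_hi):
    """Expected multirate distortion for a uniform rate distribution:
    the average of the operational D(R) curve over [r_lo, r_hi]."""
    r = np.linspace(r_lo, r_hi, 512)
    d = np.interp(r, rates, distortions)   # piecewise-linear R-D curve
    return d.mean()                        # = integral of D(R) / (r_hi - r_lo)

# Hypothetical operational R-D points (rate in bpp, distortion as MSE).
rates = np.array([0.125, 0.25, 0.5, 1.0, 2.0])
mse   = np.array([180.0, 95.0, 48.0, 21.0, 8.0])
print(emrd_uniform(rates, mse, 0.25, 1.0))
```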

6.
JPEG 2000 is the new ISO standard for image and video coding. Besides its improved coding efficiency, it also provides a few error-resilience tools in order to limit the effect of errors in the codestream, which can occur when the compressed image or video data are transmitted over an error-prone channel, as typically happens in wireless communication scenarios. However, for very harsh channels, these tools often do not provide an adequate degree of error protection. In this paper, we propose a novel error-resilience tool for JPEG 2000, based on the concept of ternary arithmetic coders employing a forbidden symbol. Such coders introduce a controlled degree of redundancy during the encoding process, which can be exploited at the decoder side in order to detect and correct errors. We propose a maximum-likelihood and a maximum a posteriori context-based decoder, specifically tailored to the JPEG 2000 arithmetic coder, which are able to carry out both hard and soft decoding of a corrupted codestream. The proposed decoders extend the JPEG 2000 capabilities in error-prone scenarios without violating the standard syntax. Extensive simulations on video sequences show that the proposed decoders largely outperform the standard in terms of PSNR and visual quality.
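A toy floating-point sketch of the forbidden-symbol principle (binary rather than the ternary MQ-style coder of the paper; the symbol probability and forbidden fraction are assumed values): the encoder leaves a slice of every coding interval unused, so a decoder value that lands in that slice signals corruption.

```python
EPS = 0.05   # forbidden-region fraction per step (coding redundancy)
P0  = 0.6    # assumed probability of symbol 0

def fs_encode(bits):
    """Toy arithmetic encoder with a forbidden symbol: a fraction EPS of
    every interval is never assigned to any real symbol."""
    low, high = 0.0, 1.0
    for b in bits:
        usable = (high - low) * (1.0 - EPS)
        split = low + usable * P0
        if b == 0:
            high = split
        else:
            low, high = split, low + usable
    return (low + high) / 2.0            # any value in the final interval works

def fs_decode(value, n):
    """Mirror decoder; returns (bits, error_detected)."""
    low, high = 0.0, 1.0
    bits = []
    for _ in range(n):
        usable = (high - low) * (1.0 - EPS)
        if value >= low + usable:        # landed in the forbidden slice
            return bits, True
        split = low + usable * P0
        if value < split:
            bits.append(0); high = split
        else:
            bits.append(1); low, high = split, low + usable
    return bits, False

msg = [0, 1, 1, 0, 0, 0, 1, 0, 1, 1, 0, 0]
code = fs_encode(msg)
print(fs_decode(code, len(msg)))         # clean channel: message, False
print(fs_decode(code + 3e-4, len(msg)))  # corrupted: usually flagged as error
```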

7.
The rapid growth of image resources on the Internet makes it possible to find highly correlated images on some Web sites when people plan to transmit an image over the Internet. This study proposes a low bit-rate cloud-based image coding scheme, which utilizes cloud resources to implement image coding. A multi-level discrete wavelet transform was adopted to decompose the input image into a low-frequency subband and several high-frequency subbands. The low-frequency subband image was used to retrieve highly correlated images (HCOIs) in the cloud. The highly correlated regions in the HCOIs were used to reconstruct the high-frequency subbands at the decoder to save bits. The final reconstructed image was generated using a multi-level inverse wavelet transform from the decompressed low-frequency subband and the reconstructed high-frequency subbands. The experimental results showed that the coding scheme performed well, especially at low bit rates. The peak signal-to-noise ratio of the reconstructed image can gain up to 7 dB over JPEG and 1.69 dB over JPEG2000 under the same compression ratio. By utilizing cloud resources, our coding scheme showed an obvious advantage in terms of visual quality: the details in the image can be well reconstructed compared with JPEG, JPEG2000, and the intra coding of HEVC.

8.
A general partial bitplane shift method (GPBShift) for region-of-interest coding
梁燕 刘文耀 郑伟 《光电子.激光》2004,15(11):1334-1338,1342
A general partial bitplane shift (GPBShift) method is proposed to overcome the limitations of the two standard region-of-interest (ROI) coding methods defined in JPEG2000. Unlike the standard methods, which shift all bitplanes by a single uniform value, this method divides the bitplanes of the ROI coefficients and of the background (BG) coefficients into two parts each and applies different shifts to them, thereby controlling the relative importance of the ROI and the BG. GPBShift is compatible with the Maxshift, GBbBShift, and PSBShift methods and offers greater flexibility than all three. It can encode ROIs of arbitrary shape without transmitting any shape information, and, by choosing the shift values, it can flexibly adjust the relative compression quality of the ROI and the BG. Moreover, it can encode multiple ROIs with different priorities. Experimental results show that at low bit rates the method provides better visual quality than Maxshift, and higher coding efficiency than the general scaling-based method of the standard.
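The short Python sketch below reflects one reading of the abstract's bitplane-splitting idea, not the authors' exact GPBShift scheme: the ROI and BG bitplanes are each split into an upper and a lower part with individual upward shifts, and bits are then ordered by shifted significance. Maxshift falls out as a special case.

```python
def gpbshift_priorities(nplanes, roi_split, bg_split, r1, r2, b1, b2):
    """Toy GPBShift-style bitplane ordering (an interpretation of the
    abstract, not the published algorithm).

    Planes at or above the split get shift r1/b1, planes below get r2/b2;
    bits are sent in decreasing order of shifted significance."""
    pri = {}
    for p in range(nplanes):
        pri[('ROI', p)] = p + (r1 if p >= roi_split else r2)
        pri[('BG',  p)] = p + (b1 if p >= bg_split  else b2)
    return sorted(pri, key=pri.get, reverse=True)

# Maxshift as a special case: every ROI plane is lifted above all BG planes.
print(gpbshift_priorities(4, roi_split=0, bg_split=0, r1=4, r2=4, b1=0, b2=0))
# A milder setting interleaves lower ROI planes with upper BG planes.
print(gpbshift_priorities(4, roi_split=2, bg_split=2, r1=4, r2=1, b1=0, b2=0))
```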

9.
谢慧  王娇  许磊 《电子科技》2010,23(8):15-17
As the new-generation still image compression standard, JPEG2000 offers high compression ratios and multi-resolution support. Its coding pipeline adopts embedded block coding with optimized truncation (EBCOT) together with MQ arithmetic coding. This paper analyzes the standard's shortcomings when encoding and decoding images with simple content and little information, and proposes an improved algorithm for the MQ arithmetic coder's workflow to address them. The algorithm raises the PSNR of JPEG2000 compression for simple images and yields sharper decoded images.

10.
A new and improved image coding standard, called JPEG2000, has been developed. JPEG2000 is the state-of-the-art image coding standard that results from the joint efforts of the International Organization for Standardization (ISO) and the International Telecommunication Union (ITU). In this article, we describe the most important parameters of this new standard and present several "tips and tricks" to help resolve the design tradeoffs that JPEG2000 application developers are likely to encounter in practice. The new standard outperforms the older JPEG standard by approximately 2 dB of peak signal-to-noise ratio (PSNR) for several images across all compression ratios. Beyond this, JPEG2000's advantages over the previous standard lie largely in its security features, its interactive protocols and application program interfaces for network access, its support for wireless transmission, the wavelet transform, and embedded block coding with optimal truncation (EBCOT).

11.
Transmission of image/video messages over communication networks is becoming a standard way of communication thanks to very efficient compression algorithms that reduce the required channel capacity to an acceptable level. However, all standard compression techniques are strongly sensitive to channel disturbances, and their application is suitable only for practically noiseless channels. In standard noisy channels, the effect of errors on a compressed data bit stream can be divided into two categories: systematic errors defined by the structure of data blocks, and random errors caused by amplitude changes of transmitted components. A systematic error can be detected at the receiver through control of the data stream structure and corrected by error concealment methods or by automatic repeat request (ARQ) procedures. Random errors, noise and burst-like errors, as well as impulse noise, should be controlled through channel coding. It is reasonable that integrated source and channel coding methods should be preferred and should give better coding performance. In this paper, a new framework for an image/video coding approach is presented in which source and channel coding are integrated in a single procedure. Image compression is performed in the standard way of the JPEG algorithm based on the discrete cosine transform (DCT), and error control coding is based on a real/complex-number (N,M) BCH code using the discrete Fourier transform (DFT), specified with zeros in the time domain, i.e., with roots in the frequency domain. The efficiency of the proposed method is tested on two examples: a one-dimensional real-valued time sequence coded by a real-number (20,16) BCH code using the DFT, and an image coded by a complex (10,8) BCH code using the DFT, with a correction ability of up to 8 impulses per transmitted 8×8 block. In addition, two decoding methods, based on the Berlekamp-Massey algorithm (BMA) and the minimum-norm algorithm (MNA), have also been compared. Copyright © 1999 John Wiley & Sons, Ltd.
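The DFT-domain encoding can be sketched compactly: the codeword's spectrum is forced to zero in the parity bins, so nonzero syndromes at the receiver reveal impulse errors. This detection-only numpy toy omits the Berlekamp-Massey / minimum-norm correction step described above.

```python
import numpy as np

def dft_encode(data, n):
    """Real/complex-number 'BCH-like' encoding via the DFT: place the data
    in the low-frequency bins, force the remaining (parity) bins to zero,
    and transmit the inverse DFT."""
    spec = np.zeros(n, dtype=complex)
    spec[:len(data)] = data
    return np.fft.ifft(spec)

def syndromes(codeword, m):
    """Spectrum values in the parity bins; all zero on a clean channel."""
    return np.fft.fft(codeword)[m:]

data = np.array([3.0, -1.0, 2.0, 0.5, 1.5, -2.0, 0.0, 4.0])
cw = dft_encode(data, 10)                    # a (10, 8) code as in the paper
print(np.allclose(syndromes(cw, 8), 0))      # True: no error

cw_err = cw.copy()
cw_err[4] += 1.0                             # impulse error in the time domain
print(np.allclose(syndromes(cw_err, 8), 0))  # False: error detected
```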

12.
Modified JPEG Huffman coding
It is a well-observed characteristic that when a DCT block is traversed in the zigzag order, the AC coefficients generally decrease in size and the run-lengths of zero coefficients increase in number. This article presents a minor modification to the Huffman coding of the JPEG baseline compression algorithm to exploit this redundancy. For this purpose, DCT blocks are divided into bands so that each band can be coded using a separate code table. Three implementations are presented, all of which move the end-of-block marker up into the middle of the DCT block and use it to indicate the band boundaries. Experimental results are presented to compare the reduction in code size obtained by our methods with the JPEG sequential-mode Huffman coding and arithmetic coding methods. The average code reduction relative to the total image code size for one of our methods is 4%. Our methods can also be used for progressive image transmission; hence, experimental results are also given to compare them with two-, three-, and four-band implementations of the JPEG spectral selection method.
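A small numpy sketch of the band partition (the band boundaries here are hypothetical; the paper's three implementations choose their own): the block is zigzag-scanned and the AC coefficients are split into bands, each of which would be coded with its own table and terminated by its own early end-of-block when its tail is all zero.

```python
import numpy as np

def zigzag_order(n=8):
    """(row, col) indices of an n x n block in JPEG zigzag order."""
    return sorted(((i, j) for i in range(n) for j in range(n)),
                  key=lambda p: (p[0] + p[1],
                                 p[0] if (p[0] + p[1]) % 2 else -p[0]))

def split_into_bands(block, bounds=(1, 6, 21, 64)):
    """Partition the zigzag-scanned AC coefficients into bands; each band
    would get its own Huffman table in the scheme described above."""
    zz = [block[i, j] for i, j in zigzag_order(block.shape[0])]
    bands = [zz[a:b] for a, b in zip(bounds[:-1], bounds[1:])]
    return zz[0], bands                    # DC coefficient, AC bands

blk = np.zeros((8, 8), dtype=int)
blk[0, 0], blk[0, 1], blk[1, 0], blk[2, 2] = 52, -7, 3, 1
dc, bands = split_into_bands(blk)
print(dc, [np.count_nonzero(b) for b in bands])   # 52 [2, 1, 0]
```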

13.
An image compression framework based on adaptive downsampling and super-resolution reconstruction is designed for the Joint Photographic Experts Group (JPEG) standard. At the encoder, several different downsampling modes and quantization modes are designed for the original image to be coded, and a rate-distortion optimization algorithm selects the optimal downsampling mode (DSM) and quantization mode (QM) from among them; the image is then downsampled and JPEG-encoded under the selected modes. At the decoder, a super-resolution reconstruction algorithm based on a convolutional neural network reconstructs the decoded downsampled image. Moreover, the proposed framework remains effective and feasible when extended to the JPEG2000 compression standard. Simulation results show that, compared with mainstream coding standards and state-of-the-art coding methods, the proposed framework effectively improves the rate-distortion performance of coded images and achieves better visual quality.
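A toy version of the encoder-side mode decision, with average-pooling/nearest-neighbor resamplers and a pixel-count rate proxy standing in for the real JPEG codec and CNN reconstruction; the lambda value is arbitrary.

```python
import numpy as np

def downsample(img, f):
    """Average pooling by factor f (a stand-in for the paper's DSMs)."""
    h, w = img.shape
    return img[:h - h % f, :w - w % f].reshape(h // f, f, w // f, f).mean(axis=(1, 3))

def upsample(img, f):
    return np.repeat(np.repeat(img, f, axis=0), f, axis=1)   # nearest neighbor

def choose_mode(img, factors=(1, 2, 4), lam=0.02):
    """Pick the downsampling mode minimizing J = D + lambda * R, with MSE
    as distortion and the coded pixel count as a crude rate proxy."""
    best = None
    for f in factors:
        rec = upsample(downsample(img, f), f)[:img.shape[0], :img.shape[1]]
        D = np.mean((img - rec) ** 2)
        R = (img.shape[0] // f) * (img.shape[1] // f)
        J = D + lam * R
        if best is None or J < best[1]:
            best = (f, J)
    return best

rng = np.random.default_rng(2)
img = rng.standard_normal((64, 64)).cumsum(axis=0).cumsum(axis=1)  # smooth-ish
print(choose_mode(img))   # (selected factor, its cost J)
```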

14.
A new real-time truncation rate-control algorithm for JPEG2000 and its VLSI architecture design
A rate-control algorithm with real-time encoding and real-time truncation is proposed. It predicts the number of significant bitplanes of the code blocks in not-yet-decomposed wavelet subbands from those in already-decomposed subbands, and allocates rate to the current code block according to its number of coding passes and the wavelet/quantization weighting coefficients. A two-stage rate-control encoding architecture with real-time truncation for JPEG2000 is also proposed. The first stage truncates the bitstream and coding passes in real time using the proposed algorithm. The second stage, at low bit rates, uses the PCRD optimization algorithm of the JPEG2000 standard to search for the exact layer truncation points. Because most of the bitstream and coding passes are pre-truncated before the optimal layered truncation, memory consumption is low and real-time performance is high. At low bit rates, the image quality is consistent with the JPEG2000 standard.

15.
Wavelet transform coefficients are defined by both a magnitude and a sign. While efficient algorithms exist for coding the transform coefficient magnitudes, current wavelet image coding algorithms are not as efficient at coding the sign of the transform coefficients. It is generally assumed that there is no compression gain to be obtained from entropy coding of the sign. Only recently have some authors begun to investigate this component of wavelet image coding. In this paper, sign coding is examined in detail in the context of an embedded wavelet image coder. In addition to using intraband wavelet coefficients in a sign coding context model, a projection technique is described that allows nonintraband wavelet coefficients to be incorporated into the context model. At the decoder, accumulated sign prediction statistics are also used to derive improved reconstruction estimates for zero-quantized coefficients. These techniques are shown to yield PSNR improvements averaging 0.3 dB, and are applicable to any genre of embedded wavelet image codec.
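A minimal intraband version of a sign-coding context model (the paper additionally projects nonintraband coefficients into the context, which this numpy sketch omits): each sign is classified by the already-decoded signs of its left and upper neighbors, and the resulting skewed per-context counts are what an entropy coder would exploit.

```python
import numpy as np

def sign_context(signs, i, j):
    """Context for the sign at (i, j) from causal neighbors: the signs to
    the left and above (0 if unavailable or zero-quantized)."""
    left = signs[i, j - 1] if j > 0 else 0
    up   = signs[i - 1, j] if i > 0 else 0
    return 3 * (left + 1) + (up + 1)       # one of 9 contexts

def sign_histograms(coeffs):
    """Tally sign outcomes per context over one subband."""
    signs = np.sign(coeffs).astype(int)
    counts = np.zeros((9, 2), dtype=int)   # context x {negative, positive}
    for i in range(coeffs.shape[0]):
        for j in range(coeffs.shape[1]):
            if signs[i, j] != 0:
                counts[sign_context(signs, i, j), (signs[i, j] + 1) // 2] += 1
    return counts

rng = np.random.default_rng(3)
band = np.round(rng.standard_normal((32, 32)) * 2)
print(sign_histograms(band))
```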

16.
Currently, wavelet-based coding algorithms are popular for synthetic aperture radar (SAR) image compression, which is very important for reducing the cost of data storage and transmission over relatively slow channels. However, the standard wavelet transform is limited by the spatial isotropy of its basis functions, which is not completely adapted to represent image entities like edges or textures, so wavelet-based coding algorithms are suboptimal for image compression. In this paper, a novel tree-structured edge-directed orthogonal wavelet packet transform is proposed for SAR image compression. Inspired by the intrinsic geometric structure of images, the new transform improves on the standard wavelet transform by filtering first along the regular direction and then along the orthogonal direction with a directional lifting structure. The cost function for best-basis selection is designed from textural and directional information for the tree-structured edge-directed orthogonal wavelet packet transform. The new transform, including speckle reduction, can be used to construct a SAR image coder with embedded block coding with optimal truncation for the transform coefficients, and arithmetic coding for additional information. The experimental results show that the proposed approach outperforms JPEG2000 and the fast wavelet packet (FWP), both visually and in terms of PSNR values.

17.
We present an implementable three-dimensional terrain-adaptive transform-based bandwidth compression technique for multispectral imagery. The algorithm exploits the inherent spectral and spatial correlations in the data. The compression technique is based on the Karhunen-Loeve transformation for spectral decorrelation, followed by the standard JPEG algorithm for coding the resulting spectrally decorrelated eigenimages. The algorithm is conveniently parameterized to accommodate reconstructed image fidelities ranging from near-lossless at about 5:1 CR to visually lossy beginning at about 30:1 CR. The novelty of this technique lies in its unique capability to adaptively vary the characteristics of the spectral correlation transformation as a function of the variation of the local terrain. The spectral and spatial modularity of the algorithm architecture allows the JPEG stage to be replaced by an alternate spatial coding procedure. The significant practical advantage of this proposed approach is that it is based on the standard and highly developed JPEG compression technology.
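The spectral stage can be sketched directly in numpy: reshape the cube to pixels-by-bands, diagonalize the band covariance, and project onto the eigenvectors to obtain decorrelated eigenimages. The terrain adaptivity described above, i.e., recomputing this transform per terrain segment, is omitted here.

```python
import numpy as np

def klt_spectral(cube):
    """Karhunen-Loeve transform across the spectral axis of an (H, W, B)
    multispectral cube: returns eigenimages sorted by decreasing variance
    (each would then be JPEG-coded spatially)."""
    h, w, b = cube.shape
    X = cube.reshape(-1, b).astype(np.float64)
    mean = X.mean(axis=0)
    cov = np.cov(X - mean, rowvar=False)
    vals, vecs = np.linalg.eigh(cov)
    order = np.argsort(vals)[::-1]                 # strongest component first
    eig_images = ((X - mean) @ vecs[:, order]).reshape(h, w, b)
    return eig_images, vecs[:, order], mean

rng = np.random.default_rng(4)
base = rng.standard_normal((32, 32))
cube = np.stack([base * g + 0.1 * rng.standard_normal((32, 32))
                 for g in (1.0, 0.8, 0.6, 0.4)], axis=-1)   # correlated bands
eig, basis, mu = klt_spectral(cube)
print(np.var(eig.reshape(-1, 4), axis=0).round(3))  # energy compacts into band 0
```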

18.
Integer mapping is critical for lossless source coding and has been used for multicomponent image compression in the new international image compression standard JPEG 2000. In this paper, starting from block factorizations for any nonsingular transform matrix, we introduce two types of parallel elementary reversible matrix (PERM) factorizations which are helpful for the parallelization of perfectly reversible integer transforms. With an improved degree of parallelism and parallel performance, the cost of multiplications and additions can be reduced to O(log N) and O(log² N), respectively, for an N×N transform matrix. This makes PERM factorizations an effective means of developing parallel integer transforms for large matrices. We also present a scheme to block the matrix and allocate the processor load for efficient transformation.
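The building block behind such factorizations is the elementary reversible matrix: a unit-triangular factor applied with rounding is exactly invertible in integer arithmetic. The numpy sketch below shows a sequential two-factor (lower/upper) version of that building block, not the parallel PERM scheme itself.

```python
import numpy as np

def erm_forward(x, A_lower, A_upper):
    """Integer-to-integer transform from unit-triangular factors: each
    lifting update adds a rounded combination of other entries, so the
    mapping is exactly invertible despite the rounding."""
    y = x.astype(np.int64).copy()
    for i in range(len(y)):                      # lower factor, top-down
        y[i] += int(np.floor(A_lower[i, :i] @ y[:i] + 0.5))
    for i in reversed(range(len(y))):            # upper factor, bottom-up
        y[i] += int(np.floor(A_upper[i, i + 1:] @ y[i + 1:] + 0.5))
    return y

def erm_inverse(y, A_lower, A_upper):
    x = y.astype(np.int64).copy()
    for i in range(len(x)):                      # undo upper factor, top-down
        x[i] -= int(np.floor(A_upper[i, i + 1:] @ x[i + 1:] + 0.5))
    for i in reversed(range(len(x))):            # undo lower factor, bottom-up
        x[i] -= int(np.floor(A_lower[i, :i] @ x[:i] + 0.5))
    return x

rng = np.random.default_rng(5)
L = np.tril(rng.standard_normal((4, 4)), -1)     # strictly lower part
U = np.triu(rng.standard_normal((4, 4)), 1)      # strictly upper part
x = rng.integers(-128, 128, size=4)
y = erm_forward(x, L, U)
assert np.array_equal(erm_inverse(y, L, U), x)   # perfectly reversible
print(x, y)
```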

19.
The wavelet transform can decompose images into various multiresolution subbands, and correlation exists among these subbands. A novel technique for image coding that takes advantage of this correlation is addressed. It is based on predictive edge detection from the LL band of the lowest resolution level to predict the edges in the LH, HL, and HH bands at the higher resolution level. If a coefficient is predicted to lie on an edge, it is preserved; otherwise, it is discarded. In the decoder, the locations of the preserved coefficients can be found exactly as in the encoder, so no overhead is needed. Instead of the complex vector quantization commonly used in subband image coding for high compression ratios, simple scalar quantization is used to code the remaining coefficients, and it achieves very good results.
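A same-size numpy toy of the prediction rule (the paper predicts across resolution levels, which would add an upscaling step): both encoder and decoder derive an edge map from LL, and high-band coefficients are preserved only at predicted edge locations, so no side information is transmitted.

```python
import numpy as np

def predict_edges_from_ll(ll, thresh=8.0):
    """Gradient-magnitude edge map computed in the LL band; a simplified
    stand-in for the paper's predictive edge detection."""
    gy, gx = np.gradient(ll.astype(np.float64))
    return np.hypot(gx, gy) > thresh

def prune_subband(hi, edge_map):
    """Keep high-band coefficients only where LL predicts an edge; the
    decoder rebuilds the same map from LL, so no overhead is needed."""
    return np.where(edge_map, hi, 0)

rng = np.random.default_rng(6)
ll = np.zeros((16, 16)); ll[:, 8:] = 100.0       # vertical step edge
lh = rng.standard_normal((16, 16)) * 5
kept = prune_subband(lh, predict_edges_from_ll(ll))
print(np.count_nonzero(kept), 'of', lh.size, 'coefficients preserved')
```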

20.
WDCT compression coding of color images based on three-dimensional matrices
桑爱军  陈贺新 《电子学报》2002,30(4):594-597
This paper studies three-dimensional matrices in depth and, for the first time, proposes a new wide discrete cosine transform based on three-dimensional matrices (3-D WDCT). It defines the transform's operation rules, proves its forward and inverse transforms and related properties, and applies it to the compression coding of color images. The three frames of a color image are projections of the same physical model and share the same textures, edges, and intensity variations. Taking full account of this structural similarity among the three frames, the paper represents all components of a color image in a unified mathematical model, so that the R, G, and B frames are for the first time compressed as a whole, removing inter-frame and intra-frame redundancy simultaneously, rather than processing each component independently as other methods do. Experimental results show that the reconstructed images improve markedly in compression ratio, signal-to-noise ratio, and visual quality. The method builds on and extends the mature discrete cosine transform technology and is well compatible with both JPEG and MPEG.
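Since the 3-D WDCT's operation rules are defined in the paper itself, the numpy sketch below substitutes a plain separable 3-D DCT over an 8×8×3 block to illustrate the underlying point: transforming the channel axis jointly with the spatial axes compacts the shared R/G/B structure into a few coefficients.

```python
import numpy as np

def dct_matrix(n):
    """Orthonormal DCT-II matrix of order n."""
    k, i = np.mgrid[0:n, 0:n]
    C = np.sqrt(2.0 / n) * np.cos(np.pi * (2 * i + 1) * k / (2 * n))
    C[0] /= np.sqrt(2.0)
    return C

def dct3(block):
    """Separable 3-D DCT over an (8, 8, 3) R/G/B block; a plain 3-D DCT
    standing in for the paper's 3-D WDCT."""
    C8, C3 = dct_matrix(block.shape[0]), dct_matrix(block.shape[2])
    return np.einsum('ai,bj,ck,ijk->abc', C8, C8, C3, block)

rng = np.random.default_rng(7)
luma = rng.standard_normal((8, 8)).cumsum(axis=0)
block = np.stack([luma, 0.9 * luma, 0.8 * luma], axis=-1)  # correlated R, G, B
coef = dct3(block)
# Nearly all energy lands in the first channel plane of the 3-D spectrum.
print(np.round((coef ** 2).sum(axis=(0, 1)) / (coef ** 2).sum(), 4))
```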
