Similar Documents
Found 20 similar documents (search time: 819 ms)
1.
Context-based, adaptive, lossless image coding   (Total citations: 3; self-citations: 0; others: 3)
We propose a context-based, adaptive, lossless image codec (CALIC). The codec obtains higher lossless compression of continuous-tone images than other lossless image coding techniques in the literature. This high coding efficiency is accomplished with relatively low time and space complexities. The CALIC puts heavy emphasis on image data modeling. A unique feature of the CALIC is the use of a large number of modeling contexts (states) to condition a nonlinear predictor and adapt the predictor to varying source statistics. The nonlinear predictor can correct itself via an error feedback mechanism by learning from its mistakes under a given context in the past. In this learning process, the CALIC estimates only the expectation of prediction errors conditioned on a large number of different contexts rather than estimating a large number of conditional error probabilities. The former estimation technique can afford a large number of modeling contexts without suffering from the context dilution problem of insufficient counting statistics, as in the latter approach, nor from excessive memory use. The low time and space complexities are also attributed to efficient techniques for forming and quantizing modeling contexts.
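To make the nonlinear predictor concrete, here is a minimal Python sketch of CALIC's gradient-adjusted prediction (GAP) step, following the neighbour naming and edge thresholds reported in the paper; the context formation and error-feedback correction stages are omitted, and a real codec would round the result to an integer.

```python
def gap_predict(W, N, NW, NE, WW, NN, NNE):
    """Gradient-adjusted prediction (GAP) from CALIC.

    Arguments are the causal neighbours of the current pixel:
    W = west, N = north, NW/NE = north-west/north-east,
    WW = west-west, NN = north-north, NNE = north-north-east.
    """
    dh = abs(W - WW) + abs(N - NW) + abs(N - NE)    # horizontal gradient estimate
    dv = abs(W - NW) + abs(N - NN) + abs(NE - NNE)  # vertical gradient estimate
    if dv - dh > 80:        # sharp horizontal edge: predict from the west
        return W
    if dh - dv > 80:        # sharp vertical edge: predict from the north
        return N
    pred = (W + N) / 2 + (NE - NW) / 4              # smooth-region prediction
    if dv - dh > 32:        # weaker edges: blend toward the stronger direction
        pred = (pred + W) / 2
    elif dv - dh > 8:
        pred = (3 * pred + W) / 4
    elif dh - dv > 32:
        pred = (pred + N) / 2
    elif dh - dv > 8:
        pred = (3 * pred + N) / 4
    return pred
```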

2.
Context-based lossless interband compression - extending CALIC   (Total citations: 14; self-citations: 0; others: 14)
This paper proposes an interband version of CALIC (context-based, adaptive, lossless image codec), which represents one of the best performing, practical, and general-purpose lossless image coding techniques known today. Interband coding techniques are needed for effective compression of multispectral images such as color images and remotely sensed images. It is demonstrated that CALIC's techniques for context modeling of DPCM errors lend themselves easily to modeling of higher-order interband correlations that cannot be exploited by simple interband linear predictors alone. The proposed interband CALIC exploits both interband and intraband statistical redundancies, and obtains significant compression gains over its intraband counterpart. On some types of multispectral images, interband CALIC can reduce the bit rate by more than 20% compared with intraband CALIC, while incurring only a modest increase in computational cost.
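The abstract does not reproduce the interband predictor itself; as a hedged illustration of the simple interband linear prediction that the paper improves upon, the sketch below fits a global gain/offset model between a reference band and the current band. The function name and the whole-band least-squares fit are illustrative assumptions, not the paper's method.

```python
import numpy as np

def interband_residual(cur_band, ref_band):
    """Predict one spectral band from a co-registered reference band with a
    least-squares gain/offset model and return the prediction residual.
    Interband CALIC goes further, switching between inter-/intraband modes
    and modeling higher-order correlations through contexts."""
    x = ref_band.astype(np.float64).ravel()
    y = cur_band.astype(np.float64).ravel()
    gain, offset = np.polyfit(x, y, 1)          # y ~ gain * x + offset
    pred = np.rint(gain * ref_band + offset)
    return cur_band - pred.astype(np.int64)     # residual to be entropy coded
```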

3.
Context-based modeling is an important step in high-performance lossless data compression. Effectively defining and utilizing contexts for natural images is, however, a difficult problem, primarily because of the huge number of contexts available in natural images, which typically results in higher modeling costs and reduced compression efficiency. Motivated by the prediction by partial matching (PPM) context model that has been very successful in text compression, we present prediction by partial approximate matching (PPAM), a method for compression and context modeling for images. Unlike the PPM modeling method, which uses exact contexts, PPAM introduces the notion of approximate contexts: it models the probability of the encoding symbol based on its previous contexts, where context occurrences are considered in an approximate manner. The proposed method has competitive compression performance compared with other popular lossless image compression algorithms, and performs particularly well on images that share common features, such as biomedical images.
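The following toy sketch illustrates the approximate-context idea on a 1-D symbol sequence: earlier positions whose preceding symbols all lie within a tolerance of the current context are counted as matches. It is an illustrative reduction (the order, tolerance, and 1-D setting are assumptions); PPAM itself works on 2-D image contexts with escape handling across context orders.

```python
from collections import Counter

def ppam_like_estimate(seq, pos, order=3, tol=8):
    """Estimate a distribution for seq[pos] from approximate context matches.
    Assumes pos >= order; returns None when no approximate match exists."""
    ctx = seq[pos - order:pos]
    counts = Counter()
    for i in range(order, pos):
        cand = seq[i - order:i]
        if all(abs(a - b) <= tol for a, b in zip(cand, ctx)):
            counts[seq[i]] += 1          # symbol that followed a near-match
    total = sum(counts.values())
    return {s: c / total for s, c in counts.items()} if total else None
```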

4.
A level-embedded lossless compression method for continuous-tone still images is presented. Level (bit-plane) scalability is achieved by separating the image into two layers before compression, and excellent compression performance is obtained by exploiting both spatial and inter-level correlations. A comparison of the proposed scheme with a number of scalable and non-scalable lossless image compression algorithms is performed to benchmark its performance. The results indicate that the level-embedded compression incurs only a small penalty in compression efficiency over non-scalable lossless compression, while offering the significant benefit of level-scalability.
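As a minimal sketch of level (bit-plane) layering, the functions below split integer pixels into a coarse most-significant layer and a low-bit refinement layer. The split point and the two-layer structure are generic assumptions; the paper's joint spatial/inter-level modeling is not shown.

```python
def split_levels(img, low_bits=2):
    """Separate an integer image (NumPy array or int) into a coarse layer
    (decodable on its own) and the least-significant refinement bits."""
    base = img >> low_bits
    refine = img & ((1 << low_bits) - 1)
    return base, refine

def merge_levels(base, refine, low_bits=2):
    """Lossless reconstruction from the two layers."""
    return (base << low_bits) | refine
```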

5.
LOCO-I (LOw COmplexity LOssless COmpression for Images) is the algorithm at the core of the new ISO/ITU standard for lossless and near-lossless compression of continuous-tone images, JPEG-LS. It is conceived as a "low complexity projection" of the universal context modeling paradigm, matching its modeling unit to a simple coding unit. By combining simplicity with the compression potential of context models, the algorithm "enjoys the best of both worlds." It is based on a simple fixed context model, which approaches the capability of the more complex universal techniques for capturing high-order dependencies. The model is tuned for efficient performance in conjunction with an extended family of Golomb (1966) type codes, which are adaptively chosen, and an embedded alphabet extension for coding of low-entropy image regions. LOCO-I attains compression ratios similar or superior to those obtained with state-of-the-art schemes based on arithmetic coding. Moreover, it is within a few percentage points of the best available compression ratios, at a much lower complexity level. We discuss the principles underlying the design of LOCO-I, and its standardization into JPEG-LS.
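Two core ingredients of LOCO-I/JPEG-LS can be sketched compactly: the fixed median edge detector (MED) predictor and a Golomb-Rice code (the power-of-two special case of the Golomb family it uses). Context modeling, bias correction, and the run mode for low-entropy regions are omitted here.

```python
def med_predict(a, b, c):
    """JPEG-LS median edge detector: a = west, b = north, c = north-west."""
    if c >= max(a, b):
        return min(a, b)   # likely horizontal edge above
    if c <= min(a, b):
        return max(a, b)   # likely vertical edge to the left
    return a + b - c       # smooth region: planar prediction

def fold_error(e):
    """Map signed prediction errors to non-negative integers for coding."""
    return 2 * e if e >= 0 else -2 * e - 1

def golomb_rice_encode(n, k):
    """Golomb-Rice code of parameter k: unary quotient, k raw remainder bits."""
    q, r = n >> k, n & ((1 << k) - 1)
    bits = '1' * q + '0'               # unary-coded quotient plus terminator
    if k:
        bits += format(r, f'0{k}b')    # remainder as k raw bits
    return bits
```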

6.
Ng K.S., Cheng L.M. Electronics Letters, 1999, 35(20): 1716-1717
A data re-ordering technique is proposed to enhance the compression performance of lossless LZW image compression on continuous-tone images. The proposed scheme incurs little overhead and functions as a preprocessing stage prior to actual compression. Compression performance is increased by an average of 18% for continuous-tone test images compared with the popular GIF compressor.

7.
The optimal predictors of a lifting scheme in the general n-dimensional case are obtained and applied for the lossless compression of still images using first quincunx sampling and then simple row-column sampling. In each case, the efficiency of the linear predictors is enhanced nonlinearly. Directional postprocessing is used in the quincunx case, and adaptive-length postprocessing in the row-column case. Both methods are seen to perform well. The resulting nonlinear interpolation schemes achieve extremely efficient image decorrelation. We further investigate context modeling and adaptive arithmetic coding of wavelet coefficients in a lossless compression framework. Special attention is given to the modeling contexts and the adaptation of the arithmetic coder to the actual data. Experimental evaluation shows that the best of the resulting coders produces better results than other known algorithms for multiresolution-based lossless image coding.
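For reference, one level of the reversible integer 5/3 lifting transform (the classic predict/update pair this line of work builds on) can be written as below; the paper replaces and nonlinearly enhances the fixed prediction step shown here. Even-length input and simple boundary clamping are assumed.

```python
def lifting_53_forward(x):
    """One 1-D level of the integer 5/3 lifting transform.
    Returns (approximation, detail) lists; assumes len(x) is even."""
    n = len(x)
    # Predict step: detail = odd sample minus floor-average of even neighbours.
    d = [x[2*i + 1] - (x[2*i] + x[min(2*i + 2, n - 2)]) // 2
         for i in range(n // 2)]
    # Update step: even samples corrected by neighbouring details.
    s = [x[2*i] + (d[max(i - 1, 0)] + d[i] + 2) // 4
         for i in range(n // 2)]
    return s, d
```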

8.
刘彦, 王玲. 《信息技术》, 2004, 28(4): 46-48
A lossless compression solution for fingerprint images is presented. Starting from the fundamentals of fingerprint data compression, the acquired fingerprint data are first subjected to DPCM linear prediction and then Huffman coded; after transmission over the channel, Huffman decoding and DPCM predictive decoding are performed. Within this encode/decode pipeline, the method goes beyond conventional DPCM linear prediction: exploiting the texture characteristics of fingerprint images, an orientation field specific to the fingerprint is constructed, and linear prediction is carried out along the direction vector of each pixel. Experimental results show that introducing direction vectors strengthens the prediction and substantially improves the compression ratio; the data and images demonstrate the practicality and soundness of the method.
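A simplified sketch of the direction-driven prediction described above: an orientation label (assumed to be precomputed from the ridge flow; the four-direction encoding and the border default are illustrative assumptions) selects which causal neighbour each pixel is predicted from, and the residuals would then be Huffman coded as in the paper.

```python
import numpy as np

def directional_dpcm(img, orient):
    """DPCM with a per-pixel prediction direction.
    orient[r, c] in {0: west, 1: north-west, 2: north, 3: north-east}."""
    img = img.astype(np.int32)
    offsets = {0: (0, -1), 1: (-1, -1), 2: (-1, 0), 3: (-1, 1)}
    h, w = img.shape
    pred = np.full_like(img, 128)            # default for border pixels
    for r in range(h):
        for c in range(w):
            dr, dc = offsets[int(orient[r, c])]
            rr, cc = r + dr, c + dc
            if 0 <= rr < h and 0 <= cc < w:
                pred[r, c] = img[rr, cc]     # causal neighbour along the ridge
    return img - pred                        # residuals for Huffman coding
```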

9.
Lossless compression of continuous-tone images   (Total citations: 3; self-citations: 0; others: 3)
In this paper, we survey some of the recent advances in lossless compression of continuous-tone images. The modeling paradigms underlying the state-of-the-art algorithms, and the principles guiding their design, are discussed in a unified manner. The algorithms are described and experimentally compared.

10.
Information analysis and data compression of multispectral images   (Total citations: 1; self-citations: 1; others: 0)
蒋青松, 王建宇. 《红外技术》, 2004, 26(1): 44-47, 51
Conditional entropy is first used to analyze the information redundancy of imaging-spectrometer multispectral images along the spatial and spectral dimensions. The results show that imaging-spectrometer images have strong correlation in the spatial dimension, while along the spectral dimension the image statistics are non-stationary and the correlation is somewhat weaker. An improved JPEG algorithm, I-JPEG, is then proposed by modifying the standard JPEG quantization table; it preserves image edge information well while achieving higher compression ratios. Finally, an I-JPEG/DPCM algorithm is proposed: building on I-JPEG, it uses lossless DPCM, which has good locality, to remove correlation along the spectral dimension, further improving compression performance.
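The redundancy measure used above is the conditional entropy between co-registered samples; a direct histogram-based estimate is sketched below (the bin count and the 2-D histogram estimator are implementation assumptions).

```python
import numpy as np

def conditional_entropy(x, y, bins=256):
    """Estimate H(X|Y) in bits from two co-registered sample arrays,
    using the identity H(X|Y) = H(X, Y) - H(Y)."""
    joint, _, _ = np.histogram2d(x.ravel(), y.ravel(), bins=bins)
    p_xy = joint / joint.sum()
    p_y = p_xy.sum(axis=0)                        # marginal of Y
    nz_xy, nz_y = p_xy > 0, p_y > 0
    h_xy = -np.sum(p_xy[nz_xy] * np.log2(p_xy[nz_xy]))
    h_y = -np.sum(p_y[nz_y] * np.log2(p_y[nz_y]))
    return h_xy - h_y
```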

11.
Owing to the limited bandwidth of wireless links, and given the distinctive characteristics of infrared image sequences, engineering practice requires lossless compression of such sequences. After comparing the compression performance and hardware-implementation complexity of four algorithms (JPEG 2000, SPIHT, JPEG + DPCM, and JPEG-LS), JPEG-LS was selected as the core compression algorithm for its superior performance and ease of hardware implementation, with a DSP and an FPGA as the core of the hardware platform. To verify the feasibility of the system...

12.
An image compression method based on a context model of wavelet coefficients   (Total citations: 2; self-citations: 3; others: 2)
A novel image compression method based on a context model of wavelet coefficients is proposed. The method forms contexts by quantizing a linear prediction of the current coefficient and then performs adaptive arithmetic coding. It also exploits the multiresolution property of the wavelet transform to compress images in a resolution-progressive manner, providing resolution scalability. Experimental results show that the lossless compression ratios obtained are higher than those of SPIHT and of the EBCOT coder used in JPEG 2000, that the compression ratios at each resolution are also higher than EBCOT's, and that compression time is shorter than EBCOT's.
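A minimal sketch of the conditioning step described above: the linear prediction of the current wavelet coefficient is scalar quantized into a small context index, and per-context adaptive counts then drive an arithmetic coder. The bin edges and class layout below are illustrative assumptions, not the paper's values.

```python
import numpy as np

def context_index(pred_value, thresholds=(2, 6, 14, 30)):
    """Quantize a predicted coefficient magnitude into one of 5 contexts."""
    return int(np.searchsorted(thresholds, abs(pred_value)))

class AdaptiveModel:
    """Per-context symbol counts that an adaptive arithmetic coder would use."""
    def __init__(self, n_contexts=5, n_symbols=256):
        self.counts = np.ones((n_contexts, n_symbols), dtype=np.int64)

    def prob(self, ctx, sym):
        return self.counts[ctx, sym] / self.counts[ctx].sum()

    def update(self, ctx, sym):
        self.counts[ctx, sym] += 1   # adapt the model to the data seen so far
```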

13.
Compression of multispectral remote sensing images must exploit both intraband (spatial) and interband (spectral) correlation. Based on an analysis of these correlations in multispectral images, this paper proposes a hybrid compression algorithm that combines segmented DPCM with SPIHT: segmented DPCM is first applied to remove interband redundancy, and the efficient SPIHT wavelet coder then encodes the prediction-error images. Experiments yield satisfactory results and demonstrate the effectiveness of the algorithm.

14.
The discrete cosine transform (DCT) has been shown to be an optimal encoder for sharp edges in an image (Andrew and Ogunbona, 1997). A conventional lossless coder employing differential pulse code modulation (DPCM) suffers from significant deficiencies in regions of discontinuity, because the simple model cannot capture the edge information. This problem can be partially solved by partitioning the image into blocks that are supposedly statistically stationary. A hybrid lossless adaptive DPCM (ADPCM)/DCT coder is presented, in which the edge blocks are encoded with the DCT and ADPCM is used for the non-edge blocks. The proposed scheme divides each input image into small blocks and classifies them, using shape vector quantisation (VQ), as either edge or smooth. The edge blocks are further vector quantised, and the side information of the coefficient matrix is saved through the shape-VQ index. Evaluation of the compression performance of the proposed method reveals its superiority over other lossless coders.
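As a rough stand-in for the shape-VQ classifier described above (the variance criterion and threshold below are illustrative assumptions, not the paper's classifier), blocks can be labeled edge or smooth before routing them to DCT or ADPCM coding:

```python
import numpy as np

def classify_blocks(img, block=8, var_thresh=100.0):
    """Label each block 'edge' or 'smooth' by local variance."""
    h, w = img.shape
    labels = {}
    for r in range(0, h - block + 1, block):
        for c in range(0, w - block + 1, block):
            blk = img[r:r + block, c:c + block].astype(np.float64)
            labels[(r, c)] = 'edge' if blk.var() > var_thresh else 'smooth'
    return labels
```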

15.
We present a novel lossless compression algorithm called Context Copy Combinatorial Code (C4), which integrates the advantages of two very disparate compression techniques: context-based modeling and Lempel-Ziv (LZ) style copying. While the algorithm can be applied to many lossless compression applications, such as document image compression, our primary target application has been lossless compression of integrated circuit layout image data. These images contain a heterogeneous mix of data: dense repetitive data better suited to LZ-style coding, and less dense structured data better suited to context-based encoding. As part of C4, we have developed a novel binary entropy coding technique called combinatorial coding, which is simultaneously as efficient as arithmetic coding and as fast as Huffman coding. Compression results show C4 outperforms JBIG, ZIP, BZIP2, and two-dimensional LZ, and achieves lossless compression ratios greater than 22 for binary layout image data, and greater than 14 for gray-pixel image data.
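Combinatorial (enumerative) coding of a binary block can be sketched as follows: the encoder transmits the block's popcount and then its lexicographic rank among all blocks of that length and popcount, which needs only ceil(log2 C(n, k)) bits. This is the classical enumerative-coding construction, shown here as an assumption-laden illustration rather than C4's exact bitstream layout.

```python
from math import comb

def combinatorial_rank(bits):
    """Lexicographic rank of a 0/1 sequence among all sequences of the
    same length and popcount (1 sorts after 0)."""
    n, k = len(bits), sum(bits)
    rank = 0
    for i, b in enumerate(bits):
        if b:
            # Every sequence with the same prefix but a 0 here ranks lower:
            rank += comb(n - i - 1, k)
            k -= 1
    return rank

# Example: among the six weight-2 sequences of length 4,
# combinatorial_rank([1, 0, 1, 0]) == 4 (i.e. '1010' is the fifth).
```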

16.
The near-lossless, i.e., lossy but high-fidelity, compression of medical images using the entropy-coded DPCM method is investigated. A source model with multiple contexts and arithmetic coding are used to enhance the compression performance of the method. In implementing the method, two different quantizers, each with a large number of quantization levels, are considered. Experiments involving several MR (magnetic resonance) and US (ultrasound) images show that the entropy-coded DPCM method can provide compression ratios in the range of 4 to 10 with a peak SNR of about 50 dB for 8-bit medical images. The use of multiple contexts is found to improve compression performance by about 25% to 30% for MR images and 30% to 35% for US images. A comparison with the JPEG standard reveals that the entropy-coded DPCM method can provide about 7 to 8 dB higher SNR for the same compression performance.
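The standard near-lossless construction quantizes each DPCM prediction error uniformly so that the reconstruction error never exceeds a bound delta. A minimal sketch follows; the paper evaluates two quantizers of this general kind, whose exact designs the abstract does not give.

```python
def nl_quantize(e, delta):
    """Quantize a prediction error with guaranteed max error <= delta."""
    step = 2 * delta + 1
    return (e + delta) // step if e >= 0 else -((-e + delta) // step)

def nl_dequantize(q, delta):
    """Reconstructed error; |e - nl_dequantize(nl_quantize(e, d), d)| <= d."""
    return q * (2 * delta + 1)
```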

17.
Near-lossless image compression based on the integer wavelet transform   (Total citations: 6; self-citations: 0; others: 6)
田金文, 柳斌, 柳健. 《电子学报》, 2000, 28(4): 64-68
This paper first discusses a general construction method for integer wavelets, and then applies block DPCM combined with the integer wavelet transform to the near-lossless compression of remote sensing images. The method allows real-time processing, is simple to implement in hardware, and can be parallelized. Experimental results show that it is an effective compression method for remote sensing images.
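The simplest reversible integer wavelet, the S (integer Haar) transform, illustrates the class of transforms discussed above; the paper's general construction and its block-DPCM stage are not reproduced here.

```python
def s_transform(a, b):
    """Integer Haar (S) transform of a sample pair:
    low-pass = floor average, high-pass = difference. Exactly reversible."""
    return (a + b) // 2, a - b

def s_transform_inverse(l, h):
    """Recover the original pair from (low, high)."""
    a = l + (h + 1) // 2
    return a, a - h
```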

18.
张雷, 杨阳. 《红外》, 2013, 34(12): 25-29
The Consultative Committee for Space Data Systems (CCSDS) recently released the CCSDS 123.0-B-1 standard, an international standard for lossless and near-lossless compression of spaceborne hyperspectral images. To reduce the storage required for image data, lower the transmission bandwidth, raise the transmission rate, and enable real-time transmission, this paper briefly introduces the new CCSDS standard and applies it to the large-volume problem of hyperspectral images. The FPGA (Field-Programmable Gate Array) implementation comprises the logic design of the predictor and of the encoder. Finally, an FPGA-based simulation of lossless hyperspectral image compression is presented, and the compression characteristics of the algorithm are compared with those of JPEG-LS. The results show that the CCSDS compression algorithm meets the requirement of a 2:1 compression ratio for hyperspectral images.

19.
Dictionary design for text image compression with JBIG2   (Total citations: 1; self-citations: 0; others: 1)
The JBIG2 standard for lossy and lossless bilevel image coding is a very flexible encoding strategy based on pattern matching techniques. This paper addresses the problem of compressing text images with JBIG2. For text image compression, JBIG2 allows two encoding strategies: SPM and PM&S. We compare in detail the lossless and lossy coding performance using the SPM-based and PM&S-based JBIG2, including their coding efficiency, reconstructed image quality, and system complexity. For the SPM-based JBIG2, we discuss the bit rate tradeoff associated with symbol dictionary design. We propose two symbol dictionary design techniques: the class-based and tree-based techniques. Experiments show that the SPM-based JBIG2 is a more efficient lossless system, leading to 8% higher compression ratios on average. It also provides better control over the reconstructed image quality in lossy compression. However, SPM's advantages come at the price of higher encoder complexity. The proposed class-based and tree-based symbol dictionary designs outperform simpler dictionary formation techniques by 8% for lossless and 16-18% for lossy compression.

20.
Cascaded differential and wavelet compression of chromosome images   (Total citations: 2; self-citations: 0; others: 2)
This paper proposes a new method for chromosome image compression based on an important characteristic of these images: the regions of interest (ROIs) to cytogeneticists for evaluation and diagnosis are well determined and segmented. Such information is utilized to advantage in our compression algorithm, which combines lossless compression of chromosome ROIs with lossy-to-lossless coding of the remaining image parts. This is accomplished by first performing a differential operation on chromosome ROIs for decorrelation, followed by critically sampled integer wavelet transforms on these regions and the remaining image parts. The well-known set partitioning in hierarchical trees (SPIHT) (Said and Pearlman, 1996) algorithm is modified to generate separate embedded bit streams for both chromosome ROIs and the rest of the image that allow continuous lossy-to-lossless compression of both (although lossless compression of the former is commonly used in practice). Experiments on two sets of sample chromosome spread and karyotype images indicate that the proposed approach significantly outperforms current compression techniques used in commercial karyotyping systems and JPEG-2000 compression, which does not provide the desirable support for lossless compression of arbitrary ROIs.
