Similar Documents
20 similar documents found.
1.
2.
In this paper, an adaptive predictive multiplicative autoregressive (APMAR) method is proposed for lossless medical image coding. In our method, an adaptive predictor improves the prediction accuracy for each encoded image block: each block is first predicted by whichever of the seven JPEG lossless-mode predictors or a local-mean predictor fits it best, since an adaptive predictor is more accurate than any fixed one. The residual values are then processed by the MAR model with Huffman coding. Comparisons with other methods [MAR, SMAR, adaptive JPEG (AJPEG)] on a series of test images show that our method is well suited to reversible medical image compression.
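To illustrate the per-block predictor selection described above, here is a minimal Python sketch (not the authors' APMAR implementation; the block size, the local-mean form, and the sum-of-absolute-residuals cost are assumptions):

    import numpy as np

    # The seven JPEG lossless-mode predictors plus a local-mean predictor
    # (a = left, b = above, c = above-left, standard JPEG notation).
    PREDICTORS = [
        lambda a, b, c: a,                 # 1: W
        lambda a, b, c: b,                 # 2: N
        lambda a, b, c: c,                 # 3: NW
        lambda a, b, c: a + b - c,         # 4: W + N - NW
        lambda a, b, c: a + (b - c) // 2,  # 5
        lambda a, b, c: b + (a - c) // 2,  # 6
        lambda a, b, c: (a + b) // 2,      # 7
        lambda a, b, c: (a + b + c) // 3,  # local mean (assumed form)
    ]

    def residuals(block, pred):
        # Apply one predictor to every pixel; out-of-block neighbours read as 0.
        h, w = block.shape
        res = np.zeros((h, w), dtype=np.int32)
        for y in range(h):
            for x in range(w):
                a = int(block[y, x - 1]) if x > 0 else 0
                b = int(block[y - 1, x]) if y > 0 else 0
                c = int(block[y - 1, x - 1]) if x > 0 and y > 0 else 0
                res[y, x] = int(block[y, x]) - pred(a, b, c)
        return res

    def best_predictor(block):
        # Choose the predictor with the smallest sum of absolute residuals.
        costs = [np.abs(residuals(block, p)).sum() for p in PREDICTORS]
        k = int(np.argmin(costs))
        return k, residuals(block, PREDICTORS[k])

    block = np.random.randint(0, 256, (8, 8))
    k, res = best_predictor(block)
    print("chosen predictor:", k, "|residual| sum:", np.abs(res).sum())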

3.
Recently, several efficient context-based arithmetic coding algorithms have been developed for lossless compression of error-diffused images. In this paper, we first present a novel block- and texture-based approach that trains multiple templates according to the most representative texture features. Based on the trained templates, we then present an efficient texture- and multiple-template-based (TM-based) algorithm for lossless compression of error-diffused images. The input image is divided into blocks, and for each block the best template is adaptively selected from the multiple templates according to that block's texture feature. On 20 test error-diffused images, run on a personal computer with an Intel Celeron 2.8-GHz CPU, experimental results demonstrate that the proposed TM-based algorithm improves the compression ratio by 24% over the Joint Bi-level Image Experts Group (JBIG) standard and by 19.4% over the previous block arithmetic coding for image compression (BACIC) algorithm of Reavy and Boncelet, at the cost of a small encoding-time increase of 0.365 s and 0.901 s on average, respectively. Under the same conditions, the proposed algorithm improves the compression ratio by 17.6% over the previous algorithm of Lee and Park, again with only a small encoding-time increase (0.775 s on average). In addition, the previous free tree-based algorithm requires 109.131 s of encoding time on average, whereas our algorithm takes 0.995 s; the average compression ratio of the proposed TM-based algorithm, 1.60, is quite competitive with that of the free tree-based algorithm, 1.62.
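As a rough sketch of per-block template selection (the two candidate templates and the entropy-based selection rule are illustrative assumptions, not the paper's trained multiple-template):

    import numpy as np
    from collections import Counter

    # Two candidate context templates (offsets relative to the current pixel).
    TEMPLATES = [
        [(-1, 0), (0, -1), (-1, -1)],          # compact neighbourhood
        [(0, -1), (0, -2), (-1, 0), (-2, 0)],  # longer horizontal/vertical reach
    ]

    def conditional_entropy(block, template):
        # Empirical H(pixel | context) of a binary block under one template.
        joint, ctx = Counter(), Counter()
        h, w = block.shape
        for y in range(h):
            for x in range(w):
                c = tuple(int(block[y+dy, x+dx]) if 0 <= y+dy < h and 0 <= x+dx < w
                          else 0 for dy, dx in template)
                joint[(c, int(block[y, x]))] += 1
                ctx[c] += 1
        n = h * w
        return -sum(v / n * np.log2(v / ctx[c]) for (c, _), v in joint.items())

    block = (np.random.rand(16, 16) > 0.7).astype(np.uint8)
    best = min(range(len(TEMPLATES)),
               key=lambda i: conditional_entropy(block, TEMPLATES[i]))
    print("template selected for this block:", best)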

4.
Based on the properties of binary arithmetic operations and the spatial probability distribution of the prediction error, a set of transformations is proposed for lossless image compression. Bit-plane coding is then applied to the transformed image. The transformations are easy to implement, and experimental results show that they are useful.
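A minimal sketch of the bit-plane decomposition applied to the transformed image (assuming 8-bit data; the proposed transformations themselves are not reproduced here):

    import numpy as np

    def bit_planes(img):
        # Split an 8-bit image into 8 binary planes, most significant first.
        return [(img >> b) & 1 for b in range(7, -1, -1)]

    img = np.random.randint(0, 256, (4, 4), dtype=np.uint8)
    planes = bit_planes(img)
    # Reassemble to confirm the decomposition is lossless.
    rebuilt = np.zeros(img.shape, dtype=np.uint16)
    for plane, b in zip(planes, range(7, -1, -1)):
        rebuilt += plane.astype(np.uint16) << b
    assert np.array_equal(rebuilt, img)
    print("8 planes, lossless round-trip OK")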

5.
We present a compound image compression algorithm for real-time transmission of computer screen images, called shape primitive extraction and coding (SPEC). Real-time image transmission requires that the compression algorithm not only achieve a high compression ratio but also have low complexity and provide excellent visual quality. SPEC first segments a compound image into text/graphics pixels and pictorial pixels, then compresses the text/graphics pixels with a new lossless coding algorithm and the pictorial pixels with standard lossy JPEG. The segmentation first classifies image blocks as picture or text/graphics blocks by thresholding the number of colors in each block, then extracts shape primitives of text/graphics from the picture blocks. A dynamic color palette that tracks recent text/graphics colors is used to separate small shape primitives of text/graphics from pictorial pixels. Shape primitives are also extracted from text/graphics blocks. All shape primitives from both block types are losslessly compressed with a combined shape-based and palette-based coding algorithm, and the resulting bitstream is fed into an LZW coder. Experimental results show that SPEC has very low complexity and provides visually lossless quality while maintaining competitive compression ratios.
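A minimal sketch of the first segmentation step, classifying blocks by thresholding the number of distinct colors (the block size and threshold are assumed values, not SPEC's):

    import numpy as np

    def classify_blocks(img, block=16, max_colors=32):
        # Label a block 'text/graphics' if it contains few distinct colours,
        # otherwise 'picture'.
        h, w, _ = img.shape
        labels = {}
        for y in range(0, h, block):
            for x in range(0, w, block):
                tile = img[y:y+block, x:x+block].reshape(-1, 3)
                n_colors = len(np.unique(tile, axis=0))
                labels[(y, x)] = "text/graphics" if n_colors <= max_colors else "picture"
        return labels

    screen = np.random.randint(0, 256, (32, 32, 3), dtype=np.uint8)
    print(classify_blocks(screen))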

6.
We present a novel lossless compression algorithm called Context Copy Combinatorial Code (C4), which integrates the advantages of two very disparate compression techniques: context-based modeling and Lempel-Ziv (LZ) style copying. While the algorithm can be applied to many lossless compression applications, such as document image compression, our primary target application has been lossless compression of integrated-circuit layout image data. These images contain a heterogeneous mix of data: dense repetitive data better suited to LZ-style coding, and less dense structured data better suited to context-based encoding. As part of C4, we have developed a novel binary entropy coding technique called combinatorial coding, which is simultaneously as efficient as arithmetic coding and as fast as Huffman coding. Compression results show that C4 outperforms JBIG, ZIP, BZIP2, and two-dimensional LZ, achieving lossless compression ratios greater than 22 for binary layout image data and greater than 14 for gray-pixel image data.
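Combinatorial (enumerative) coding can be illustrated by ranking a binary block among all blocks of the same length with the same number of ones; the sketch below is a textbook enumerative code, not necessarily C4's exact coder:

    from math import comb, ceil, log2

    def enum_encode(bits):
        # Lexicographic rank of the sequence among all sequences of the
        # same length with the same number of ones.
        rank, k = 0, sum(bits)
        n = len(bits)
        for i, b in enumerate(bits):
            if b:
                rank += comb(n - i - 1, k)  # sequences that put a 0 here instead
                k -= 1
        return sum(bits), rank              # (ones count, rank) is the codeword

    def enum_decode(n, ones, rank):
        bits, k = [], ones
        for i in range(n):
            zeros_first = comb(n - i - 1, k)
            if k > 0 and rank >= zeros_first:
                bits.append(1); rank -= zeros_first; k -= 1
            else:
                bits.append(0)
        return bits

    seq = [0, 1, 1, 0, 1, 0, 0, 1]
    ones, rank = enum_encode(seq)
    assert enum_decode(len(seq), ones, rank) == seq
    # The rank fits in ceil(log2(C(n, ones))) bits, matching the block entropy.
    print("ones:", ones, "rank:", rank,
          "rank bits:", ceil(log2(comb(len(seq), ones))))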

7.
This paper describes an online lossless data-compression method using adaptive arithmetic coding. To achieve good compression efficiency, we employ an adaptive fuzzy-tuning modeler that applies fuzzy inference to handle the problem of conditional probability estimation efficiently. In comparison with other lossless coding schemes, the compression results of the proposed method are good and satisfactory for various types of source data. Since we adopt a table-lookup approach for the fuzzy-tuning modeler, the design is simple, fast, and suitable for VLSI implementation.

8.
A lossless compression scheme for Bayer color filter array images
In most digital cameras, Bayer color filter array (CFA) images are captured, and demosaicing is generally carried out before compression. Recently, it was found that compression-first schemes outperform the conventional demosaicing-first schemes in terms of output image quality. An efficient prediction-based lossless compression scheme for Bayer CFA images is proposed in this paper. It exploits a context-matching technique to rank the neighboring pixels when predicting a pixel, an adaptive color-difference estimation scheme to remove the color spectral redundancy when handling red and blue samples, and an adaptive codeword generation technique that adjusts the divisor of the Rice code used to encode the prediction residues. Simulation results show that the proposed scheme achieves better compression performance than conventional lossless CFA image coding schemes.
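A minimal sketch of Rice coding for prediction residues (the signed-to-unsigned mapping is a common choice, and the divisor is fixed here, whereas the paper adapts it):

    def zigzag(e):
        # Map signed residue to non-negative integer: 0,-1,1,-2,2 -> 0,1,2,3,4.
        return 2 * e if e >= 0 else -2 * e - 1

    def rice_encode(v, k):
        # Rice code with divisor 2^k: unary quotient, then k-bit remainder.
        q, r = v >> k, v & ((1 << k) - 1)
        return "1" * q + "0" + format(r, "b").zfill(k)

    residues = [0, -1, 3, -4, 2]
    k = 2  # the paper adapts the divisor from local statistics; fixed here
    print([rice_encode(zigzag(e), k) for e in residues])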

9.
Since context-based adaptive binary arithmetic coding (CABAC), the entropy coding method in H.264/AVC, was originally designed for lossy video compression, it is not well suited to lossless video compression. Based on the statistical differences of residual data between lossy and lossless video compression, we propose an efficient differential pixel value coding method in CABAC for H.264/AVC lossless video compression. Considering the observed statistical properties of differential pixel values in lossless coding, we modify the CABAC encoding mechanism with a newly designed binarization table and context-modeling method. Experimental results show that the proposed method achieves approximately 12% bit savings compared to the original CABAC method in the H.264/AVC standard.
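For illustration, here is a standard H.264-style binarization of signed differential values (0th-order Exp-Golomb); the paper's redesigned binarization table differs:

    def signed_to_code_num(e):
        # H.264-style signed mapping: 0,1,-1,2,-2,... -> 0,1,2,3,4,...
        return 2 * e - 1 if e > 0 else -2 * e

    def exp_golomb(n):
        # 0th-order Exp-Golomb: (len-1) zeros, then the binary form of n+1.
        code = format(n + 1, "b")
        return "0" * (len(code) - 1) + code

    for e in [0, 1, -1, 5, -7]:
        print(e, "->", exp_golomb(signed_to_code_num(e)))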

10.
A lossless image coding algorithm based on the binary wavelet transform
This paper proposes an embedded lossless image coding algorithm based on the binary wavelet transform: the progressive partitioning binary wavelet-tree coder (PPBWC). PPBWC uses a hybrid coefficient-scanning method that orders the wavelet coefficients by magnitude to obtain an intermediate symbol sequence, and its non-causal adaptive context conditioning both exploits the self-similarity of wavelet coefficients across subbands and uses future information about the coefficients to be coded, which improves compression performance. Hybrid coefficient scanning and non-causal adaptive context conditioning are the main sources of PPBWC's coding efficiency. Experimental results show that, compared with other embedded lossless algorithms, PPBWC achieves the best lossless coding performance.

11.
An algorithm for compression of bilevel images
This paper presents the block arithmetic coding for image compression (BACIC) algorithm: a new method for lossless bilevel image compression that can replace JBIG, the current standard for bilevel image compression. BACIC uses the block arithmetic coder (BAC), a simple, efficient, easy-to-implement, variable-to-fixed arithmetic coder, to encode images. BACIC adapts its probability estimates using a 12-bit context of previous pixel values; the 12-bit context serves as an index into a probability table whose entries are used to compute p(1) (the probability of a bit equaling one), the probability measure BAC needs to compute a codeword. In contrast, the Joint Bilevel Image Experts Group (JBIG) standard uses a patented arithmetic coder, the IBM QM-coder, to compress image data and a predetermined probability table to estimate its probability measures. JBIG, though, has not yet been commercially implemented; instead, JBIG's predecessor, the Group 3 fax standard (G3), continues to be used. BACIC achieves compression ratios comparable to JBIG's and is introduced as an alternative to the JBIG and G3 algorithms. BACIC's overall compression ratio is 19.0 for the eight CCITT test images (compared to JBIG's 19.6 and G3's 7.7), 16.0 for 20 additional business-type documents (compared to JBIG's 16.0 and G3's 6.74), and 3.07 for halftone images (compared to JBIG's 2.75 and G3's 0.50).
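A minimal sketch of BACIC-style context modeling: a 12-bit context of previous pixels indexes adaptive counts from which p(1) is estimated (the template geometry and Laplace smoothing are assumptions, and an ideal code length stands in for the actual BAC coder):

    import numpy as np

    # A 12-pixel causal template (assumed geometry; BACIC's may differ).
    TEMPLATE = [(0, -1), (0, -2), (0, -3), (0, -4),
                (-1, -2), (-1, -1), (-1, 0), (-1, 1), (-1, 2),
                (-2, -1), (-2, 0), (-2, 1)]

    def ideal_bits_per_pixel(img):
        # Adaptive estimate of p(1) per 12-bit context; accumulates the
        # ideal code length -log2(p) an arithmetic/BAC coder would approach.
        ones = np.ones(1 << 12)   # Laplace-smoothed counts
        zeros = np.ones(1 << 12)
        cost = 0.0
        h, w = img.shape
        for y in range(h):
            for x in range(w):
                ctx = 0
                for dy, dx in TEMPLATE:
                    yy, xx = y + dy, x + dx
                    bit = int(img[yy, xx]) if 0 <= yy < h and 0 <= xx < w else 0
                    ctx = (ctx << 1) | bit
                p1 = ones[ctx] / (ones[ctx] + zeros[ctx])
                cost -= np.log2(p1 if img[y, x] else 1.0 - p1)
                if img[y, x]:
                    ones[ctx] += 1
                else:
                    zeros[ctx] += 1
        return cost / (h * w)

    img = (np.random.rand(64, 64) > 0.9).astype(np.uint8)
    print("estimated bits/pixel:", round(ideal_bits_per_pixel(img), 3))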

12.
Context-based adaptive variable length coding (CAVLC) and context-based adaptive binary arithmetic coding (CABAC) are the entropy coding methods employed in the H.264/AVC standard. Since these entropy coders were originally designed for encoding residual data consisting of zigzag-scanned, quantized transform coefficients, they cannot provide adequate coding performance for lossless video coding, where the residual data are not quantized transform coefficients but the differences between original and predicted pixel values. Therefore, considering the statistical characteristics of residual data in lossless video coding, we redesign each entropy coding method based on the conventional entropy coders of H.264/AVC. Experimental results verify that the proposed method provides not only a bit saving of 8% but also reduced computational complexity compared to the current H.264/AVC lossless coding mode.

13.
The optimal predictors of a lifting scheme in the general n-dimensional case are derived and applied to the lossless compression of still images, first with quincunx sampling and then with simple row-column sampling. In each case, the efficiency of the linear predictors is enhanced nonlinearly: directional postprocessing is used in the quincunx case and adaptive-length postprocessing in the row-column case. Both methods perform well, and the resulting nonlinear interpolation schemes achieve extremely efficient image decorrelation. We further investigate context modeling and adaptive arithmetic coding of the wavelet coefficients in a lossless compression framework, with special attention to the modeling contexts and the adaptation of the arithmetic coder to the actual data. Experimental evaluation shows that the best of the resulting coders produces better results than other known algorithms for multiresolution-based lossless image coding.
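A minimal one-dimensional lifting step with a neighbour-average predictor (floating-point for clarity; lossless coders use integer lifting with rounded predict/update terms, and the update weight here is an arbitrary choice, not the paper's optimal predictor):

    import numpy as np

    def lifting_forward(x):
        # Split into even/odd samples, predict each odd sample from the mean
        # of its even neighbours, then update the evens with the residual.
        even, odd = x[0::2].astype(float), x[1::2].astype(float)
        detail = odd - (even + np.roll(even, -1)) / 2   # predict step
        approx = even + detail / 2                      # update step
        return approx, detail

    def lifting_inverse(approx, detail):
        even = approx - detail / 2
        odd = detail + (even + np.roll(even, -1)) / 2
        out = np.empty(2 * len(even))
        out[0::2], out[1::2] = even, odd
        return out

    x = np.array([3, 5, 4, 8, 7, 9, 6, 2])
    approx, detail = lifting_forward(x)
    assert np.allclose(lifting_inverse(approx, detail), x)  # perfect reconstruction
    print("detail (prediction residuals):", detail)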

14.
An encoding technique called multilevel block truncation coding is described; it preserves spatial detail in digital images while achieving a reasonable compression ratio. An adaptive quantizer-level allocation scheme is introduced that minimizes the maximum quantization error in each block and substantially reduces the computational complexity of allocating optimal quantization levels. A 3.2:1 compression can be achieved by multilevel block truncation coding itself. The truncated, or requantized, data are further compressed in a second pass combining predictive coding, entropy coding, and vector quantization; this second pass can be lossless or lossy. The total compression ratios are about 4.1:1 for lossless second-pass compression and 6.2:1 for lossy second-pass compression. The subjective results of the coding algorithm are quite satisfactory, with no perceived visual degradation.
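A minimal k-level block truncation sketch: evenly spaced levels spanning the block's range bound the maximum error by half the level spacing (this is not the paper's adaptive allocation scheme):

    import numpy as np

    def multilevel_btc(block, k=4):
        # Quantize a block to k evenly spaced levels spanning its range;
        # nearest-level rounding bounds the max error by half the spacing.
        levels = np.linspace(block.min(), block.max(), k)
        idx = np.argmin(np.abs(block[..., None] - levels), axis=-1)
        return idx, levels          # the indices and levels are what get coded

    block = np.random.randint(0, 256, (4, 4))
    idx, levels = multilevel_btc(block)
    recon = levels[idx]
    print("max quantization error:", np.abs(recon - block).max())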

15.
Embedded image coding using zerotrees of wavelet coefficients
The embedded zerotree wavelet algorithm (EZW) is a simple, yet remarkably effective, image compression algorithm with the property that the bits in the bit stream are generated in order of importance, yielding a fully embedded code. The embedded code represents a sequence of binary decisions that distinguish an image from the “null” image. Using an embedded coding algorithm, an encoder can terminate the encoding at any point, thereby allowing a target rate or target distortion metric to be met exactly. Also, given a bit stream, the decoder can cease decoding at any point and still produce exactly the same image that would have been encoded at the bit rate corresponding to the truncated bit stream. In addition to producing a fully embedded bit stream, EZW consistently produces compression results that are competitive with virtually all known compression algorithms on standard test images, yet this performance is achieved with a technique that requires no training, no pre-stored tables or codebooks, and no prior knowledge of the image source. The EZW algorithm is based on four key concepts: (1) a discrete wavelet transform, or hierarchical subband decomposition; (2) prediction of the absence of significant information across scales by exploiting the self-similarity inherent in images; (3) entropy-coded successive-approximation quantization; and (4) universal lossless data compression achieved via adaptive arithmetic coding.
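The zerotree test at the heart of concept (2) can be sketched as follows (standard dyadic parent-child indexing assumed; the DC band is handled separately in the real algorithm):

    import numpy as np

    def is_zerotree_root(coeffs, y, x, T):
        # True if coefficient (y, x) and all its descendants in the dyadic
        # subband layout are insignificant with respect to threshold T.
        n = coeffs.shape[0]
        stack = [(y, x)]
        while stack:
            cy, cx = stack.pop()
            if abs(coeffs[cy, cx]) >= T:
                return False
            # Children of (cy, cx) sit at (2cy, 2cx) .. (2cy+1, 2cx+1);
            # (0, 0) is the DC coefficient and is excluded here.
            if (cy or cx) and 2 * cy + 1 < n and 2 * cx + 1 < n:
                stack += [(2*cy, 2*cx), (2*cy, 2*cx + 1),
                          (2*cy + 1, 2*cx), (2*cy + 1, 2*cx + 1)]
        return True

    coeffs = np.random.randn(8, 8) * 4
    print("zerotree root at (1,1) for T=8:", is_zerotree_root(coeffs, 1, 1, 8.0))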

16.
Prioritized DCT for compression and progressive transmission of images
This approach is based on the block discrete cosine transform (DCT). Its novelty is that the transform coefficients of all image blocks are coded and transmitted in order of decreasing absolute magnitude. The resulting ordered-by-magnitude transmission is accomplished without sacrificing coding efficiency by using partition priority coding. Coding and transmission adapt to the characteristics of each individual image and are therefore very efficient. Another advantage of this approach is its high progression effectiveness: since the largest transform coefficients, which capture the most important characteristics of an image, are coded and transmitted first, the method is well suited to progressive image transmission. Further compression of the image data is achieved by multiple-distribution entropy coding, a technique based on arithmetic coding. Experiments show that the approach compares favorably with previously reported DCT and subband image codecs.
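A minimal sketch of the magnitude-ordered transmission: all block coefficients are pooled and sent largest first (the partition priority coding that makes this order cheap to signal is not reproduced):

    import numpy as np

    def transmission_order(coeff_blocks):
        # List every DCT coefficient of every block as (block, row, col, value),
        # sorted by decreasing magnitude - the order in which they are sent.
        entries = [(b, y, x, blk[y, x])
                   for b, blk in enumerate(coeff_blocks)
                   for y in range(blk.shape[0])
                   for x in range(blk.shape[1])]
        return sorted(entries, key=lambda e: -abs(e[3]))

    blocks = [np.random.randn(4, 4) * 10 for _ in range(3)]
    order = transmission_order(blocks)
    print("first sent:", [(b, y, x, round(v, 1)) for b, y, x, v in order[:4]])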

17.
The discrete cosine transform (DCT) has been shown to be an optimal encoder for sharp edges in an image (Andrew and Ogunbona, 1997). A conventional lossless coder employing differential pulse code modulation (DPCM) suffers from significant deficiencies in regions of discontinuity, because its simple model cannot capture the edge information. This problem can be partially solved by partitioning the image into blocks that are assumed statistically stationary. A hybrid lossless adaptive DPCM (ADPCM)/DCT coder is presented, in which edge blocks are encoded with the DCT and ADPCM is used for the non-edge blocks. The proposed scheme divides each input image into small blocks and classifies them, using shape vector quantisation (VQ), as either edge or smooth. The edge blocks are further vector quantised, and the side information of the coefficient matrix is conveyed through the shape-VQ index. Evaluation of the compression performance of the proposed method reveals its superiority over other lossless coders.

18.
A new approach to black-and-white image compression is described with which the eight CCITT test documents can be compressed losslessly 20-30 percent better than with the best existing compression algorithms. The coding and modeling aspects are treated separately, and the key to these improvements is an efficient binary arithmetic code. The code is relatively simple to implement because it avoids the multiplication operation inherent in some earlier arithmetic codes. Arithmetic coding permits the compression of binary sequences whose statistics change on a bit-to-bit basis. Model statistics are studied under stationary, stationary adaptive, and nonstationary adaptive assumptions.

19.
This study introduces error control into block arithmetic coding for image compression (BACIC), a new method for lossless bilevel image compression. BACIC can successfully transmit bilevel images over channels with bit error rates as high as 10^-3 while providing compression ratios twice those of G3, the only facsimile standard that incorporates error control into its algorithm.

20.
Combining classical lossless predictive coding with the idea of variable-length coding, an adaptive predictive grouped coding method is proposed. Adaptive predictive coding effectively decorrelates images with differing texture characteristics, and grouped coding that exploits the distribution of the image data then achieves lossless compression. Experimental results show that this method offers both a high compression ratio and high compression efficiency.
