Similar Documents
1.
Tree coding of bilevel images   Cited by: 1 (self-citations: 0, others: 1)
Presently, sequential tree coders are the best general-purpose bilevel image coders and the best coders of halftoned images. The current ISO standard, Joint Bilevel Image Experts Group (JBIG), is a good example. A sequential tree coder encodes the data by feeding estimates of conditional probabilities to an arithmetic coder. The conditional probabilities are estimated from co-occurrence statistics of past pixels, and the statistics are stored in a tree. By organizing the code length calculations properly, a vast number of possible models (trees) reflecting different pixel orderings can be investigated within a reasonable time prior to generating the code. A number of general-purpose coders are constructed according to this principle. Rissanen's (1989) one-pass algorithm, Context, is presented in two modified versions. The baseline is proven to be a universal coder. The faster version, which is one order of magnitude slower than JBIG, obtains excellent and highly robust compression performance. A multipass free tree coding scheme produces superior compression results for all test images. A multipass free template coding scheme produces significantly better results than JBIG for difficult images such as halftones. By utilizing randomized subsampling in the template selection, the speed becomes acceptable for practical image coding.
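A minimal sketch of the mechanism this abstract describes, with an assumed 3-pixel causal template and a Laplace (+1) estimator standing in for the paper's tree-stored statistics; it computes the ideal code length an arithmetic coder fed these estimates would approach:

```python
# Sketch only: conditional probabilities of bilevel pixels estimated from
# co-occurrence counts indexed by a causal template, as a sequential
# tree/context coder would do before driving an arithmetic coder.
import math
from collections import defaultdict

def context_of(img, r, c):
    """Causal 3-pixel template (W, N, NW); pixels outside the image read as 0."""
    def get(y, x):
        return img[y][x] if 0 <= y < len(img) and 0 <= x < len(img[0]) else 0
    return (get(r, c - 1), get(r - 1, c), get(r - 1, c - 1))

def ideal_code_length(img):
    """Sum of -log2 P(pixel | context) with a Laplace (+1) estimator, i.e.
    the code length an ideal arithmetic coder would approach."""
    counts = defaultdict(lambda: [1, 1])  # context -> [count of 0s, count of 1s]
    bits = 0.0
    for r in range(len(img)):
        for c in range(len(img[0])):
            ctx, px = context_of(img, r, c), img[r][c]
            n0, n1 = counts[ctx]
            bits += -math.log2((n1 if px else n0) / (n0 + n1))
            counts[ctx][px] += 1  # one-pass adaptive update
    return bits
```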

2.
An adaptive minimum-code-length entropy coding method is proposed: context modeling is performed with weighted conditional probability distributions, and the image is then encoded with an arithmetic coder to achieve compression.
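A hedged sketch of the weighting step, under the assumption (not stated in the abstract) that several conditional models are blended into one distribution for the arithmetic coder; the function name and form are illustrative:

```python
# Illustrative only, not the paper's formula: blend conditional distributions
# from several context models into the single distribution handed to the
# arithmetic coder.
def weighted_model(dists, weights):
    """dists: list of P(symbol | context) dicts; weights sum to 1."""
    symbols = set().union(*dists)
    return {s: sum(w * d.get(s, 0.0) for d, w in zip(dists, weights))
            for s in symbols}
```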

3.
Mutual information-based analysis of JPEG2000 contexts   Cited by: 4 (self-citations: 0, others: 4)
Context-based arithmetic coding has been widely adopted in image and video compression and is a key component of the new JPEG2000 image compression standard. In this paper, the contexts used in JPEG2000 are analyzed using the mutual information, which is closely related to the compression performance. We first show that, when combining the contexts, the mutual information between the contexts and the encoded data will decrease unless the conditional probability distributions of the combined contexts are the same. Given I, the initial number of contexts, and F, the final desired number of contexts, there are S(I, F) possible context classification schemes where S(I, F) is called the Stirling number of the second kind. The optimal classification scheme is the one that gives the maximum mutual information. Instead of using an exhaustive search, the optimal classification scheme can be obtained through a modified generalized Lloyd algorithm with the relative entropy as the distortion metric. For binary arithmetic coding, the search complexity can be reduced by using dynamic programming. Our experimental results show that the JPEG2000 contexts capture the correlations among the wavelet coefficients very well. At the same time, the number of contexts used as part of the standard can be reduced without loss in the coding performance.
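A small worked example of the abstract's criterion: mutual information between contexts and coded symbols, computed from joint counts. Merging two contexts whose conditional distributions differ can only lower it (the data-processing inequality); the count table below is made up:

```python
# I(C;X) from a joint count table: joint[c][x] = count of symbol x in context c.
import math

def mutual_information(joint):
    total = sum(sum(row) for row in joint)
    pc = [sum(row) / total for row in joint]
    px = [sum(joint[c][x] for c in range(len(joint))) / total
          for x in range(len(joint[0]))]
    mi = 0.0
    for c, row in enumerate(joint):
        for x, n in enumerate(row):
            if n:
                pcx = n / total
                mi += pcx * math.log2(pcx / (pc[c] * px[x]))
    return mi

counts = [[90, 10], [60, 40], [58, 42]]  # three contexts, binary symbol
merged = [[90, 10], [118, 82]]           # contexts 1 and 2 merged
print(mutual_information(counts) >= mutual_information(merged))  # True
```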

4.
In this paper, an accurate rate model is proposed for inter-frame coding in high-efficiency video coding, which is useful for rate control. The proposed model considers the effect of entropy coding, where inter-symbol dependency is exploited in context-adaptive binary arithmetic coding (CABAC) to save coded bits. The mutual information is first predicted to measure the reduction of uncertain information in CABAC, and the conditional entropy is then calculated to estimate the output bit-rate of inter-frame residues. Since the source characteristics also significantly affect the construction of the rate model, a joint Laplacian source distribution at the transform-unit level is employed in the proposed rate model. The experimental results show that the proposed model achieves better rate-distortion performance in rate control. The proposed approach can also be extended to other video codecs that use CABAC for the design of rate models.
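The abstract's estimate can be written compactly in hedged form (notation assumed here, not the paper's own symbols): the rate is modeled as the conditional entropy of the residual symbols given the CABAC contexts, with the Laplacian model supplying the entropy term.

```latex
% X: inter-frame residual symbols, C: CABAC contexts (assumed notation).
% Rate as conditional entropy = entropy minus the mutual information
% captured by the contexts:
\[
  R \;\approx\; H(X \mid C) \;=\; H(X) - I(X;C).
\]
% With a Laplacian residual model f(x) = (\lambda/2)\, e^{-\lambda |x|} and
% quantization step \Delta, the high-rate approximation of the entropy is
\[
  H(X) \;\approx\; h(f) - \log_2 \Delta
       \;=\; \log_2\!\frac{2e}{\lambda} - \log_2 \Delta .
\]
```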

5.
A novel scheme for low-power image coding and decoding based on classified vector quantisation is presented. The main idea is the replacement of memory accesses to large background memories (the most power-consuming operations) by arithmetic and/or application-specific computations. Specifically, the proposed image coding scheme uses small sub-codebooks to reduce the memory requirements and memory-related power consumption in comparison with classical vector quantisation schemes. By applying simple transformations to the codewords during coding, the proposed scheme extends the small sub-codebooks, compensating for the quality degradation introduced by their small size. Thus, the main coding task becomes computation-based rather than memory-based, leading to a significant reduction in power consumption. The proposed scheme achieves image qualities comparable with, or better than, those of traditional vector quantisation schemes, as the parameters of the transformations depend on the image block under coding, and the small sub-codebooks are dynamically adapted each time to this specific image block. The main disadvantage of the proposed scheme is a decrease in the compression ratio in comparison with classical vector quantisation. A joint (quality-compression ratio) optimisation procedure is used to keep this side-effect as small as possible.
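A minimal sketch of the codebook-extension idea, assuming a simple scale set as the per-block transformation; the real scheme's transformations are block-adaptive and richer:

```python
# Sketch (assumed details): a small sub-codebook is extended on the fly with
# cheap transformations of each codeword, trading memory accesses for
# arithmetic computations.
import numpy as np

def encode_block(block, sub_codebook, scales=(0.5, 1.0, 2.0)):
    """Return (codeword index, scale) minimizing MSE; the scale set stands in
    for the paper's block-adaptive transformations."""
    best = (None, None, float("inf"))
    for i, cw in enumerate(sub_codebook):
        for s in scales:
            err = float(np.sum((block - s * cw) ** 2))
            if err < best[2]:
                best = (i, s, err)
    return best[:2]
```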

6.
JPEG2000 is a recently standardized image compression algorithm. The heart of this algorithm is the coding scheme known as embedded block coding with optimal truncation (EBCOT), which accounts for the majority of the processing time of the compression algorithm. The EBCOT scheme consists of a bit-plane coder coupled to an MQ arithmetic coder. Recent bit-plane coder architectures are capable of producing symbols at a higher rate than the existing MQ arithmetic coders can absorb. Thus, there is a requirement for a high-throughput MQ arithmetic coder. We examine the existing MQ arithmetic coder architectures and develop novel techniques capable of absorbing the high symbol rate from high-performance bit-plane coders, as well as providing flexible design choices.

7.
An efficient coding system for long source sequences   Cited by: 2 (self-citations: 0, others: 2)
The Elias source coding scheme is modified to permit a source sequence of practically unlimited length to be coded as a single codeword using arithmetic of only limited precision. The result is shown to be a nonblock arithmetic code of the first-in, first-out (FIFO) type: source symbols are decoded in the same order as they were encoded. Codeword lengths which are near optimum for the specified statistical properties of the source can be achieved. Explicit encoding and decoding algorithms are provided which effectively implement the coding scheme. Applications to data compression and cryptography are suggested.
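A didactic sketch of a limited-precision FIFO arithmetic encoder in the spirit of this abstract, using the classic integer renormalization with pending ("underflow") bits as in the Witten/Neal/Cleary formulation; it is not the paper's exact algorithm:

```python
# Simplified integer-precision arithmetic encoder: 16-bit interval, symbols
# emitted first-in first-out, precision never grows with sequence length.
TOP, HALF, QUARTER = (1 << 16) - 1, 1 << 15, 1 << 14

def encode(bits, p0=0.5):
    """Encode a sequence of binary symbols with fixed P(0) = p0."""
    low, high, pending, out = 0, TOP, 0, []

    def emit(bit):
        nonlocal pending
        out.append(bit)
        out.extend([1 - bit] * pending)  # release carry-pending bits
        pending = 0

    for b in bits:
        span = high - low + 1
        split = low + int(span * p0) - 1
        low, high = (low, split) if b == 0 else (split + 1, high)
        while True:  # renormalize so limited precision always suffices
            if high < HALF:
                emit(0)
            elif low >= HALF:
                emit(1); low -= HALF; high -= HALF
            elif low >= QUARTER and high < HALF + QUARTER:
                pending += 1; low -= QUARTER; high -= QUARTER
            else:
                break
            low, high = 2 * low, 2 * high + 1
    pending += 1
    emit(0 if low < QUARTER else 1)  # flush: disambiguate the final interval
    return out
```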

8.
A hyperspectral image compression method based on inter-frame decorrelation   Cited by: 7 (self-citations: 1, others: 6)
Addressing the characteristics of hyperspectral images and the practical requirements of hardware implementation, a wavelet-based hyperspectral image compression algorithm with forward-prediction inter-frame decorrelation is proposed. Image matching and inter-frame decorrelation remove the redundancy between frames of the hyperspectral image; the residual images are compressed with a fast wavelet-based bit-plane coder combined with adaptive arithmetic coding, and the output bitstream is controlled according to a rate-distortion criterion, achieving high-fidelity compression of hyperspectral images. Experiments demonstrate the effectiveness of the scheme: the fast bit-plane coder with adaptive arithmetic coding is faster than SPIHT and is easy to implement in hardware.

9.
In the process of quantisation, a lattice vector quantiser (LVQ) generates radius and index sequences. In lossless coding, the radius sequence is run-length coded and then Huffman or arithmetic coded, while the index sequence is represented with a fixed number of binary bits. The author improves LVQ lossless coding by removing the redundant information between the radius sequence and the index sequence. An algorithm is developed that redistributes the radius and index sequences: it adaptively shifts large indices down to smaller values and reduces the index bits. Hence, the proposed LVQ lossless coding method narrows the gap between actual coding bit rates and the optimal bit-rate boundary. For a Laplacian source, the proposed lossless coding scheme achieves more than a 10% bit reduction at bit rates above 0.7 bits/sample over the traditional lossless coding method.

10.
This paper describes an online lossless data-compression method using adaptive arithmetic coding. To achieve good compression efficiency, we employ an adaptive fuzzy-tuning modeler that applies fuzzy inference to deal efficiently with the problem of conditional probability estimation. In comparison with other lossless coding schemes, the compression results of the proposed method are good and satisfactory for various types of source data. Since we adopt a table-lookup approach for the fuzzy-tuning modeler, the design is simple, fast, and suitable for VLSI implementation.

11.
Wu Meng, Gu Fangzhu, Zhu Qi, Ye Weiming. Signal Processing, 2005, 21(3): 257-260
This paper discusses the coding scheme of 802.16a and studies the RS-CC (Reed-Solomon plus convolutional code) encoding and decoding algorithms. The RS decoding algorithm is improved to suit the characteristics of a DSP implementation, a DSP design approach and implementation for the RS-CC codec is given, and the codec is finally implemented on a TMS320C6201. The decoding rate reaches 600 kb/s.

12.
On entropy-constrained vector quantization using Gaussian mixture models   Cited by: 2 (self-citations: 0, others: 2)
A flexible and low-complexity entropy-constrained vector quantizer (ECVQ) scheme based on Gaussian mixture models (GMMs), lattice quantization, and arithmetic coding is presented. The source is assumed to have the probability density function of a GMM. An input vector is first classified to one of the mixture components, and the Karhunen-Loève transform of the selected mixture component is applied to the vector, followed by quantization using a lattice-structured codebook. Finally, the scalar elements of the quantized vector are entropy coded sequentially using a specially designed arithmetic coder. The computational complexity of the proposed scheme is low and independent of the coding rate in both the encoder and the decoder. Therefore, the proposed scheme serves as a lower-complexity alternative to the GMM-based ECVQ proposed by Gardner, Subramaniam and Rao [1]. The performance of the proposed scheme is analyzed under a high-rate assumption and quantified for a given GMM. The practical performance of the scheme was evaluated through simulations on both synthetic and speech line spectral frequency (LSF) vectors. For LSF quantization, the proposed scheme has a performance comparable to [1] at rates relevant for speech coding (20-28 bits per vector) with lower computational complexity.
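A pipeline sketch of the encoder stages named in the abstract (classification, component KLT, lattice quantization), with a plain Z^n lattice and made-up shapes as assumptions; the entropy-coding stage is only indicated:

```python
# Sketch only: classify to a mixture component, decorrelate with that
# component's KLT, quantize on a scaled integer lattice.
import numpy as np

def gmm_ecvq_encode(x, means, covs, weights, step=0.05):
    # 1. Classify: pick the component with the highest weighted density.
    def logpdf(v, m, c):
        d = v - m
        return -0.5 * (d @ np.linalg.solve(c, d) + np.log(np.linalg.det(c)))
    k = int(np.argmax([np.log(w) + logpdf(x, m, c)
                       for w, m, c in zip(weights, means, covs)]))
    # 2. KLT of the selected component (eigenvectors of its covariance).
    _, U = np.linalg.eigh(covs[k])
    y = U.T @ (x - means[k])
    # 3. Lattice quantization: Z^n used here as the simplest stand-in.
    q = np.round(y / step).astype(int)
    return k, q  # sequential entropy coding of (k, q) would follow
```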

13.
This letter proposes a complexity reduction method to speed up the noiseless decoding of a bit-sliced arithmetic coding (BSAC) decoder. The scheme fully utilizes the group of consecutive arithmetic-coded symbols known as the decoding band and the significance tree structure sorted in order of significance at every decoding band. With the same audio quality, the proposed method reduces the number of calculations performed during noiseless decoding in BSAC to about 22% of that of the conventional full-search method.

14.
We propose a novel adaptive arithmetic coding method that uses dual symbol sets: a primary symbol set that contains all the symbols that are likely to occur in the near future and a secondary symbol set that contains all other symbols. The simplest implementation of our method assumes that symbols that have appeared previously are highly likely to appear in the near future. It therefore fills the primary set with symbols that have occurred previously. Symbols move dynamically between the two symbol sets to adapt to the local statistics of the symbol source. The proposed method works well for sources, such as images, that are characterized by large alphabets and alphabet distributions that are skewed and highly nonstationary. We analyze the performance of the proposed method and compare it to other arithmetic coding methods, both theoretically and experimentally. We show experimentally that in certain contexts, e.g., with a wavelet-based image coding scheme that has appeared in the literature, the compression performance of the proposed method is better than that of the conventional arithmetic coding method and the zero-frequency escape arithmetic coding method.
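A hedged sketch of the dual-set bookkeeping; the capacity, the promotion rule, and the arbitrary demotion below are assumptions, since the abstract does not fix them:

```python
# Sketch: code symbols from a small primary set directly; escape to the
# secondary set for everything else, then promote the new symbol so it is
# cheap the next time it occurs.
def classify_and_update(symbol, primary, secondary, capacity=16):
    """Return 'primary' or ('escape', symbol) and move symbols between sets."""
    if symbol in primary:
        return "primary"
    secondary.discard(symbol)
    primary.add(symbol)              # promote: assumed likely to recur soon
    if len(primary) > capacity:      # demote an arbitrary symbol (a real
        secondary.add(primary.pop()) # scheme would track recency instead)
    return ("escape", symbol)
```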

15.
Research on CABAC-based digital video encryption in the H.264/AVC standard   Cited by: 2 (self-citations: 0, others: 2)
The candidate domains for encrypting the new-generation video coding standard H.264/AVC are analyzed and summarized. On this basis, a new digital video encryption scheme based on CABAC (context-adaptive binary arithmetic coding) is proposed, with two secure encryption operations, RCME (regular coding mode encryption) and BCME (bypass coding mode encryption), which protect the residual-coefficient codewords, motion-vector-difference codewords, and intra-prediction modes. Experimental results show that the scheme offers good security, coding efficiency, and error robustness.

16.
We investigate the problem of averaging values on lattices and, in particular, on discrete product lattices. This problem arises in image processing when several color values given in RGB, HSL, or another coding scheme need to be combined. We show how the arithmetic mean and the median can be constructed by minimizing appropriate penalties, and we discuss which of them coincide with the Cartesian product of the standard mean and the median. We apply these functions in image processing. We present three algorithms for color image reduction based on minimizing penalty functions on discrete product lattices.
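A small worked illustration of the penalty view: the arithmetic mean minimizes the squared-difference penalty and the median the absolute-difference penalty, applied here componentwise to one channel of an RGB product lattice (the values are made up):

```python
# Brute-force penalty minimization over a discrete value range.
import numpy as np

def penalty_minimizer(values, penalty):
    """Argmin over the lattice of component values between min and max."""
    candidates = np.arange(values.min(), values.max() + 1)
    costs = [sum(penalty(v, c) for v in values) for c in candidates]
    return int(candidates[int(np.argmin(costs))])

reds = np.array([10, 12, 250])  # one RGB channel, with an outlier
mean_like = penalty_minimizer(reds, lambda v, c: (v - c) ** 2)  # 91 (~mean)
median_like = penalty_minimizer(reds, lambda v, c: abs(v - c))  # 12 (median)
```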

17.
Recently, the wavelet-based contourlet transform (WBCT) has been adopted for image coding because it better matches image textures of different orientations. However, its computational complexity is very high. In this paper, we propose three tools to enhance the WBCT coding scheme, in particular by reducing its computational complexity. First, we propose short-length 2-D filters for the directional transform. Second, the directional transform is applied to only a few selected subbands, and the selection is done by a mean-shift-based decision procedure. Third, we fine-tune the context tables used by the arithmetic coder in WBCT coding to improve coding efficiency and to reduce computation. Simulations show that, at comparable coded image quality, the proposed scheme saves over 92% of the computing time of the original WBCT scheme. Compared to conventional 2-D wavelet coding schemes, it produces clearly better subjective image quality.

18.
The encoding schemes utilize first- and second-order Markov models to describe the source structure. Two coding techniques, Huffman encoding and arithmetic encoding, are used to achieve a high coding efficiency. Universal code tables which match the statistics of contour line drawings obtained from 64 contour maps are presented and can be applied to encode all contour line drawings with chain code representations. Experiments have shown about a 50% improvement in code amount over the conventional chain encoding scheme with the arithmetic coding schemes, and a compression rate comparable to that obtained by T. Kaneko and M. Okudaira (1985) with the Huffman coding schemes, while this implementation is substantially simpler.
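A minimal sketch of the first-order Markov statistics underlying such code tables: the empirical conditional code length of an 8-direction chain code, which Huffman or arithmetic coding approximates; the sample chain is invented:

```python
# Ideal code length of a chain code under a first-order Markov model,
# estimated from the chain's own transition counts.
import math
from collections import Counter

def markov1_bits(chain):
    pair = Counter(zip(chain, chain[1:]))  # transition counts
    ctx = Counter(chain[:-1])              # context (previous-symbol) counts
    return sum(n * -math.log2(n / ctx[a]) for (a, b), n in pair.items())

chain = [0, 0, 1, 0, 0, 1, 7, 0, 0, 1]  # directions 0..7 along a contour
print(markov1_bits(chain), "bits for", len(chain) - 1, "transitions")
```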

19.
We propose a distributed binary arithmetic coder for Slepian-Wolf coding with decoder side information, along with a soft joint decoder. The proposed scheme provides several advantages over existing schemes, and its performance is equal to or better than that of an equivalent scheme based on turbo codes at short and medium block lengths.

20.
Yang Shengtian. Journal on Communications, 2004, 25(3): 119-125
By using table lookup, probability quantization, and a modified renormalization operation, a simple multiplication-free binary block arithmetic coding scheme is presented. Experimental results show that its average codeword length is closer to the theoretical lower bound of source coding theory than that of commonly used fast coders.
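A hedged sketch of a multiplication-free interval update in the Q-coder style, which table-lookup and probability-quantization schemes of this kind resemble; the state table values are examples, interval conventions vary between coders, and renormalization and carry handling are omitted:

```python
# Core idea: with the interval width A kept near 1 by renormalization, the
# LPS subinterval A*Qe is replaced by the quantized table value Qe, so no
# multiplication is needed.
QE_TABLE = [0x5A1D, 0x2586, 0x1114, 0x080B, 0x03D8]  # example quantized LPS probs

def encode_bit(bit, mps, state, A, C):
    """One update step; LPS placed at the top of the interval [C, C+A)."""
    qe = QE_TABLE[state]
    if bit == mps:
        A -= qe          # MPS: shrink the width by the table value
    else:
        C += A - qe      # LPS: point into the LPS subinterval
        A = qe
    return A, C          # a full coder renormalizes when A < 0x8000
```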
