Similar Documents
19 similar documents found (search time: 93 ms)
1.
Analysis and Study of Binary Arithmetic Coding in H.264/AVC   (Cited by: 1; self-citations: 0; citations by others: 1)
张杰  童胜 《电子科技》2003,(24):28-31
Arithmetic coding is an efficient entropy coding method that has been widely applied in image and video coding. This paper briefly reviews the basic principles of arithmetic coding, introduces practical arithmetic coding algorithms, analyzes in detail the adaptive binary arithmetic coding algorithm adopted in the CABAC of H.264/AVC, and tests its performance.
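
To make the mechanism concrete, here is a minimal adaptive binary arithmetic encoder in Python. It drives the interval split from simple adaptive symbol counts and uses textbook renormalization; it is a sketch of the principle only, not the CABAC coder of H.264/AVC, which uses finite-state probability tables and a multiplication-free interval subdivision.

```python
# Minimal adaptive binary arithmetic encoder (illustrative; not CABAC).
class BinaryArithmeticEncoder:
    def __init__(self):
        self.low, self.high = 0, 0xFFFFFFFF   # 32-bit coding interval
        self.c0, self.c1 = 1, 1               # adaptive counts for bits 0/1
        self.bits, self.pending = [], 0       # output bits, carry-pending bits

    def _emit(self, bit):
        self.bits.append(bit)
        self.bits.extend([1 - bit] * self.pending)
        self.pending = 0

    def encode(self, bit):
        split = self.low + (self.high - self.low) * self.c0 // (self.c0 + self.c1)
        if bit == 0:
            self.high, self.c0 = split, self.c0 + 1
        else:
            self.low, self.c1 = split + 1, self.c1 + 1
        while True:                            # renormalize the interval
            if self.high < 0x80000000:
                self._emit(0)
            elif self.low >= 0x80000000:
                self._emit(1)
                self.low -= 0x80000000
                self.high -= 0x80000000
            elif self.low >= 0x40000000 and self.high < 0xC0000000:
                self.pending += 1              # defer until the carry resolves
                self.low -= 0x40000000
                self.high -= 0x40000000
            else:
                break
            self.low <<= 1
            self.high = (self.high << 1) | 1

enc = BinaryArithmeticEncoder()
for b in [0, 0, 1, 0, 0, 0, 1, 0]:
    enc.encode(b)
print(enc.bits)   # final interval flush omitted for brevity
```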

2.
An Improved Data Conversion Method for Binary Adaptive Arithmetic Coding   (Cited by: 2; self-citations: 1; citations by others: 1)
刘颖  刘勃 《通信学报》1997,18(6):93-96
This paper proposes a data conversion method for binary adaptive arithmetic coding. Experiments and theoretical analysis comparing it with the direct data conversion method and a conversion method suited to grayscale images show that the improved method is more efficient.

3.
A fast adaptive wavelet packet algorithm based on a generalized Shannon entropy cost function is proposed. During wavelet packet decomposition, the algorithm first applies symmetric extension to the image data boundaries; a wavelet packet image compression algorithm combining zerotree coding and adaptive arithmetic coding is then presented.
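
The split-or-not decision at the heart of such an algorithm can be sketched as follows: each band is decomposed further only if the children's total entropy cost is lower than the parent's. The additive Shannon-style cost and the Haar analysis step below are simplifying assumptions (the paper's generalized cost function and filter bank are not specified in the abstract); the symmetric boundary extension matches the step the abstract describes.

```python
import numpy as np

def shannon_cost(c):
    """Additive Shannon-entropy cost over normalized coefficient energies."""
    e = c.astype(float) ** 2
    p = e / (e.sum() + 1e-12)
    p = p[p > 0]
    return float(-(p * np.log(p)).sum())

def haar_step(x):
    """One Haar analysis step with symmetric boundary extension."""
    if len(x) % 2:
        x = np.pad(x, (0, 1), mode='symmetric')   # extend to even length
    a = (x[0::2] + x[1::2]) / np.sqrt(2)          # approximation band
    d = (x[0::2] - x[1::2]) / np.sqrt(2)          # detail band
    return a, d

def best_basis(x, depth):
    """Split a band further only if that lowers the total entropy cost."""
    if depth == 0 or len(x) < 2:
        return [x]
    a, d = haar_step(x)
    if shannon_cost(a) + shannon_cost(d) < shannon_cost(x):
        return best_basis(a, depth - 1) + best_basis(d, depth - 1)
    return [x]

sig = np.sin(np.linspace(0, 8 * np.pi, 257))      # odd length exercises extension
print([len(b) for b in best_basis(sig, depth=3)])
```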

4.
A New Shape Coding Algorithm Based on Moving Objects   (Cited by: 3; self-citations: 0; citations by others: 3)
This paper proposes a new method of shape coding for moving objects, termed adaptive arithmetic shape coding based on curvature scale space (CSS), or CSSAS. The algorithm has two coding modes: intra mode and inter mode. In intra mode, building on an improved CSS algorithm, feature points of the shape information of arbitrarily shaped objects are extracted hierarchically and then coded with an adaptive arithmetic coding algorithm. In inter mode, a motion estimation algorithm for arbitrarily shaped objects based on the curvature scale space image (CSSI) is proposed. The matched portions of the shape curve obtained after motion estimation/compensation are compressed with an arc-length-indexed coding algorithm, while the unmatched portions are compressed in the same way as in the intra-mode CSSAS. Experimental results show that, compared with the context-based arithmetic encoding (CAE) shape coder in the MPEG-4 verification model, CSSAS improves the compression ratio by about 25% in intra mode and markedly in inter mode when Dn is large, and that the subjective quality of the reconstructed shapes is better than CAE in both modes.
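
The curvature-scale-space idea underlying CSSAS can be sketched briefly: smooth the contour with Gaussians of increasing scale and track the zero crossings of curvature, which thin out as the scale grows. The contour and scales below are hypothetical; the paper's hierarchical feature-point extraction and the arc-length-indexed coder are not reproduced.

```python
import numpy as np
from scipy.ndimage import gaussian_filter1d

def curvature_zero_crossings(x, y, sigma):
    """Curvature zero crossings of a closed contour smoothed at scale sigma."""
    xs = gaussian_filter1d(x, sigma, mode='wrap')   # 'wrap' closes the contour
    ys = gaussian_filter1d(y, sigma, mode='wrap')
    dx, dy = np.gradient(xs), np.gradient(ys)
    ddx, ddy = np.gradient(dx), np.gradient(dy)
    kappa = (dx * ddy - dy * ddx) / (dx**2 + dy**2)**1.5
    return np.where(np.diff(np.sign(kappa)) != 0)[0]

# A wavy closed contour; inflection points vanish as the scale grows.
t = np.linspace(0, 2 * np.pi, 400, endpoint=False)
x = (1 + 0.3 * np.cos(5 * t)) * np.cos(t)
y = (1 + 0.3 * np.cos(5 * t)) * np.sin(t)
for sigma in (1, 4, 16):
    print(sigma, len(curvature_zero_crossings(x, y, sigma)))
```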

5.
Model Set Design for the Adaptive Interacting Multiple Model Tracking Algorithm   (Cited by: 5; self-citations: 0; citations by others: 5)
The adaptive interacting multiple model (AIMM) algorithm is an improvement on the standard interacting multiple model (IMM) algorithm, but it raises new problems, including how to choose the structure of the adaptive model set and how to inherit data from the filters based on the old model set. This paper analyzes these problems and gives design methods for the model set and the model transition probabilities in AIMM. Simulation results show that the improved AIMM algorithm tracks noticeably better than the ordinary AIMM algorithm.
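
For orientation, the standard IMM mixing step that any model-set design (including AIMM's) feeds into looks like the sketch below: each model's filter is re-initialized from a probability-weighted blend of all models' estimates, with the blend driven by the model transition probabilities. The numbers are illustrative; this is the generic IMM step, not the paper's AIMM design.

```python
import numpy as np

def imm_mix(mu, PI, states, covs):
    """Standard IMM mixing: blend each model's estimate from all models.

    mu:     (r,) prior model probabilities
    PI:     (r, r) model transition probabilities, PI[i, j] = P(j | i)
    states: (r, n) per-model state estimates
    covs:   (r, n, n) per-model covariances
    """
    r, n = states.shape
    c = mu @ PI                                   # predicted model probabilities
    w = (PI * mu[:, None]) / c[None, :]           # mixing weights w[i, j]
    mixed_x = np.einsum('ij,in->jn', w, states)   # blended state per model
    mixed_P = np.zeros((r, n, n))
    for j in range(r):
        for i in range(r):
            d = (states[i] - mixed_x[j])[:, None]
            mixed_P[j] += w[i, j] * (covs[i] + d @ d.T)   # spread-of-means term
    return c, mixed_x, mixed_P

mu = np.array([0.7, 0.3])
PI = np.array([[0.95, 0.05], [0.10, 0.90]])
states = np.array([[0.0, 1.0], [0.2, 2.0]])
covs = np.stack([np.eye(2), 2 * np.eye(2)])
c, x0, P0 = imm_mix(mu, PI, states, covs)
print(c, x0, sep='\n')
```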

6.
This paper proposes a parallel frequent itemset mining algorithm based on binary encoding for cloud environments. A special binary-encoded dependency-degree measure is used to transcode the original data set and cluster it by dependency degree; the data set is then distributed across the cloud environment, and an improved parallel FP-Growth algorithm with a shared multi-header table mines the frequent itemsets. Experiments show that the algorithm performs well on large-scale data sets.
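
The benefit of a binary (bitmap) encoding of transactions is that support counting reduces to bitwise ANDs plus a popcount. The sketch below shows that encoding idea only; the paper's dependency-degree measure, the clustering step, and the shared-header-table parallel FP-Growth are not reproduced.

```python
# Binary-encode transactions as bitmaps so that the support of an itemset
# is a bitwise AND plus a popcount (illustrates the encoding idea only).
from itertools import combinations

transactions = [{'a', 'b', 'c'}, {'a', 'c'}, {'a', 'd'}, {'b', 'c'}, {'a', 'b', 'c'}]
items = sorted({i for t in transactions for i in t})

# One bit per transaction for each item.
bitmap = {it: sum(1 << k for k, t in enumerate(transactions) if it in t)
          for it in items}

def support(itemset):
    acc = (1 << len(transactions)) - 1
    for it in itemset:
        acc &= bitmap[it]
    return bin(acc).count('1')

min_sup = 2
for size in (1, 2, 3):
    for cand in combinations(items, size):
        s = support(cand)
        if s >= min_sup:
            print(cand, s)
```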

7.
This paper presents the wavelet-based static texture coding in MPEG-4, including the adaptive arithmetic coding algorithms for the lowest-frequency band and the high-frequency subbands, together with simulation results.

8.
王珏  王嘉 《信息技术》2010,(4):26-29
This paper proposes an improvement on trellis-coded quantization (TCQ), the adaptive trellis-coded quantization (ADTCQ) algorithm, and applies it to JPEG2000 image compression. ADTCQ adopts a multi-stage TCQ structure and adaptively adjusts the trellis structure using already-processed data. Experimental results show that ADTCQ clearly outperforms standard TCQ when applied to image compression.
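
A minimal fixed (non-adaptive) TCQ encoder helps show what ADTCQ adapts. The sketch below runs a Viterbi search over a 4-state trellis whose branches are labeled with subsets of a uniform scalar codebook; the trellis and subset assignment are one conventional choice, stated as an assumption rather than the paper's configuration.

```python
import numpy as np

# Uniform scalar codebook split into four subsets D0..D3 by index mod 4.
DELTA = 0.5
LEVELS = DELTA * (np.arange(16) - 7.5)
SUBSETS = [LEVELS[i::4] for i in range(4)]
# TRANS[state] = [(next_state, subset_index), ...]  (assumed assignment)
TRANS = [[(0, 0), (2, 2)], [(0, 1), (2, 3)],
         [(1, 2), (3, 0)], [(1, 3), (3, 1)]]

def tcq_encode(x):
    """Viterbi search for the subset sequence minimizing squared error."""
    cost = np.full(4, np.inf)
    cost[0] = 0.0
    paths = [[] for _ in range(4)]
    for sample in x:
        new_cost = np.full(4, np.inf)
        new_paths = [None] * 4
        for s in range(4):
            if not np.isfinite(cost[s]):
                continue
            for nxt, sub in TRANS[s]:
                # Quantize to the nearest level allowed by this branch's subset.
                q = SUBSETS[sub][np.argmin(np.abs(SUBSETS[sub] - sample))]
                c = cost[s] + (sample - q) ** 2
                if c < new_cost[nxt]:
                    new_cost[nxt] = c
                    new_paths[nxt] = paths[s] + [q]
        cost, paths = new_cost, new_paths
    best = int(np.argmin(cost))
    return np.array(paths[best]), cost[best]

x = np.random.default_rng(0).normal(size=32)
xq, sse = tcq_encode(x)
print('mean squared error:', sse / len(x))
```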

9.
After analyzing the characteristics of Doppler signals and how they differ from speech signals, an adaptive discrete cosine transform algorithm is applied to compress Doppler signals, achieving good compression at medium coding rates. The paper proposes a correlation-adaptive discrete cosine transform (C-ADCT) compression algorithm, an improvement that strengthens the noise robustness of the adaptive DCT algorithm.
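
The adaptive-DCT part can be sketched as per-block transform coding in which each block keeps just enough of its largest coefficients to reach an energy target, so the bit allocation follows the local signal. The correlation stage that gives C-ADCT its noise robustness is not shown; the block size and energy threshold are illustrative.

```python
import numpy as np
from scipy.fft import dct, idct

def adct_block_code(x, block=32, keep_energy=0.99):
    """Per-block DCT; keep just enough leading-energy coefficients per block."""
    out = np.zeros_like(x, dtype=float)
    kept = 0
    for i in range(0, len(x) - block + 1, block):
        c = dct(x[i:i + block], norm='ortho')
        order = np.argsort(np.abs(c))[::-1]               # largest coeffs first
        energy = np.cumsum(c[order] ** 2) / (c ** 2).sum()
        n = int(np.searchsorted(energy, keep_energy)) + 1  # adaptive count
        mask = np.zeros(block, dtype=bool)
        mask[order[:n]] = True
        kept += n
        out[i:i + block] = idct(np.where(mask, c, 0.0), norm='ortho')
    return out, kept

t = np.linspace(0, 1, 1024)
sig = np.sin(2 * np.pi * 60 * t) + 0.1 * np.random.default_rng(1).normal(size=t.size)
rec, kept = adct_block_code(sig)
print(kept, '/', sig.size, 'coefficients kept')
```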

10.
Progressive UTCQ Coding Based on Rate-Distortion Optimization   (Cited by: 1; self-citations: 0; citations by others: 1)
This paper proposes a progressive wavelet coding algorithm for still images based on the UTCQ quantizer. Universal trellis-coded quantization (UTCQ) quantizes the wavelet coefficients with very good results. The UTCQ superset indices form coefficient bit planes, and rate-distortion optimization selects coefficient bits from the bit planes in order of decreasing rate-distortion slope: the first bit coded has the largest rate-distortion slope, and every coded bit yields the largest possible drop in distortion. Computing the rate-distortion slope is merely a table lookup into the probability state estimation table of the MQ adaptive arithmetic coder, which then further compresses the coefficient bits selected by the optimization. A rate-distortion threshold method codes faster than searching for the maximum slope. The algorithm achieves fast coding speed and good compression.
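
The ordering principle is easy to isolate: spend each bit where it buys the largest distortion drop per bit. The toy candidates below are made-up numbers; the UTCQ bit planes and the MQ-coder probability-state table lookup of the paper are not reproduced.

```python
# Rate-distortion ordering sketch: code candidates in decreasing order of
# distortion reduction per bit until the bit budget runs out.
candidates = [            # (name, bits_needed, distortion_reduction)
    ('c0', 4, 100.0), ('c1', 2, 30.0), ('c2', 8, 90.0), ('c3', 1, 20.0),
]
budget = 8

by_slope = sorted(candidates, key=lambda c: c[2] / c[1], reverse=True)
spent, gained = 0, 0.0
for name, bits, dd in by_slope:
    if spent + bits > budget:
        continue
    spent += bits
    gained += dd
    print(f'coded {name}: slope {dd / bits:.1f}, total bits {spent}')
print('distortion removed:', gained)
```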

11.
We propose a novel adaptive arithmetic coding method that uses dual symbol sets: a primary symbol set that contains all the symbols that are likely to occur in the near future, and a secondary symbol set that contains all other symbols. The simplest implementation of our method assumes that symbols that have appeared in the recent past are highly likely to appear in the near future; it therefore fills the primary set with symbols that have occurred recently. Symbols move dynamically between the two sets to adapt to the local statistics of the symbol source. The proposed method works well for sources, such as images, that are characterized by large alphabets and alphabet distributions that are skewed and highly nonstationary. We analyze the performance of the proposed method and compare it to other arithmetic coding methods, both theoretically and experimentally. We show experimentally that in certain contexts, e.g., with a wavelet-based image coding scheme that has appeared in the literature, the compression performance of the proposed method is better than that of the conventional arithmetic coding method and the zero-frequency escape arithmetic coding method.
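
A sketch of the dual-set model follows, with code lengths accounted as -log2(p) instead of driving a real coder. The primary-set capacity and the demote-the-rarest policy are illustrative assumptions, not the paper's exact rules.

```python
import math

class DualSetModel:
    """Dual symbol set model sketch: a small primary set of recently seen
    symbols, and a secondary set for everything else reached via an escape."""

    def __init__(self, alphabet, cap=8):
        self.cap = cap
        self.primary = {}                      # symbol -> count
        self.secondary = set(alphabet)
        self.esc = 1                           # escape count

    def cost_and_update(self, sym):
        total = sum(self.primary.values()) + self.esc
        if sym in self.primary:
            bits = -math.log2(self.primary[sym] / total)
            self.primary[sym] += 1
        else:
            # Escape, then a uniform choice within the secondary set.
            bits = -math.log2(self.esc / total) + math.log2(len(self.secondary))
            self.secondary.discard(sym)
            self.primary[sym] = 1
            if len(self.primary) > self.cap:   # demote the rarest old symbol
                rare = min((k for k in self.primary if k != sym),
                           key=lambda k: self.primary[k])
                del self.primary[rare]
                self.secondary.add(rare)
        return bits

model = DualSetModel(alphabet=range(256))
data = [10, 10, 10, 7, 10, 7, 7, 10, 200, 10]
print(sum(model.cost_and_update(s) for s in data), 'bits')
```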

12.
Linear time adaptive arithmetic coding   (Cited by: 1; self-citations: 0; citations by others: 1)
The issue of how arithmetic coding should be implemented is addressed. A data structure is described and shown to support adaptive arithmetic coding on an arbitrary-sized alphabet in time linear in the size of the inputs and outputs. Experimental results are given that show the method to be useful even on relatively small alphabets.
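
One standard data structure with exactly the two operations an adaptive arithmetic coder needs per symbol, frequency update and cumulative-frequency query, is a Fenwick (binary indexed) tree, sketched below. It is offered as an illustration of the kind of structure the abstract describes, not necessarily the paper's exact construction.

```python
class Fenwick:
    """Binary indexed tree over symbol frequencies: O(log n) update and
    cumulative-frequency query."""

    def __init__(self, n):
        self.n = n
        self.tree = [0] * (n + 1)

    def add(self, i, delta=1):
        i += 1
        while i <= self.n:
            self.tree[i] += delta
            i += i & -i

    def cumfreq(self, i):
        """Sum of frequencies of symbols 0..i-1."""
        s = 0
        while i > 0:
            s += self.tree[i]
            i -= i & -i
        return s

freq = Fenwick(256)
for s in range(256):
    freq.add(s)          # start every symbol at count 1 (nonzero probabilities)
for sym in b'abracadabra':
    lo, hi, total = freq.cumfreq(sym), freq.cumfreq(sym + 1), freq.cumfreq(256)
    # (lo, hi, total) is exactly the triple an arithmetic coder consumes.
    freq.add(sym)        # adapt after coding the symbol
print(freq.cumfreq(256))
```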

13.
We propose an optimal buffered compression algorithm for shape coding as defined in the forthcoming MPEG-4 international standard. The MPEG-4 shape coding scheme consists of two steps: first, distortion is introduced by down- and up-scaling; then context-based arithmetic encoding is applied. Since arithmetic coding is lossless, the down/up-scaling step is treated as a virtual quantizer. We first formulate the buffer-constrained adaptive quantization problem for shape coding and then propose an algorithm for the optimal solution under buffer constraints. It has previously been reported for MPEG-4 shape coding that a conversion ratio (CR) of 1/4 makes coded QCIF-size images irritating to human observers, so small images such as QCIF require careful handling to keep the coded images acceptable. To this end, a low-bit-rate-tuned algorithm is also proposed in this paper. Experimental results are given using an MPEG-4 shape codec.
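
The buffer-constrained selection can be sketched as dynamic programming over frames and reachable buffer levels: each frame picks a conversion ratio, the buffer gains that frame's bits and drains at the channel rate, and overflowing choices are pruned. All bit costs, distortions, and buffer parameters below are made-up numbers for illustration; this is not the paper's optimal algorithm.

```python
# Buffer-constrained CR selection sketch via dynamic programming.
CHOICES = {1.0: (900, 0.0), 0.5: (400, 2.0), 0.25: (180, 6.0)}  # CR: (bits, dist)
DRAIN, BUF_MAX, N_FRAMES = 500, 1500, 6

states = {0: (0.0, [])}                      # buffer level -> (distortion, CRs)
for _ in range(N_FRAMES):
    nxt = {}
    for buf, (dist, path) in states.items():
        for cr, (bits, d) in CHOICES.items():
            if buf + bits > BUF_MAX:         # overflow: choice not allowed
                continue
            b = max(buf + bits - DRAIN, 0)   # transmit DRAIN bits per frame
            cand = (dist + d, path + [cr])
            if b not in nxt or cand[0] < nxt[b][0]:
                nxt[b] = cand
    states = nxt

best = min(states.values())
print('CR per frame:', best[1], ' total distortion:', best[0])
```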

14.
This paper introduces a context-based tree weighting probability estimation algorithm that essentially solves the probability estimation problem for high-order Markov chains, proposes an adaptive arithmetic coding scheme that gathers statistics while coding, and applies it to a high-quality secondary speech compression algorithm.
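
The per-context probability estimation can be sketched with the Krichevsky-Trofimov estimator, the usual building block of context-tree weighting, updated as coding proceeds, which is the code-while-gathering-statistics behavior the abstract describes. Full CTW mixes estimates across all tree depths; this sketch uses a single fixed context order as a simplification.

```python
import math
from collections import defaultdict

counts = defaultdict(lambda: [0, 0])   # context (tuple of bits) -> [n0, n1]

def kt_prob(ctx, bit):
    """Krichevsky-Trofimov estimate: (n_bit + 1/2) / (n0 + n1 + 1)."""
    n0, n1 = counts[ctx]
    return (counts[ctx][bit] + 0.5) / (n0 + n1 + 1)

def code_length(bits, order=3):
    total = 0.0
    for i, b in enumerate(bits):
        ctx = tuple(bits[max(0, i - order):i])   # most recent `order` bits
        total += -math.log2(kt_prob(ctx, b))     # ideal arithmetic-code cost
        counts[ctx][b] += 1                      # adapt after coding
    return total

data = [0, 1, 0, 1, 0, 1, 0, 1, 0, 1, 0, 1, 0, 1, 0, 1]
print(f'{code_length(data):.2f} bits for {len(data)} input bits')
```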

15.
This paper describes an online lossless data-compression method using adaptive arithmetic coding. To achieve good compression efficiency, we employ an adaptive fuzzy-tuning modeler that applies fuzzy inference to deal efficiently with the problem of conditional probability estimation. In comparison with other lossless coding schemes, the compression results of the proposed method are good and satisfactory for various types of source data. Since we adopt a table-lookup approach for the fuzzy-tuning modeler, the design is simple, fast, and suitable for VLSI implementation.
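
A toy version of a table-lookup fuzzy probability estimator: triangular memberships over the observed ratio of ones, a three-rule base, and defuzzification by weighted average, all precomputed into a table. The memberships and rules are illustrative assumptions, not the paper's tuned design.

```python
def tri(x, a, b, c):
    """Triangular membership function peaking at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

RULES = [  # (membership params over ratio-of-ones, output probability of 1)
    ((-0.5, 0.0, 0.5), 0.15),   # LOW   -> strongly favor 0
    (( 0.0, 0.5, 1.0), 0.50),   # MED   -> near uniform
    (( 0.5, 1.0, 1.5), 0.85),   # HIGH  -> strongly favor 1
]

# Precompute a lookup table, in the VLSI-friendly spirit of the abstract.
TABLE = []
for k in range(65):
    ratio = k / 64
    num = sum(tri(ratio, *m) * out for m, out in RULES)
    den = sum(tri(ratio, *m) for m, _ in RULES)
    TABLE.append(num / den if den else 0.5)

def p_one(n0, n1):
    """Estimated probability of the next bit being 1, by table lookup."""
    ratio = n1 / (n0 + n1) if n0 + n1 else 0.5
    return TABLE[round(ratio * 64)]

print(p_one(2, 8), p_one(5, 5), p_one(9, 1))
```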

16.
Past research in the field of cryptography has not given much consideration to arithmetic coding as a feasible encryption technique, with studies proving compression-specific arithmetic coding to be largely unsuitable for encryption. Nevertheless, adaptive modeling, which offers a huge model that is variable in structure and depends as completely as possible on the entire text transmitted since the model was initialized, is a suitable candidate for a combined encryption-compression scheme. The focus of the work presented in this paper has been to incorporate recent results of chaos theory, proven to be cryptographically secure, into arithmetic coding: to devise a convenient method of making the structure of the model unpredictable and variable in nature while retaining, as far as possible, statistical harmony so that compression remains possible. A chaos-based adaptive arithmetic coding-encryption technique has been designed, developed, and tested, and its implementation is discussed. For typical text files, the proposed encoder gives compression between 67.5% and 70.5%, the zeroth-order compression suffering by about 6% due to encryption, and it is not susceptible to previously published attacks on arithmetic coding algorithms.
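
One way to couple a chaotic keystream to an adaptive model, sketched below, is to let a logistic map seeded by the key permute the symbol-to-interval mapping after every coded symbol: the model structure becomes key-dependent while the counts, and hence the compression, are untouched. This illustrates the general idea only; it is not the paper's construction and makes no security claim.

```python
def logistic(x, r=3.99):
    """Logistic map in its chaotic regime."""
    return r * x * (1 - x)

def keyed_permutation_stream(key, n_symbols):
    """Yield a key-dependent pair of indices to swap after each symbol."""
    x = key
    while True:
        x = logistic(x)
        i = min(int(x * n_symbols), n_symbols - 1)
        x = logistic(x)
        j = min(int(x * n_symbols), n_symbols - 1)
        yield i, j

n = 8
order = list(range(n))                 # current symbol -> interval-slot map
counts = [1] * n                       # adaptive counts, unaffected by the key
stream = keyed_permutation_stream(key=0.3141592653, n_symbols=n)

for sym in [3, 3, 5, 3, 0, 5]:
    slot = order[sym]                  # the coder would code `slot`, not `sym`
    counts[sym] += 1
    i, j = next(stream)                # chaotic, key-dependent reshuffle
    order[i], order[j] = order[j], order[i]
    print(sym, '->', slot)
```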

17.
In this paper, we propose a new approach to block-based lossless image compression by defining a new semiparametric finite-mixture-model-based adaptive arithmetic coding. Conventional adaptive arithmetic encoders start encoding a sequence of symbols with a uniform distribution and update the frequency of each symbol by incrementing its count after it has been encoded; whether an image is encoded row by row or block by block, they therefore produce the same compression results. In addition, images are normally non-stationary signals, meaning different areas of an image have different probability distributions, so conventional adaptive arithmetic encoders, which estimate probabilities over the whole image, are not very efficient. In the proposed compression scheme, an image is divided into non-overlapping blocks of pixels, which are encoded separately with an appropriate statistical model. Instead of starting to encode each block with a uniform distribution, we start with a probability distribution modeled by a semiparametric mixture obtained from the distributions of its neighboring blocks. The semiparametric model parameters are estimated by maximum likelihood using the expectation-maximization algorithm so as to maximize the arithmetic coding efficiency. Comparative experiments show significant improvements over conventional adaptive arithmetic encoders and the state-of-the-art lossless image compression standards.
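
The weight-fitting step can be sketched cleanly because the mixture components are fixed (the neighboring blocks' empirical distributions) and only the weights are estimated, which is what makes the model semiparametric. Below, EM fits the weights by maximum likelihood for a hypothetical block with two neighbors.

```python
import numpy as np

def em_mixture_weights(block, neighbor_dists, iters=50):
    """Fit mixture weights over fixed component distributions via EM.

    block:          1-D array of symbols in the current block
    neighbor_dists: list of (alphabet,) empirical distributions, rows sum to 1
    """
    comps = np.asarray(neighbor_dists)          # (k, alphabet)
    k = comps.shape[0]
    w = np.full(k, 1.0 / k)
    for _ in range(iters):
        # E-step: responsibility of each component for each symbol occurrence.
        p = w[:, None] * comps[:, block]        # (k, len(block))
        p /= p.sum(axis=0, keepdims=True)
        # M-step: weights proportional to total responsibility.
        w = p.sum(axis=1) / len(block)
    return w

left = np.array([0.70, 0.10, 0.10, 0.10])     # neighbor block histograms
above = np.array([0.10, 0.10, 0.40, 0.40])
block = np.array([0, 0, 2, 3, 0, 2, 0, 3, 0, 2])   # symbols in current block
w = em_mixture_weights(block, [left, above])
print('mixture weights:', w.round(3))
print('starting distribution:', (w @ np.stack([left, above])).round(3))
```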

18.
Context-based adaptive variable length coding (CAVLC) and context-based adaptive binary arithmetic coding (CABAC) are the entropy coding methods employed in the H.264/AVC standard. Since these entropy coders were originally designed for encoding residual data in the form of zigzag-scanned, quantized transform coefficients, they cannot provide adequate coding performance for lossless video coding, where the residual data are not quantized transform coefficients but the differences between original and predicted pixel values. Therefore, considering the statistical characteristics of residual data in lossless video coding, we redesign each entropy coding method based on the conventional entropy coders of H.264/AVC. Experimental results verify that the proposed method provides not only a bit saving of about 8% but also reduced computational complexity compared to the current H.264/AVC lossless coding mode.

19.
A new approach to black-and-white image compression is described with which the eight CCITT test documents can be compressed losslessly 20-30 percent better than with the best existing compression algorithms. The coding and modeling aspects are treated separately. The key to these improvements is an efficient binary arithmetic code. The code is relatively simple to implement because it avoids the multiplication operation inherent in some earlier arithmetic codes. Arithmetic coding permits the compression of binary sequences whose statistics change on a bit-to-bit basis. Model statistics are studied under stationary, stationary adaptive, and nonstationary adaptive assumptions.
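
The multiplication-free trick can be sketched in a few lines: keep the interval width A renormalized into [0.5, 1.0) in fixed point, so the less-probable-symbol subinterval A*Qe can be approximated by Qe itself. The accounting below (one renormalization shift roughly equals one output bit) is a simplification that conveys the spirit of the technique, not the paper's exact coder.

```python
import random

def code_bits(bits, qe=0x1000):
    """Count renormalization shifts for a multiplication-free binary coder.

    A is the interval width in fixed point (1.0 == 0x10000), kept in
    [0x8000, 0x10000) so that A*qe can be approximated by qe. qe is the
    probability of the less probable symbol (LPS), assumed here to be 1.
    """
    A = 0x10000
    shifts = 0
    for b in bits:
        if b == 0:
            A -= qe          # MPS: exact split would be A - A*qe/0x10000
        else:
            A = qe           # LPS: exact split would be A*qe/0x10000
        while A < 0x8000:    # renormalize; each shift emits ~1 output bit
            A <<= 1
            shifts += 1
    return shifts

data = [0] * 60 + [1] * 4
random.seed(0)
random.shuffle(data)
print(code_bits(data), 'shift-bits for', len(data), 'input symbols')
```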
