Found 18 similar documents. Search time: 15 ms.
1.
2.
An Effective Transform-Domain Rate Estimation Technique under the CABAC Entropy Coding Mechanism    Total citations: 1 (self: 1, others: 0)
CABAC is an entropy coding mechanism adopted in the H.264/AVC video compression standard; combined with RDO mode selection, it can reduce the bit-rate by 20%. During RDO mode selection, however, every candidate mode of a coding block must be entropy-coded to obtain its coded bit count, which greatly increases the computational complexity of video encoding. To avoid the costly Lagrangian cost computation, this paper proposes, for the first time, a transform-domain rate estimation method for CABAC that estimates the number of entropy-coded bits from the quantized transform coefficients and the motion vector differences. Building on this, the rate estimation technique is applied to RDO mode selection, yielding a fast mode selection algorithm that reduces mode selection time. Simulation results show that the proposed technique cuts the rate-distortion cost computation time in mode selection by 51% with little impact on coding performance, and saves 33% of the total encoding time when full-search motion estimation is used.
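A minimal sketch of the idea of estimating coded bits in the transform domain, without running CABAC: the model form and the constants below are illustrative assumptions, not the paper's actual estimator.

```python
import math

def estimate_coded_bits(coeffs, mvd=(0, 0), lambda_nz=3.2, lambda_mvd=1.5):
    """Rough transform-domain bit estimate: each nonzero quantized
    coefficient costs roughly log2(|c|) plus a constant overhead, and the
    motion-vector-difference cost grows with its magnitude.
    lambda_nz and lambda_mvd are hypothetical tuning constants."""
    bits = 0.0
    for c in coeffs:
        if c != 0:
            bits += math.log2(abs(c)) + lambda_nz
    # Motion vector difference contributes bits proportional to |MVD|.
    bits += lambda_mvd * (abs(mvd[0]) + abs(mvd[1]))
    return bits
```

In an RDO loop, this estimate would replace the actual CABAC pass when computing the Lagrangian cost D + λ·R for each candidate mode.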
3.
This paper proposes a JPEG 2000 rate control algorithm based on dual truncation and presents its VLSI architecture. Dual truncation comprises a coarse truncation driven by entropy-estimation-based rate pre-allocation and a hardware-friendly fast Tier-2 truncation. First, the bit budget is allocated in advance according to each code-block's share of the sum of all code-blocks' entropy estimates, controlling the first truncation of the bitstream in the Tier-1 coder in real time; then a fast Tier-2 truncation method finds the optimal truncation points for the second, fine truncation. Experimental results show that images compressed with dual truncation differ from standard PCRD-opt compressed images by no more than 0.5 dB in peak signal-to-noise ratio.
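The entropy-proportional pre-allocation step can be sketched as follows. This is a software simplification under assumptions (function name, integer rounding); the paper's VLSI design performs the allocation on the fly during Tier-1 coding.

```python
def preallocate_rates(entropy_estimates, total_budget_bytes):
    """Coarse truncation: give each code-block a share of the byte budget
    proportional to its entropy estimate's share of the total estimate."""
    total = sum(entropy_estimates)
    if total == 0:
        return [0] * len(entropy_estimates)
    return [int(total_budget_bytes * e / total) for e in entropy_estimates]
```

Each code-block's Tier-1 output is then truncated at its pre-allocated size before the fine Tier-2 truncation refines the cut points.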
4.
5.
CABAC is the entropy coding mechanism adopted in the main profiles of the H.264/AVC video compression standard; combined with RDO mode selection, it can reduce the bit-rate by 20%, but it also greatly increases encoder complexity. Parallelization is an effective way to speed up encoding; however, CABAC's adaptive coding, together with the use of entropy coding inside RDO mode selection, creates strict data dependencies between sequentially coded macroblocks, which limits the development of parallel algorithms. Combining the macroblock-region-partitioning data-level parallel mechanism (MBRP) with rate estimation, this paper presents an efficient parallel encoding scheme for H.264 with CABAC: the encoder is split into mode selection and bitstream generation, which form a typical producer-consumer pair; CABAC inside RDO mode selection is replaced by rate estimation, removing the strict data dependency CABAC causes there; mode selection is parallelized with MBRP; and bitstream generation runs on a separate processor, pipelined with mode selection. Experiments on a 4-processor system simulator show that, with almost no loss in compression performance, the parallel algorithm achieves a speedup of up to 4.7.
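The producer-consumer split between mode selection and bitstream generation can be sketched with a thread and a bounded queue. This is a toy illustration of the pipeline structure only; the placeholder mode decision, queue size, and macroblock count are assumptions.

```python
import queue
import threading

def mode_decision(mb_queue, num_macroblocks):
    # Producer: mode selection uses rate *estimation* instead of CABAC,
    # so macroblocks carry no sequential entropy-coder state here.
    for mb in range(num_macroblocks):
        chosen_mode = mb % 4          # placeholder decision
        mb_queue.put((mb, chosen_mode))
    mb_queue.put(None)                # sentinel: no more macroblocks

def bitstream_generation(mb_queue, out):
    # Consumer: the only stage that would run real CABAC, strictly in order.
    while True:
        item = mb_queue.get()
        if item is None:
            break
        out.append(item)

q = queue.Queue(maxsize=8)            # bounded buffer between the stages
coded = []
t = threading.Thread(target=bitstream_generation, args=(q, coded))
t.start()
mode_decision(q, 16)
t.join()
```

Because the consumer drains the queue in FIFO order, the bitstream stage still sees macroblocks in raster order even when the mode-selection side is further parallelized with MBRP.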
6.
7.
8.
9.
This paper proposes a new region-of-interest (ROI) video coding method based on MPEG-4. By forcing intra coding on macroblocks at the ROI boundary and adaptively selecting reference positions for intra-macroblock predictive coding, the method effectively suppresses the diffusion and propagation of data noise. A motion estimation/compensation algorithm with adaptive macroblock size improves the coding efficiency and quality of the ROI, especially for ROIs with complex motion. For bit allocation, the method computes the image complexity and energy of the different regions and distributes the available bit budget unequally according to user-configurable interest weights. Experiments show that the method considerably improves the compression efficiency of ROI video coding and makes bit allocation more flexible and effective.
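The weighted bit allocation step might look like the sketch below. The product of interest weight and measured complexity is an assumed functional form for illustration; the paper's actual weighting is not specified in the abstract.

```python
def allocate_bits(regions, total_bits):
    """ROI bit allocation sketch: each region's share is proportional to
    (user-set interest weight) x (measured complexity/energy).
    regions is a list of (weight, complexity) pairs; both the pairing and
    the product form are assumptions."""
    scores = [w * c for (w, c) in regions]
    total = sum(scores)
    return [total_bits * s / total for s in scores]
```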
10.
Based on the ρ-domain linear rate-distortion model, this paper proposes a frame-level rate control algorithm for H.264. The applicability of the linear rate-distortion function to the H.264 coding algorithm is first verified; the model's single parameter θ is then accurately estimated with a hybrid adaptive estimation method, enabling the linear rate-distortion function to be applied to H.264 rate control. Simulation results show that, compared with the JVT-H017 algorithm, the new algorithm achieves more accurate rate control as well as a smoother output bit-rate and PSNR, is simple to implement, and is suitable for real-time video communication.
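The ρ-domain linear model itself is simple to state in code: R = θ·(1 − ρ), where ρ is the fraction of quantized coefficients that are zero. Here θ is taken as given; the paper's contribution is estimating it adaptively.

```python
def rho_domain_bits(theta, coeffs):
    """Linear rho-domain rate model R = theta * (1 - rho).
    rho is the fraction of zero quantized coefficients; theta is the
    model's single parameter (estimated adaptively in the paper)."""
    n = len(coeffs)
    rho = sum(1 for c in coeffs if c == 0) / n
    return theta * (1.0 - rho)
```

Rate control then works backwards: given a target R for the frame, the model yields the ρ (and hence the quantization parameter) expected to hit it.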
11.
To address the large share of coding time and resources consumed by the MQ coder in EBCOT, an adaptive MQ rate control algorithm based on rate feedback is proposed. Coding passes are adaptively selected for the MQ arithmetic coder according to wavelet subband characteristics; coding passes that have already entered the bitstream feed back to control those that have not, locating the truncation point and discarding the code segments of coding passes that contribute nothing to the final bitstream. This improves overall EBCOT coding efficiency substantially while having almost no impact on compressed image quality. Experimental results show that the algorithm effectively reduces the computation and memory requirements of the MQ stage in EBCOT and is easy to implement in hardware.
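The feedback truncation idea can be sketched as a slope test on successive coding passes: stop coding once a pass's contribution per bit falls below a threshold. The (bits, distortion-reduction) pair representation and the threshold λ are illustrative assumptions.

```python
def truncate_passes(passes, lam):
    """Keep coding passes, in order, while their R-D slope (distortion
    reduction per coded bit) stays at or above lam; the first pass that
    falls below it, and all later passes, are dropped as non-contributing.
    passes is a list of (bits, distortion_reduction) tuples."""
    kept = []
    for bits, d_red in passes:
        if d_red / bits < lam:
            break
        kept.append((bits, d_red))
    return kept
```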
12.
To address the high complexity of intra coding unit (CU) partitioning for depth maps in 3D High Efficiency Video Coding (3D-HEVC), this paper proposes an adaptive fast CU partitioning algorithm based on corner points and the corresponding color (texture) image. First, a corner operator is applied and a number of corner points is selected according to the quantization parameter, producing a CU pre-partition; the pre-partitioned CU depth levels are then adjusted using the CU partition of the color image; finally, the depth-level range of the current CU is narrowed according to the adjusted levels. Experimental results show that, compared with the original 3D-HEVC algorithm, the proposed algorithm reduces encoding time by about 63% on average; compared with an algorithm based on the color image alone, it reduces encoding time by about 13% and the average bit-rate by about 3%, effectively improving coding efficiency.
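The corner-count pre-partition step might be sketched like this. The thresholds are invented for illustration; in the paper, the number of corner points retained is tied to the quantization parameter.

```python
def predict_cu_depth(corner_points, cu_x, cu_y, cu_size, thresholds=(1, 4, 10)):
    """Pre-partition sketch: count detected corner points falling inside a
    CU and map the count to a depth level (0 = no split ... 3 = deepest).
    corner_points is a list of (x, y) pixel coordinates; the thresholds
    are hypothetical."""
    inside = sum(1 for (x, y) in corner_points
                 if cu_x <= x < cu_x + cu_size and cu_y <= y < cu_y + cu_size)
    depth = 0
    for t in thresholds:
        if inside >= t:
            depth += 1
    return depth
```

The predicted depth then bounds the recursive CU split search, so flat depth-map regions skip the deeper (and most expensive) partition levels entirely.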
13.
M. Dyer, D. Taubman, S. Nooshabadi, A. K. Gupta, IEEE Transactions on Circuits and Systems I: Regular Papers, 2006, 53(6): 1203-1213
JPEG2000 is a recently standardized image compression algorithm. The heart of this algorithm is the coding scheme known as embedded block coding with optimal truncation (EBCOT), which contributes the majority of the compression algorithm's processing time. The EBCOT scheme consists of a bit-plane coder coupled to an MQ arithmetic coder. Recent bit-plane coder architectures are capable of producing symbols at a higher rate than existing MQ arithmetic coders can absorb; there is thus a requirement for a high-throughput MQ arithmetic coder. We examine the existing MQ arithmetic coder architectures and develop novel techniques capable of absorbing the high symbol rate from high-performance bit-plane coders, as well as providing flexible design choices.
14.
In this paper, an accurate rate model is proposed for inter-frame coding in High Efficiency Video Coding, which is useful for rate control. The proposed model considers the effect of entropy coding, where inter-symbol dependency is exploited in context-adaptive binary arithmetic coding (CABAC) to save coded bits. The mutual information is first predicted to measure the reduction of uncertain information in CABAC, and the conditional entropy is then calculated to estimate the output bit-rate of the inter-frame residues. Since the source characteristic also significantly impacts the rate model, a joint Laplacian source distribution at the transform-unit level is employed. Experimental results show that the proposed model achieves better rate-distortion performance in rate control. The approach can also be extended to the design of rate models for other video codecs that use CABAC.
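A plain (unconditional) entropy calculation for a uniformly quantized Laplacian source gives the flavor of such a rate model; the paper additionally conditions on CABAC's inter-symbol dependency via mutual information, which this sketch omits.

```python
import math

def laplacian_rate(b, q, max_level=64):
    """Entropy in bits/sample of a Laplacian source with scale b after
    uniform quantization with step q (dead zone [-q/2, q/2] for level 0,
    symmetric bins for levels +/-k). A stand-in for the paper's
    conditional-entropy model."""
    def plog(p):
        return -p * math.log2(p) if p > 0 else 0.0
    # P(level 0): probability mass of the central bin.
    p0 = 1.0 - math.exp(-q / (2 * b))
    h = plog(p0)
    for k in range(1, max_level + 1):
        # P(level k): mass between (k - 1/2) q and (k + 1/2) q.
        pk = 0.5 * (math.exp(-(k - 0.5) * q / b) - math.exp(-(k + 0.5) * q / b))
        h += 2 * plog(pk)   # symmetric bins +k and -k
    return h
```

As expected for a rate model, a coarser step q (or a more peaked source, smaller b) yields fewer estimated bits per sample.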
15.
16.
An Improved Vector Quantization Coding Algorithm for Correlated Images    Total citations: 2 (self: 0, others: 2)
This paper proposes an improved image vector quantization coding algorithm. By adding prediction blocks and applying conditional entropy coding of the control information according to the states of neighboring blocks, the algorithm further improves coding efficiency. To ease implementation, a uniform treatment of image borders is introduced. Test results show that, relative to memoryless vector quantization, the bit-rate drops by about 40%, and relative to the algorithm of reference [5], by about 15%.
17.
Kishor Sarawadekar, Swapna Banerjee, Integration, the VLSI Journal, 2012, 45(1): 1-8
The embedded block coding with optimized truncation (EBCOT) algorithm is the heart of the JPEG 2000 image compression system. The MQ coder used in this algorithm restricts the throughput of EBCOT because of the very high correlation among the procedures it performs. To overcome this obstacle, a high-throughput MQ coder architecture is presented in this paper. To this end, we studied the number of renormalization shifts performed and the rate of byte emission over an image. The study reveals that, on average, one and two shifts occur 75.03% and 22.72% of the time, respectively, and that about 5.5% of the time two bytes are emitted concurrently. Based on these facts, a new MQ coder architecture is proposed that consumes one symbol per clock cycle. Its throughput is improved by operating the renormalization and byte-out stages concurrently. To reduce hardware cost, synchronous shifters are used instead of hard shifters. The proposed architecture is implemented on a Stratix FPGA and operates at 145.9 MHz. Its memory requirement is at least 66% lower than those of the other existing architectures. A relative figure of merit is computed to compare the overall efficiency of all architectures, showing that the proposed architecture provides a good balance between throughput and hardware cost.
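The renormalization loop whose shift statistics the authors measured is the standard MQ-coder step that doubles the interval register A until it returns to at least 0x8000. The function wrapper below is ours; the 0x8000 bound is from the JPEG 2000 standard.

```python
def renorm_shifts(a):
    """Count the left shifts needed to bring the MQ interval register A
    back to at least 0x8000 (the renormalization loop RENORME)."""
    shifts = 0
    while a < 0x8000:
        a <<= 1
        shifts += 1
    return shifts
```

The paper's observation that one or two shifts cover roughly 98% of renormalizations is what makes a fixed small shifter (rather than a full variable shifter) a reasonable hardware choice.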
18.
Rate distortion optimization for H.264 interframe coding: a general framework and algorithms.    Total citations: 1 (self: 0, others: 1)
Rate distortion (RD) optimization for H.264 interframe coding with complete baseline decoding compatibility is investigated on a frame basis. Using soft decision quantization (SDQ) rather than the standard hard decision quantization, we first establish a general framework in which motion estimation, quantization, and entropy coding (in H.264) for the current frame can be jointly designed to minimize a true RD cost given previously coded reference frames. We then propose three RD optimization algorithms: a graph-based algorithm for near-optimal SDQ in H.264 baseline encoding given motion estimation and quantization step sizes; an algorithm for near-optimal residual coding in H.264 baseline encoding given motion estimation; and an iterative overall algorithm to optimize H.264 baseline encoding for each individual frame given previously coded reference frames, with the three embedded in the indicated order. The graph-based algorithm for near-optimal SDQ is the core: given motion estimation and quantization step sizes, it is guaranteed to perform optimal SDQ if the weak adjacent-block dependency used in H.264's context-adaptive variable length coding is ignored for optimization. The proposed algorithms have been implemented based on the H.264 reference encoder JM82 with complete compatibility with the baseline profile. Experiments show that for a set of typical video test sequences, the graph-based SDQ algorithm, the residual coding algorithm, and the overall algorithm achieve average rate reductions of 6%, 8%, and 12%, respectively, at the same PSNR (ranging from 30 to 38 dB) compared with the RD optimization method implemented in the H.264 reference software.
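A toy, per-coefficient version of soft decision quantization conveys the D + λR trade-off at the heart of the method. The two-candidate search and the linear rate model are drastic simplifications of the paper's graph-based trellis, which also models coding-context dependencies.

```python
def soft_decision_quantize(coeffs, q, lam):
    """Toy SDQ: for each coefficient, compare the hard-decision level with
    its neighbour toward zero and keep whichever minimizes D + lam * R.
    The rate model (bits grow with |level|) is an invented placeholder."""
    def rate(level):
        return 1 + 2 * abs(level)
    out = []
    for c in coeffs:
        hard = round(c / q)
        # Candidate set: hard level, and the next level toward zero.
        step = 1 if hard > 0 else -1 if hard < 0 else 0
        candidates = {hard, hard - step}
        best = min(candidates,
                   key=lambda l: (c - l * q) ** 2 + lam * rate(l))
        out.append(best)
    return out
```

With λ = 0 this degenerates to hard quantization; as λ grows, levels are pushed toward zero whenever the distortion penalty is cheaper than the bits saved, which is exactly the knob the paper's trellis search optimizes jointly across a block.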