20 similar documents found (search took 62 ms)
1.
2.
In H.264 video coding, encoding time is affected by many factors, such as inter/intra mode selection, motion estimation (ME), and rate-distortion optimization (RDO). To encode quickly while preserving quality, a fast mode selection algorithm for H.264 intra 4×4 block prediction is proposed. The algorithm exploits the strong correlation between the rate-distortion cost (RD cost) of an intra 4×4 block's best prediction mode and that of its neighboring prediction modes, together with the strong correlation between the sum of absolute transformed differences (SATD) and rate-distortion (RD) performance, to skip unlikely prediction modes, so that intra 4×4 mode selection requires only four RD cost computations. Experimental results show that the algorithm strikes a good trade-off between coding performance and coding speed.
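The SATD cost mentioned in the abstract above is the sum of absolute values of the Hadamard-transformed prediction residual. A minimal sketch for a 4×4 block, assuming the unnormalized 4×4 Hadamard transform (reference encoders such as JM additionally scale this value):

```python
import numpy as np

# unnormalized 4x4 Hadamard matrix (any row ordering with orthogonal +/-1 rows works)
H = np.array([[1,  1,  1,  1],
              [1,  1, -1, -1],
              [1, -1, -1,  1],
              [1, -1,  1, -1]])

def satd_4x4(orig, pred):
    """SATD of a 4x4 block: Hadamard-transform the residual, sum the magnitudes."""
    diff = orig.astype(int) - pred.astype(int)
    return int(np.abs(H @ diff @ H.T).sum())
```

For an identical block pair the cost is 0; a uniform residual concentrates all energy in the DC coefficient.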
3.
4.
5.
H.264/AVC adopts advanced intra prediction, which further improves coding efficiency but also raises computational complexity. A fast intra prediction algorithm is proposed. The algorithm uses a macroblock's MAD information together with temporal/spatial correlation to pre-classify each macroblock, choosing between intra 4×4 (I4) and intra 16×16 (I16) prediction; for I4 prediction, it then computes the similarity D(bx, br) between the current block and a reference block to decide whether to skip the current block's mode selection, thereby reducing encoding time. Experiments show that, compared with JM10.2, encoding time drops by about 59.03% on average for all-I-frame tests and by 22.76% on average for IPPP tests, while SNR-Y and bit rate remain essentially unchanged. The algorithm is also amenable to hardware (FPGA) implementation and can be used in practical video communication products.
6.
To reduce the computational load of AVS video coding, a fast AVS intra prediction mode selection algorithm is proposed that combines the best-mode correlation of neighboring blocks with all-zero block detection. The algorithm predicts the mode of the macroblock to be coded from the prediction modes of its neighbors; if the predicted mode meets an early-termination threshold, the search over the remaining modes stops. Extensive experiments yield an empirical threshold formula that generalizes across image sequences. An optimal 8×8 all-zero block decision threshold depending only on the quantization parameter QP is also derived; during the mode search, if a prediction mode satisfies the all-zero block threshold, the search over the remaining modes likewise stops. Experimental results show that, with comparable coding efficiency, intra prediction search time is reduced by 15.81% to 40.76%.
7.
A fast prediction mode selection algorithm based on an edge-direction metric is proposed: the intra prediction direction of the current 4×4 block is measured, and candidate prediction modes are chosen according to the estimated block edge direction. Experimental results show that, compared with full search, the algorithm reduces encoding time by 40% on average while keeping image quality and bit rate essentially unchanged, improving coding efficiency.
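The idea in item 7, estimating a block's edge direction to narrow the candidate modes, can be illustrated with a toy gradient test. The two-mode reduction below is illustrative only and is not the paper's metric; in H.264, mode 0 is vertical and mode 1 is horizontal prediction:

```python
import numpy as np

def candidate_intra_mode(block):
    """Pick a coarse candidate intra mode for a 4x4 block from its gradients."""
    g = block.astype(float)
    gx = np.abs(np.diff(g, axis=1)).sum()  # total horizontal intensity change
    gy = np.abs(np.diff(g, axis=0)).sum()  # total vertical intensity change
    # strong horizontal change implies a vertical edge, favouring vertical
    # prediction (mode 0); otherwise favour horizontal prediction (mode 1)
    return 0 if gx > gy else 1
```

A real selector would bin the gradient angle over all nine Intra_4×4 modes rather than just two.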
8.
To address the small embedding capacity and noticeable bit-rate increase of information hiding algorithms based on modulating intra video prediction modes, an intra video information hiding algorithm based on diamond encoding is proposed. Built on High Efficiency Video Coding (HEVC), the algorithm pairs the prediction modes of two adjacent 4×4 blocks into a mode pair and uses an improved diamond encoding algorithm to guide mode modulation and information embedding; a second encoding pass then embeds further secret information while preserving the original encoder's optimal coding partition, ensuring embedding capacity while suppressing intra distortion drift. Experimental results show that the peak signal-to-noise ratio (PSNR) drops by less than 0.03 dB, bit-rate growth stays below 0.53%, embedding capacity increases substantially, and subjective and objective video quality are well preserved.
9.
Research on Intra Prediction Mode Selection Algorithms for H.264/AVC    Cited by: 11 (self-citations: 0, others: 11)
H.264/AVC adopts spatial intra prediction, which further improves coding efficiency, but the large number of supported intra prediction modes greatly increases prediction complexity. This paper analyzes the intra mode selection process in detail and proposes a fast Intra_4×4 mode selection algorithm for the rate-distortion optimization (RDO) setting. Based on the SATD (sum of absolute transformed differences) and the correlation between prediction modes of neighboring blocks, the algorithm eliminates more than 65% of unlikely Intra_4×4 modes in advance, avoiding unnecessary computation and thus substantially reducing intra prediction complexity while largely preserving H.264/AVC coding performance.
10.
Research on Mode Selection Algorithms for H.264/AVC Intra 4×4 Block Prediction    Cited by: 1 (self-citations: 0, others: 1)
Based on an analysis of H.264 intra prediction, a fast mode selection algorithm for intra 4×4 blocks is proposed. The algorithm computes the similarity between the current 4×4 block and its spatially/temporally adjacent 4×4 blocks to decide whether to skip the block's mode selection; for blocks not skipped, it eliminates unlikely prediction modes based on the sum of absolute transformed differences (SATD) and spatial/temporal correlation. Experiments show that, compared with JM10.2, encoding time decreases by 44.07% and 20.49% on average for the IIII and IPPP coding structures respectively, while the bit rate and the signal-to-noise ratio of the Y component (SNR-Y) remain essentially unchanged.
11.
Intelligent Selection of Instances for Prediction Functions in Lazy Learning Algorithms    Cited by: 2 (self-citations: 0, others: 2)
Lazy learning methods for function prediction use different prediction functions. Given a set of stored instances, a similarity measure, and a novel instance, a prediction function determines the value of the novel instance. A prediction function consists of three components: a positive integer k specifying the number of instances to be selected, a method for selecting the k instances, and a method for calculating the value of the novel instance given the k selected instances. This paper introduces a novel method called k surrounding neighbor (k-SN) for intelligently selecting instances and describes a simple k-SN algorithm. Unlike k nearest neighbor (k-NN), k-SN selects k instances that surround the novel instance. We empirically compared k-SN with k-NN using the linearly weighted average and local weighted regression methods. The experimental results show that k-SN outperforms k-NN with linearly weighted average and performs slightly better than k-NN with local weighted regression for the selected datasets.
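The prediction-function components described above can be sketched for the plain k-NN baseline with a linearly weighted average; inverse-distance weighting is one common reading of "linearly weighted", and k-SN's surrounding-neighbor selection is not reproduced here:

```python
import numpy as np

def knn_predict(X, y, query, k=3):
    """Predict the value of `query` from the k nearest stored instances,
    combining their values with inverse-distance (linear) weights."""
    X = np.asarray(X, float)
    y = np.asarray(y, float)
    d = np.linalg.norm(X - np.asarray(query, float), axis=1)
    idx = np.argsort(d)[:k]            # component 2: select the k instances
    w = 1.0 / (d[idx] + 1e-9)          # closer instances weigh more
    return float(np.dot(w, y[idx]) / w.sum())  # component 3: combine values
```

k-SN would replace the `argsort` selection with one that picks instances distributed around the query rather than simply the closest ones.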
12.
The logical structure of a laser mass spectrometry system is complex and varied, and laser output power is one of the key conditions for its operation; knowing the future trend of laser output power in advance provides an important basis for operational decisions, so research into predicting the laser output power of such systems is necessary. An M5 model, a linear regression model, and a support vector machine model were used to model and predict historical laser output power data from the mass spectrometry system. Comparing the models' prediction errors and mean errors shows that the M5 model gives the best predictions. Based on this analysis and prediction of the historical data, the M5 model was selected as the prediction model for the system's laser output power.
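The model comparison in item 12 reduces to computing an error metric per model on held-out data and keeping the smallest. A minimal sketch with made-up power readings (the numbers below are purely illustrative):

```python
import numpy as np

def mae(y_true, y_pred):
    """Mean absolute error between actual and predicted values."""
    return float(np.mean(np.abs(np.asarray(y_true) - np.asarray(y_pred))))

# hypothetical held-out laser output power readings and two models' predictions
actual      = [5.1, 5.3, 5.0, 5.4]
linear_pred = [5.0, 5.2, 5.2, 5.5]
m5_pred     = [5.1, 5.3, 5.1, 5.4]

errors = {"linear": mae(actual, linear_pred), "M5": mae(actual, m5_pred)}
best_model = min(errors, key=errors.get)   # model with the smallest error wins
```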
13.
A Brief Analysis of Adapting the American SPSS Package for China's Population Census and Forecasting    Cited by: 3 (self-citations: 0, others: 3)
In view of the current critical period for population census and forecasting, a population-growth example is used to show how to build a population forecasting model in SPSS and how to analyze the forecast results scientifically; the application prospects of SPSS are also discussed.
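A population-growth forecast like the SPSS example in item 13 can equally be sketched outside SPSS. A minimal log-linear (exponential-growth) fit, using made-up census counts rather than real data:

```python
import numpy as np

# hypothetical census counts (millions) at ten-year intervals: 10% per decade
years = np.array([1990.0, 2000.0, 2010.0])
population = np.array([100.0, 110.0, 121.0])

# fit log(population) = intercept + slope * year, then extrapolate forward
slope, intercept = np.polyfit(years, np.log(population), 1)
forecast_2020 = float(np.exp(intercept + slope * 2020.0))
```

With exactly geometric growth the fit is exact, so the extrapolation continues the 10%-per-decade trend.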
14.
15.
16.
17.
Information and Software Technology, 2013, 55(11): 1981-1993
Context: Effort-aware models, e.g., effort-aware bug prediction models, aim to help practitioners identify and prioritize buggy software locations according to the effort involved in fixing the bugs. Since the effort of current bugs is not yet known and the effort of past bugs is typically not explicitly recorded, effort-aware bug prediction models are forced to use approximations, such as the number of lines of code (LOC) of the predicted files. Objective: Although the choice of these approximations is critical for the performance of the prediction models, there is no empirical evidence on whether LOC is actually a good approximation. Therefore, in this paper, we investigate the question: is LOC a good measure of effort for use in effort-aware models? Method: We perform an empirical study on four open source projects, for which we obtain explicitly recorded effort data, and compare the use of LOC to various complexity, size, and churn metrics as measures of effort. Results: We find that a combination of complexity, size, and churn metrics is a better measure of effort than LOC alone. Furthermore, we examine the impact of our findings on previous effort-aware bug prediction work and find that using LOC as a measure of effort does not significantly affect the list of files being flagged; however, using LOC under-estimates the amount of effort required compared to our best effort predictor by approximately 66%. Conclusion: Studies using effort-aware models should not assume that LOC is a good measure of effort. For effort-aware bug prediction, using LOC provides results similar to combining complexity, churn, size, and LOC as a proxy for effort when prioritizing the most risky files. However, for effort estimation, using LOC may under-estimate the amount of effort required.
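Effort-aware prioritization as discussed above amounts to ranking files by predicted risk per unit of (approximated) effort. A minimal sketch with hypothetical files, using LOC as the effort proxy the paper questions:

```python
# (filename, predicted bug probability, LOC) -- hypothetical values
files = [
    ("core.c",  0.9, 1200),
    ("parse.c", 0.6,  200),
    ("util.c",  0.3,   50),
]

# rank by expected bugs found per line inspected (risk density);
# swapping LOC for a complexity/size/churn combination changes only the divisor
ranked = sorted(files, key=lambda f: f[1] / f[2], reverse=True)
order = [name for name, _, _ in ranked]
```

Note how the small, moderately risky file jumps ahead of the large, high-risk one, which is exactly why the effort proxy matters.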
18.
19.
20.
This paper is concerned with the problem of on-line prediction in the situation where some data are unlabelled and can never be used for prediction, and even when the data are labelled, the labels may arrive with a delay. We construct a modification of randomised transductive confidence machine for this case and prove a necessary and sufficient condition for its predictions being calibrated, in the sense that in the long run they are wrong with a prespecified probability under the assumption that the data are generated independently by the same distribution. The condition for calibration turns out to be very weak: feedback should be given on more than a logarithmic fraction of steps.