Similar Literature
A total of 17 similar documents were found.
1.
韩雪  冯桂  曹海燕 《信号处理》2018,34(6):680-687
The 3D-HEVC standard for coding 3D video adopts the multi-view-plus-depth format, and the added depth information sharply increases coding complexity. Targeting the quad-tree partition model and intra prediction modes of the coding unit (CU), this paper proposes a fast algorithm for depth-map intra coding. Otsu's operator is used to compute the maximum between-class variance of the current CU and judge whether it is flat; for flat CUs, quad-tree splitting is terminated and the number of intra modes traversed is reduced. Exploiting the similarity between a sub-CU and its parent CU, the already coded parent CU is further used to refine the early termination of CU splitting. Compared with the original 3D-HEVC algorithm, the proposed algorithm reduces coding time by 40.1% while the quality of the synthesized views is almost unchanged.
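
A minimal sketch of the flatness test described in this abstract, assuming 8-bit depth CUs and an illustrative variance threshold (the paper's actual threshold and full decision logic are not reproduced here):

```python
import numpy as np

def max_between_class_variance(cu: np.ndarray, levels: int = 256) -> float:
    """Otsu's criterion: maximum between-class variance of an 8-bit depth CU."""
    hist = np.bincount(cu.ravel(), minlength=levels).astype(np.float64)
    prob = hist / hist.sum()
    omega = np.cumsum(prob)                    # class-0 probability per threshold
    mu = np.cumsum(prob * np.arange(levels))   # cumulative first moment
    mu_total = mu[-1]
    valid = (omega > 0.0) & (omega < 1.0)      # thresholds giving two non-empty classes
    if not np.any(valid):
        return 0.0                             # single-valued CU: perfectly flat
    sigma_b = (mu_total * omega[valid] - mu[valid]) ** 2 / (omega[valid] * (1.0 - omega[valid]))
    return float(sigma_b.max())

def cu_is_flat(cu: np.ndarray, threshold: float = 50.0) -> bool:
    """Flat CUs may terminate quad-tree splitting and traverse fewer intra modes."""
    return max_between_class_variance(cu) < threshold

if __name__ == "__main__":
    flat_cu = np.full((32, 32), 120, dtype=np.uint8)
    edge_cu = np.hstack([np.zeros((32, 16), np.uint8), np.full((32, 16), 200, np.uint8)])
    print(cu_is_flat(flat_cu), cu_is_flat(edge_cu))   # True False
```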

2.
3D High Efficiency Video Coding (3D-HEVC) is the latest 3D video coding standard, but the depth-map coding tools it introduces greatly increase coding complexity; in particular, the quad-tree partitioning of intra-coded depth-map coding units (CUs) accounts for more than 90% of 3D-HEVC coding complexity. To address the high complexity of CU quad-tree partitioning in depth-map intra coding, this paper proposes a deep-learning-based scheme that quickly predicts the CU partition structure. First, a dataset for learning depth-map CU partition structures is constructed; second, a multi-branch convolutional neural network (MB-CNN) model that predicts the CU partition structure is built and trained on this dataset; finally, the MB-CNN model is embedded into the 3D-HEVC test platform, lowering partitioning complexity by directly predicting the CU partition structure of intra-coded depth maps. Compared with the reference algorithm, coding complexity is reduced by 37.4% on average. Experimental results show that the proposed algorithm effectively reduces the coding complexity of 3D-HEVC without affecting the quality of the synthesized views.
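
A small, hypothetical PyTorch sketch of a multi-branch CNN of the kind described; the layer sizes, branch count, and output format (one split probability per CU at three quad-tree levels of a 64x64 depth CTU) are illustrative assumptions, not the paper's MB-CNN architecture:

```python
import torch
import torch.nn as nn

class MultiBranchCNN(nn.Module):
    def __init__(self):
        super().__init__()
        self.stem = nn.Sequential(                       # shared feature extractor
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),                             # 64 -> 32
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),                             # 32 -> 16
        )
        # One branch per quad-tree level: 1, 4 and 16 split decisions.
        self.branches = nn.ModuleList(
            nn.Sequential(nn.Flatten(), nn.Linear(32 * 16 * 16, 64),
                          nn.ReLU(), nn.Linear(64, n_out))
            for n_out in (1, 4, 16)
        )

    def forward(self, x):
        feat = self.stem(x)
        # Sigmoid probabilities: one per CU at depth 0, 1 and 2.
        return [torch.sigmoid(branch(feat)) for branch in self.branches]

if __name__ == "__main__":
    ctu = torch.rand(8, 1, 64, 64)            # batch of normalized depth CTUs
    probs = MultiBranchCNN()(ctu)
    print([p.shape for p in probs])           # [(8, 1), (8, 4), (8, 16)]
```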

3.
栗晨阳  陈婧 《信号处理》2022,38(10):2180-2191
With the growing demand for stereoscopic and 3D video, research on 3D video coding methods has attracted increasing attention. The 3D-HEVC standard encodes 3D video in a combined texture-plus-depth format; because depth-map coding is included, new tools such as depth-map coding modes, inter-component prediction, and segment-wise DC coding are added, which sharply raises coding complexity. To reduce the coding time of 3D-HEVC, this paper proposes a fast early CU size decision algorithm for both texture and depth maps. Using the sum of the gradient matrix as the complexity measure of the current CU and its sub-CUs, CUs are divided into three classes: non-split CUs (NSCUs), split CUs (SCUs), and ordinary CUs. For an NSCU, intra prediction at smaller sizes is skipped; for an SCU, intra prediction of the current CU is skipped directly; for an ordinary CU, the original encoder procedure is executed. Experimental results show that, compared with the original platform, the proposed algorithm saves 40.92% coding time on average with essentially unchanged synthesized-view quality, and compared with the latest joint texture-depth optimized fast 3D-HEVC algorithm, it saves more coding time at comparable quality.
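
A minimal sketch of the three-way classification idea, using the sum of gradient magnitudes as the complexity measure; the two thresholds and the sub-CU check are illustrative assumptions rather than the paper's exact rules:

```python
import numpy as np

def gradient_sum(block: np.ndarray) -> float:
    """Sum of absolute gradients, used as a rough complexity measure of a block."""
    gy, gx = np.gradient(block.astype(np.float64))
    return float(np.abs(gx).sum() + np.abs(gy).sum())

def classify_cu(cu: np.ndarray, t_low: float, t_high: float) -> str:
    g = gradient_sum(cu)
    if g < t_low:
        return "NSCU"     # smooth: skip intra prediction of the smaller sub-CUs
    h, w = cu.shape
    subs = [cu[:h//2, :w//2], cu[:h//2, w//2:], cu[h//2:, :w//2], cu[h//2:, w//2:]]
    if g > t_high and all(gradient_sum(s) > t_high / 4 for s in subs):
        return "SCU"      # complex everywhere: skip this size and split directly
    return "ordinary"     # fall back to the unmodified encoder path

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    smooth = np.full((32, 32), 90, dtype=np.uint8)
    busy = rng.integers(0, 256, (32, 32), dtype=np.uint8)
    print(classify_cu(smooth, 500.0, 20000.0), classify_cu(busy, 500.0, 20000.0))
```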

4.
3D video coding based on High Efficiency Video Coding (3D-HEVC) is the next-generation 3D video coding standard under development. To reduce the computational complexity of mode decision in 3D-HEVC, this paper proposes a fast merge-mode decision method that exploits the high usage rate of merge mode in dependent-view texture pictures. In B frames, the correlation between the coding mode of the current coding unit (CU) and that of its reference block in the inter-view reference frame is analyzed; in P frames, the correlation between coding modes of CUs at adjacent partition depths is analyzed. Fast decision conditions are designed from these inter-view and inter-depth correlations to pre-identify CUs coded with merge/merge-skip mode; the identified CUs check only the related candidate prediction modes during mode decision, which lowers computational complexity. Experimental results show that, compared with the original 3D-HEVC algorithm, the proposed method saves 11.2% of the total coding time and 25.4% of the coding time of dependent-view texture pictures on average, with negligible rate-distortion loss.
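
A schematic sketch of how the two fast-decision conditions could be organized; the mode names, candidate list, and exact conditions are illustrative assumptions, not the paper's rules:

```python
def merge_only_b_frame(interview_ref_mode):
    """B frames: if the reference block in the inter-view reference frame was
    coded with merge or merge-skip, pre-decide merge for the current CU too."""
    return interview_ref_mode in ("MERGE", "MERGE_SKIP")

def merge_only_p_frame(parent_cu_mode, neighbor_cu_modes):
    """P frames: rely on the mode correlation between adjacent partition depths."""
    candidates = [parent_cu_mode] + list(neighbor_cu_modes)
    return all(m in ("MERGE", "MERGE_SKIP") for m in candidates)

def candidate_modes(is_b_frame, interview_ref_mode=None,
                    parent_cu_mode=None, neighbor_cu_modes=()):
    """Pre-identified CUs check only merge candidates; otherwise the full list."""
    merge_only = (merge_only_b_frame(interview_ref_mode) if is_b_frame
                  else merge_only_p_frame(parent_cu_mode, neighbor_cu_modes))
    return ["MERGE", "MERGE_SKIP"] if merge_only else ["MERGE", "MERGE_SKIP", "INTER_2Nx2N", "INTRA"]

print(candidate_modes(True, interview_ref_mode="MERGE_SKIP"))
print(candidate_modes(False, parent_cu_mode="INTER_2Nx2N", neighbor_cu_modes=["MERGE"]))
```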

5.
Exploiting the intra-view and inter-view correlations of multi-view video sequences together with inter-view motion-vector sharing, this paper proposes an error concealment algorithm for packet loss in depth sequences transmitted under 3D High Efficiency Video Coding (3D-HEVC). First, according to the hierarchical B-picture prediction (HBP) structure of 3D-HEVC and the texture characteristics of depth maps, lost depth blocks are divided into moving blocks and still blocks. Then, for damaged moving blocks, an outer boundary matching criterion combined with texture structure is used to select the relatively optimal motion/disparity vector for displacement-compensated concealment, while damaged still blocks are concealed quickly by direct copy from the reference frame. Finally, the reference frame is split and recombined to obtain new motion/disparity-compensated blocks that enhance poorly reconstructed blocks. Experimental results show that, compared with recently proposed reference algorithms, the concealed depth frames gain 0.25 to 2.03 dB in average peak signal-to-noise ratio (PSNR) and 0.001 to 0.006 in structural similarity (SSIM), and the subjective visual quality of the repaired regions is closer to the original depth map.
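
A simplified sketch of outer boundary matching for selecting a candidate motion/disparity vector; the one-pixel boundary width, the plain SAD criterion, and the candidate list are illustrative assumptions, not the paper's texture-aware criterion:

```python
import numpy as np

def obma_cost(ref_frame, cur_frame, x, y, size, mv):
    """SAD between the outer boundary of the lost block in the current frame
    and the same boundary around the candidate block in the reference frame."""
    dx, dy = mv
    cost = 0.0
    for cy, ry in [(y - 1, y + dy - 1), (y + size, y + dy + size)]:       # top/bottom rows
        cost += np.abs(cur_frame[cy, x:x + size].astype(float)
                       - ref_frame[ry, x + dx:x + dx + size].astype(float)).sum()
    for cx, rx in [(x - 1, x + dx - 1), (x + size, x + dx + size)]:       # left/right columns
        cost += np.abs(cur_frame[y:y + size, cx].astype(float)
                       - ref_frame[y + dy:y + dy + size, rx].astype(float)).sum()
    return cost

def best_vector(ref_frame, cur_frame, x, y, size, candidates):
    """Pick the candidate motion/disparity vector with the lowest boundary cost."""
    return min(candidates, key=lambda mv: obma_cost(ref_frame, cur_frame, x, y, size, mv))

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    ref = rng.integers(0, 256, (64, 64)).astype(np.uint8)
    cur = ref.copy()                                                       # pretend zero motion
    print(best_vector(ref, cur, 16, 16, 8, [(0, 0), (4, 0), (0, 4)]))      # (0, 0) wins
```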

6.
To address the high complexity of the view synthesis distortion optimization used in 3D-HEVC depth-map coding, a fast algorithm based on texture smoothness is proposed. First, pixel statistics of flat texture regions are analyzed by combining the properties of intra DC prediction with statistical methods, and a skip criterion based on texture flatness is defined. Then, when view synthesis distortion optimization is applied during depth-map coding, the depth regions corresponding to flat texture regions are separated in advance, and the synthesis-based distortion computation for pixels in these regions is terminated. Experimental results confirm the effectiveness of the algorithm, which saves a large amount of coding time while maintaining coding quality.

7.
Multi-view video plus depth (MVD) is the mainstream format of three-dimensional (3D) video. In 3D High Efficiency Video Coding, intra coding of depth video has high complexity; moreover, depth video obtained by depth estimation software is inaccurate, which adds texture to flat regions of the depth map and further increases intra coding complexity. To address these problems, this paper proposes a low-complexity depth-video intra coding algorithm combined with depth preprocessing. First, the depth video is preprocessed before coding to reduce texture caused by inaccurate depth. Second, a back-propagation neural network (BPNN) is used to predict the maximum partition depth of each largest coding unit (LCU). Finally, edge information of the depth video and the maximum partition depth of the corresponding color LCU are jointly used for early CU splitting termination and fast mode selection. Experimental results show that, while preserving virtual-view quality, the algorithm reduces BDBR by 0.33% and saves 50.63% of depth-video coding time on average.
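
A minimal sketch, not the paper's BPNN: a small back-propagation network (scikit-learn's MLPClassifier) that maps hand-crafted LCU features to a maximum partition depth in {0, 1, 2, 3}. The features, network size, and training data below are illustrative assumptions:

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

def lcu_features(lcu: np.ndarray) -> np.ndarray:
    """Simple features of a 64x64 LCU: variance, mean gradient magnitude, edge ratio."""
    gy, gx = np.gradient(lcu.astype(np.float64))
    grad = np.hypot(gx, gy)
    return np.array([lcu.var(), grad.mean(), (grad > 10.0).mean()])

rng = np.random.default_rng(0)
# Hypothetical training set: feature vectors and their known best maximum depths.
X_train = rng.random((200, 3))
y_train = rng.integers(0, 4, 200)

bpnn = MLPClassifier(hidden_layer_sizes=(16, 8), max_iter=2000, random_state=0)
bpnn.fit(X_train, y_train)

lcu = rng.integers(0, 256, (64, 64), dtype=np.uint8)
max_depth = int(bpnn.predict(lcu_features(lcu).reshape(1, -1))[0])
print("predicted maximum partition depth:", max_depth)   # CU splitting stops here
```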

8.
《信息技术》2016,(10):205-208
In video coding schemes based on the multi-view video plus depth (MVD) format, the coding performance of the depth video directly affects the quality of the finally rendered virtual views. For depth blocks containing object boundaries, conventional intra and inter prediction modes still leave room for improvement. This paper therefore proposes a depth-video coding method based on joint intra-inter prediction. The method first obtains the optimal intra prediction mode and the optimal inter prediction mode of the current depth block, then applies the two modes to different regions of the boundary block, and finally adjusts the weighting coefficients of the prediction results adaptively to realize joint prediction. Experimental results show that, compared with the conventional prediction modes of the 3D-HEVC platform, the proposed method achieves better coding performance.
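
A rough sketch of adaptively blending an intra and an inter prediction for a boundary depth block; the region mask and the fixed weighting rule are illustrative assumptions, not the paper's adaptive scheme:

```python
import numpy as np

def joint_prediction(intra_pred: np.ndarray, inter_pred: np.ndarray,
                     region_mask: np.ndarray, alpha: float = 0.75) -> np.ndarray:
    """Blend two predictions: region_mask == 1 marks the area where the intra
    mode fits better (e.g. one side of the depth edge); alpha controls how
    strongly each region trusts its preferred predictor."""
    w_intra = np.where(region_mask == 1, alpha, 1.0 - alpha)   # per-pixel intra weight
    blended = (w_intra * intra_pred.astype(np.float64)
               + (1.0 - w_intra) * inter_pred.astype(np.float64))
    return np.clip(np.rint(blended), 0, 255).astype(np.uint8)

if __name__ == "__main__":
    intra = np.full((8, 8), 60, np.uint8)
    inter = np.full((8, 8), 180, np.uint8)
    mask = np.zeros((8, 8), np.uint8)
    mask[:, :4] = 1                       # left half follows the intra predictor
    print(joint_prediction(intra, inter, mask))
```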

9.
3D High Efficiency Video Coding achieves high coding efficiency at the cost of heavy computation. To lower this complexity, this paper proposes a fast 3D-HEVC depth-map intra prediction algorithm based on deep-learning edge detection. The algorithm first applies a holistically-nested edge detection (HED) network to the depth map, then binarizes the resulting probability edge map with the maximum between-class variance (Otsu) method to obtain salient edge regions. Finally, different optimization strategies are designed for prediction units of different sizes in different regions, reducing the complexity of depth-map intra mode decision by skipping depth modeling modes and other unnecessary modes. Simulation results show that, compared with the original encoder, the proposed algorithm reduces total coding time by about 35% and depth-map coding time by about 42% on average, while the average bit rate of the synthesized views increases by only 0.11%; that is, coding time is reduced with negligible quality loss.
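
A minimal sketch of the post-edge-detection steps: Otsu binarization of an edge probability map and a per-PU rule that skips depth modeling modes (DMMs) for PUs without salient edges. The HED network itself is not included (the probability map is assumed to be given), and the skip rule and mode list are illustrative assumptions:

```python
import numpy as np
from skimage.filters import threshold_otsu

def salient_edge_mask(edge_prob: np.ndarray) -> np.ndarray:
    """Binarize an edge probability map (values in [0, 1]) with Otsu's method."""
    return edge_prob > threshold_otsu(edge_prob)

def candidate_modes(edge_mask: np.ndarray, x: int, y: int, size: int):
    """PUs that contain no salient edge can skip DMM1/DMM4 and test fewer modes."""
    pu_has_edge = edge_mask[y:y + size, x:x + size].any()
    base_modes = ["planar", "DC", "angular"]
    return base_modes + ["DMM1", "DMM4"] if pu_has_edge else base_modes

if __name__ == "__main__":
    prob = np.zeros((64, 64))
    prob[:, 31:33] = 0.9                        # a vertical edge down the middle
    mask = salient_edge_mask(prob)
    print(candidate_modes(mask, 0, 0, 16))      # edge-free PU: DMMs skipped
    print(candidate_modes(mask, 24, 0, 16))     # PU containing the edge: DMMs kept
```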

10.
3D-HEVC is the video coding standard most recently developed for efficient coding of 3D video and free-viewpoint video; it requires encoding the texture videos and depth maps of several viewpoints simultaneously. Coding depth maps entirely with conventional techniques produces ringing artifacts around the sharp edges inside depth maps, so several new coding tools dedicated to depth maps have been developed. This article describes these coding tools in detail and introduces the rate-distortion optimization method used when coding depth maps.

11.
To reduce the computational complexity of screen content video coding (SCC), this paper proposes a fast algorithm for HEVC-SCC based on the gray level co-occurrence matrix and a Gabor feature model, denoted GGM. By studying how the number of non-zero entries in the gray level co-occurrence matrix correlates with the partitioning depth, the coding unit (CU) size for intra coding can be pre-judged, selectively skipping the intra prediction process of CUs at other depths. With Gabor filters, edge information reflecting how screen content images appear to the human visual system (HVS) is extracted. According to the Gabor features, CUs are classified into natural content CUs (NCCUs), smooth screen content CUs (SSCUs), and complex screen content CUs (CSCUs), so that the calculation and evaluation of unnecessary intra prediction modes are skipped. Under the all-intra (AI) configuration, experimental results show that the proposed GGM algorithm saves 42.13% of encoding time compared with SCM-8.3, with only a 1.85% bit-rate increase.
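
A small sketch of the two feature extractors named above, using scikit-image: the number of non-zero entries in a gray level co-occurrence matrix and the mean Gabor response of a CU. The quantization to 32 gray levels, the filter frequency, and the toy classification thresholds are illustrative assumptions:

```python
import numpy as np
from skimage.feature import graycomatrix
from skimage.filters import gabor

def glcm_nonzero_count(cu: np.ndarray, levels: int = 32) -> int:
    """Few distinct gray-level transitions (few non-zero GLCM entries) suggest a
    CU that will not be split further."""
    quantized = (cu.astype(np.uint16) * levels // 256).astype(np.uint8)
    glcm = graycomatrix(quantized, distances=[1], angles=[0], levels=levels)
    return int(np.count_nonzero(glcm))

def gabor_edge_strength(cu: np.ndarray, frequency: float = 0.25) -> float:
    """Mean magnitude of the Gabor response, a rough HVS-oriented edge measure."""
    real, imag = gabor(cu.astype(np.float64), frequency=frequency)
    return float(np.hypot(real, imag).mean())

def classify_cu(cu: np.ndarray) -> str:
    """Toy three-way split into SSCU / NCCU / CSCU from the Gabor edge strength."""
    strength = gabor_edge_strength(cu)
    if strength < 5.0:
        return "SSCU"      # smooth screen content
    if strength < 30.0:
        return "NCCU"      # natural content
    return "CSCU"          # complex screen content

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    cu = rng.integers(0, 256, (32, 32), dtype=np.uint8)
    print(glcm_nonzero_count(cu), classify_cu(cu))
```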

12.
Unlike color video, which is displayed directly, depth video sequences in a 3D video system provide the geometric information needed when rendering virtual views, so directly applying existing coding algorithms to depth images has certain limitations. Considering the role of depth video sequences and the characteristics of depth images, a rendering-quality-oriented fast intra coding method for depth images is proposed. It comprises an image region partition algorithm based on the statistical properties and spatial correlation of depth images, fast coding unit (CU) and prediction block (PB) decision algorithms for HEVC, and an intra mode pre-selection algorithm. Experimental results show that, compared with coding the depth video directly with the HEVC test software, the fast algorithm speeds up coding of each view by more than 35% on average while keeping almost the same subjective and objective rendering quality.

13.
To address the high complexity of intra coding unit (CU) partitioning for depth images in 3D High Efficiency Video Coding (3D-HEVC), this paper proposes an adaptive fast CU partitioning algorithm based on corner points and the color image. First, a corner operator is applied and a certain number of corners is selected according to the quantization parameter, from which a pre-partition of the CU is derived; then the pre-partitioned CU depth levels are adjusted jointly with the CU partition of the color image; finally, the depth-level range of the current CU is narrowed according to the adjusted levels. Experimental results show that, compared with the original 3D-HEVC algorithm, the proposed algorithm reduces coding time by about 63% on average; compared with an algorithm based only on the color image, it reduces coding time by about 13% and the average bit rate by about 3%, effectively improving coding efficiency.
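
A rough sketch of the corner-driven pre-partition step: count Harris corners inside a CU and map the count to a preliminary depth level. The QP-dependent corner budget and the count-to-depth mapping are illustrative assumptions:

```python
import numpy as np
from skimage.feature import corner_harris, corner_peaks

def detect_corners(depth_image: np.ndarray, qp: int) -> np.ndarray:
    """Keep more corners at low QP (finer partitions), fewer at high QP."""
    response = corner_harris(depth_image.astype(np.float64))
    max_corners = max(16, 512 - 8 * qp)              # hypothetical QP-dependent budget
    return corner_peaks(response, min_distance=2, num_peaks=max_corners)

def pre_partition_depth(corners: np.ndarray, x: int, y: int, size: int) -> int:
    """Map the number of corners falling inside the CU to a depth level 0..3."""
    inside = ((corners[:, 0] >= y) & (corners[:, 0] < y + size) &
              (corners[:, 1] >= x) & (corners[:, 1] < x + size)).sum()
    if inside == 0:
        return 0          # homogeneous CU: keep 64x64
    if inside <= 2:
        return 1
    if inside <= 8:
        return 2
    return 3              # many corners: allow the full split down to 8x8

if __name__ == "__main__":
    depth = np.zeros((128, 128), dtype=np.uint8)
    depth[40:80, 40:80] = 200                         # a square object creates corners
    corners = detect_corners(depth, qp=30)
    print(pre_partition_depth(corners, 0, 0, 64), pre_partition_depth(corners, 64, 64, 64))
```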

14.
To reduce the coding time of HEVC screen content coding and improve coding efficiency, this paper proposes a decision-tree-based algorithm for fast CU partitioning and simplified PU mode selection in HEVC screen-content intra coding. The characteristics of video sequences are analyzed and effective feature values are extracted to build decision-tree models: variance, gradient information entropy, and the number of distinct pixel values are used to build the CU partition decision tree, while the average non-zero gradient, pixel information entropy, and other features are used to build the PU mode classification decision tree. At each depth, the features of the corresponding CU are computed and fed to the decision-tree model to quickly decide the CU partition and the type of PU mode. By reducing the CU depths and PU modes that are traversed, the decision-tree approach lowers coding complexity and accelerates intra coding. Experimental results show that, compared with the HEVC screen content reference algorithm, the proposed algorithm reduces coding time by 30.81% on average, with an average PSNR drop of 0.05 dB and an average bit-rate increase of 1.15%.
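
A minimal sketch of the CU-split decision tree using scikit-learn; the three features follow the description above (variance, gradient information entropy, number of distinct pixel values), while the training data here is hypothetical:

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

def entropy(values: np.ndarray, bins: int = 32) -> float:
    hist, _ = np.histogram(values, bins=bins)
    p = hist / hist.sum()
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

def cu_features(cu: np.ndarray) -> np.ndarray:
    gy, gx = np.gradient(cu.astype(np.float64))
    grad = np.hypot(gx, gy)
    return np.array([cu.var(),                    # variance
                     entropy(grad),               # gradient information entropy
                     np.unique(cu).size])         # number of distinct pixel values

# Hypothetical labelled CUs: 1 = split, 0 = do not split.
rng = np.random.default_rng(1)
train_cus = [rng.integers(0, 256, (32, 32), dtype=np.uint8) for _ in range(100)]
X = np.stack([cu_features(c) for c in train_cus])
y = rng.integers(0, 2, len(train_cus))

split_tree = DecisionTreeClassifier(max_depth=4, random_state=1).fit(X, y)
test_cu = rng.integers(0, 256, (32, 32), dtype=np.uint8)
print("split this CU:", bool(split_tree.predict(cu_features(test_cu).reshape(1, -1))[0]))
```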

15.
As an extension of the High Efficiency Video Coding (HEVC) standard, 3D-HEVC requires encoding multiple texture views and depth maps, and it inherits the same quad-tree coding structure as HEVC. Due to the distinct properties of texture views and depth maps, existing fast intra prediction approaches were designed for the coding of texture views and depth maps separately. To further reduce the coding complexity of 3D-HEVC, a fast coding unit (CU) size decision approach based on a self-learning residual model is proposed for the intra coding of both texture views and depth maps. The residual signal, defined as the difference between the original luminance pixel and the optimal prediction luminance pixel, is first extracted from each CU. Since the residual signal is strongly correlated with the optimal CU partition, it is used as the feature of each CU. Then, a self-learning residual model is established by intra feature learning, which iteratively learns the features of previously encoded coding tree units (CTUs) generated by the encoder itself. Finally, a binary classifier built on the self-learning residual model early-terminates the CU size decision for both texture views and depth maps. Experimental results show the proposed fast intra CU size decision approach achieves 33.3% and 49.3% encoding time reduction on average for texture views and depth maps, respectively, with negligible loss of overall video quality.

16.
High Efficiency Video Coding (HEVC) is a video coding standard achieving about a 50% bit-rate reduction compared to the popular H.264/AVC High Profile at the same subjective video quality. The better coding efficiency is attained, however, at the cost of significantly increased encoding complexity. Therefore, fast encoding algorithms with little loss in coding efficiency are necessary for HEVC to be adopted successfully in real applications. This paper proposes a fast encoding technique applicable to HEVC all-intra encoding, consisting of coding unit (CU) search depth prediction, early CU splitting termination, and fast mode decision. In CU search depth prediction, the depth of each encoded CU in the current coding tree unit (CTU) is limited to a predicted range, which is usually narrower than the full depth range. Early CU splitting termination skips the mode search of the sub-CUs when the rate-distortion (RD) cost of the current CU falls below the RD cost estimated for the current CU depth. The RD cost and the encoded CU depth distribution of the collocated CTU in the previous frame are used both to predict the CU depth search range and to estimate the RD cost for splitting termination. Fast mode decision reduces the number of candidate modes for full rate-distortion optimized quantization on the basis of the low-complexity costs computed in the preceding rough mode decision step. When all three methods are applied, the proposed fast HEVC intra encoding technique reduces the encoding time of the reference encoder by 57% on average, with only a 0.6% coding efficiency loss in terms of Bjontegaard delta (BD) rate increase under the HEVC common test conditions.
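
A condensed sketch of the two decisions described above: limiting the CU depth search range from the collocated CTU of the previous frame, and terminating splitting early when the current RD cost drops below an estimated cost. The one-level margin and the averaging-based cost estimate are assumptions, not the paper's exact estimators:

```python
def predicted_depth_range(collocated_cu_depths):
    """Search only around the depths used by the collocated CTU (plus a margin of 1)."""
    return max(0, min(collocated_cu_depths) - 1), min(3, max(collocated_cu_depths) + 1)

def estimated_rd_cost(collocated_rd_costs, depth):
    """Estimate the RD cost at this depth as the mean cost the collocated CTU paid there."""
    costs = [c for d, c in collocated_rd_costs if d == depth]
    return sum(costs) / len(costs) if costs else float("inf")

def stop_splitting(current_rd_cost, collocated_rd_costs, depth):
    """Skip the mode search of the four sub-CUs when the current CU is already cheap."""
    return current_rd_cost < estimated_rd_cost(collocated_rd_costs, depth)

# Example: the collocated CTU used depths 1 and 2, with these (depth, RD cost) pairs.
history = [(1, 1200.0), (1, 1100.0), (2, 500.0), (2, 450.0)]
print(predicted_depth_range([1, 1, 2, 2]))       # (0, 3)
print(stop_splitting(900.0, history, 1))         # True: below the ~1150 estimate
```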

17.
In the literature, designs of H.264 to High Efficiency Video Coding (HEVC) transcoders mostly focus on inter transcoding. This paper proposes FITD, a fast intra transcoding system from H.264 to HEVC based on discrete cosine transform (DCT) coefficients and intra prediction modes, which reuses the intra information retrieved from the H.264 decoder. To design effective transcoding strategies, FITD refers not only to the intra prediction block sizes and intra prediction modes but also uses the DCT coefficients to help the transcoder estimate block complexity. The DCT coefficients and the intra prediction information embedded in the H.264 bitstream are used to predict a coding depth map for depth limitation and early termination, simplifying the HEVC re-encoding process. Once the HEVC encoder obtains the predicted size of a CU from the depth map, it stops branching into further CU levels when the predicted depth is reached. As a result, the number of CU branches and predictions in the HEVC re-encoder is substantially reduced, achieving fast and accurate intra transcoding. Experimental results show that FITD is 1.7 to 2.5 times faster than the original HEVC encoder on intra frames, while the bit rate increases by 3% or less and the PSNR degradation is kept within 0.1 dB. Compared with previous H.264-to-HEVC transcoding approaches, FITD maintains a better trade-off between re-encoding speed and video quality.
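
A toy sketch of the transcoder idea: use the non-zero DCT coefficient counts of the decoded H.264 4x4 blocks (plus the H.264 intra block size) to predict a depth limit for the collocated HEVC CU. The count thresholds and the Intra_4x4 rule are illustrative assumptions, not FITD's trained mapping:

```python
def predicted_cu_depth(nonzero_dct_counts, h264_intra_block_size):
    """More non-zero coefficients or smaller H.264 intra blocks mean more detail,
    so allow the HEVC re-encoder to split deeper before terminating early."""
    density = sum(nonzero_dct_counts) / len(nonzero_dct_counts)   # average per 4x4 block
    depth = 0 if density < 1.0 else 1 if density < 4.0 else 2 if density < 9.0 else 3
    if h264_intra_block_size == 4:       # Intra_4x4 regions are usually detailed
        depth = max(depth, 2)
    return depth

def try_split(current_depth, depth_limit):
    """The HEVC re-encoder stops branching once the predicted depth is reached."""
    return current_depth < depth_limit

print(predicted_cu_depth([0, 0, 1, 0], 16))   # smooth 16x16 region -> depth 0
print(predicted_cu_depth([7, 12, 9, 10], 4))  # busy Intra_4x4 region -> depth 3
```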
