Temporal consistency enhancement on depth sequences (深度图时域一致性增强)
Cite this article: ZUO Yi-fan, AN Ping, MA Ran, SHEN Li-quan, ZHANG Zhao-yang. Temporal consistency enhancement on depth sequences [J]. 光电子·激光 (Journal of Optoelectronics·Laser), 2014(1): 172-177.
Authors: ZUO Yi-fan (左一帆), AN Ping (安平), MA Ran (马然), SHEN Li-quan (沈礼权), ZHANG Zhao-yang (张兆杨)
Affiliation: Key Laboratory of Advanced Displays and System Application, Ministry of Education, School of Communication and Information Engineering, Shanghai University, Shanghai 200072, China
Funding: Supported by the National Natural Science Foundation of China (61172096,4) and the Key Project of the Science and Technology Commission of Shanghai Municipality (12DZ2293500, 12dz1500401)
Abstract: At the sender of a free viewpoint television (FTV) system, the data consist of the texture videos captured by multiple cameras and the corresponding depth information; at the receiver, virtual views are rendered from the texture sequences and the estimated depth information by 3D warping. Obtaining high-quality depth information is therefore an important part of an FTV system. Because current non-interactive depth estimation methods process each frame independently, the resulting depth map sequences often lack temporal consistency. Ideally, the depth values of static regions should be identical in adjacent frames, yet they are often estimated differently, which severely degrades both coding efficiency and rendering quality. Since a depth map represents the distance of the scene shown in the texture image from the camera, erroneous depth values can be identified by analyzing the texture video. Based on judging the reliability of the depth values and the motion property of the current region, an adaptive temporally weighted algorithm for enhancing the temporal consistency of depth maps is proposed. Experiments show that the proposed algorithm effectively suppresses erroneous depth discontinuities in static regions and produces more stable depth map sequences, so that the temporal rendering quality of virtual views is enhanced and the coding efficiency is improved.
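The abstract describes detecting unreliable depth values by analyzing the texture video and classifying regions as static or moving, but gives no concrete procedure. The fragment below is a minimal, hypothetical sketch of such a motion test, not the paper's actual method: a block is treated as static when the mean absolute difference between consecutive grayscale texture frames falls below a threshold. The function name static_block_mask, the block size and the threshold are assumptions made for illustration (Python/NumPy).

import numpy as np

def static_block_mask(texture_prev, texture_cur, block=16, threshold=3.0):
    """Mark blocks whose texture barely changes between consecutive frames.

    texture_prev, texture_cur: 2-D grayscale frames (uint8 or float arrays).
    Returns a boolean array with one entry per block; True means "static".
    Block size and threshold are illustrative, not values from the paper.
    """
    diff = np.abs(texture_cur.astype(np.float32) - texture_prev.astype(np.float32))
    h, w = diff.shape
    mask = np.zeros((h // block, w // block), dtype=bool)
    for by in range(h // block):
        for bx in range(w // block):
            patch = diff[by * block:(by + 1) * block, bx * block:(bx + 1) * block]
            mask[by, bx] = patch.mean() < threshold
    return mask

A mask of this kind can then gate the temporal filtering of the depth frames; a matching sketch of that step follows the English abstract below.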

Keywords: 3DTV; temporal consistency; view synthesis
Received: 2013-05-17

Temporal consistency enhancement on depth sequences
Affiliation: Key Laboratory of Advanced Displays and System Application, Ministry of Education, School of Communication and Information Engineering, Shanghai University, Shanghai 200072, China
Abstract: The data obtained at the sender of a free viewpoint TV (FTV) system consist of the view sequences captured by several cameras and the corresponding estimated depth data. At the receiver, the virtual view is synthesized by using the 3D warping technique based on the view sequences and the estimated depth data. Therefore, how to obtain depth data of high quality is one of the key issues for an FTV system. Currently, depth sequences generated by automatic depth estimation suffer from the temporal inconsistency problem. Ideally, the depth values of static objects remain the same in adjacent frames, but they are often estimated differently. These temporal depth errors significantly degrade the visual quality of the synthesized virtual view as well as the coding efficiency of the depth sequences. Since depth sequences correspond to texture sequences, some erroneous temporal depth values can be detected by analyzing the corresponding texture sequences. Based on the reliability of the depth values and the motion properties of the current areas, this paper proposes a novel solution that enhances the temporal consistency of depth sequences by applying adaptive temporal filtering to them. Experimental results demonstrate that the proposed depth enhancement algorithm can effectively suppress transient depth errors and generate more stable depth sequences. On this basis, the temporal rendering quality of the virtual viewpoint is improved, and the coding efficiency is enhanced.
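To make the "adaptive temporal filtering" step concrete, the sketch below shows one plausible reading under assumed details: inside blocks judged static (for instance by the texture-difference mask sketched earlier in this record), the current depth is blended with the previously filtered depth, and the blending weight shrinks when the two depth values disagree strongly, i.e. when the current depth sample looks unreliable. The Gaussian weighting, the constant sigma and the function name temporally_filter_depth are illustrative assumptions, not details taken from the paper (Python/NumPy).

import numpy as np

def temporally_filter_depth(depth_prev, depth_cur, static_mask, block=16, sigma=8.0):
    """Blend current and previous depth inside static blocks.

    depth_prev:  previously filtered depth frame (2-D array).
    depth_cur:   depth frame estimated for the current texture frame.
    static_mask: boolean block mask, e.g. from static_block_mask().
    Moving blocks keep the current depth untouched; static blocks are pulled
    towards the previous depth, the more strongly the better the two agree.
    """
    out = depth_cur.astype(np.float32).copy()
    prev = depth_prev.astype(np.float32)
    for by in range(static_mask.shape[0]):
        for bx in range(static_mask.shape[1]):
            if not static_mask[by, bx]:
                continue  # moving block: trust the freshly estimated depth
            ys = slice(by * block, (by + 1) * block)
            xs = slice(bx * block, (bx + 1) * block)
            d = np.abs(out[ys, xs] - prev[ys, xs]).mean()
            w = np.exp(-(d * d) / (2.0 * sigma * sigma))  # reliability-style weight in [0, 1]
            out[ys, xs] = w * prev[ys, xs] + (1.0 - w) * out[ys, xs]
    return out

Typical per-frame usage of the two sketches together (hypothetical variable names):

mask = static_block_mask(texture_prev, texture_cur)
depth_filtered = temporally_filter_depth(depth_filtered_prev, depth_cur, mask)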
Keywords: 3DTV; temporal consistency; view synthesis
This article is indexed by CNKI and other databases.