Refilming with Depth-Inferred Videos
Authors:Guofeng Zhang Zilong Dong Jiaya Jia Liang Wan Tien-Tsin Wong Hujun Bao
Affiliation:State Key Lab. of CAD&CG, Zhejiang Univ., Hangzhou, China;
Abstract:Compared to still-image editing, content-based video editing faces the additional challenge of maintaining spatiotemporal consistency with respect to scene geometry. This makes it difficult to seamlessly modify video content, for instance, to insert or remove an object. In this paper, we present a new video editing system for creating spatiotemporally consistent and visually appealing refilming effects. Unlike typical filming practice, our system requires no labor-intensive construction of 3D models/surfaces mimicking the real scene. Instead, it is based on an unsupervised inference of view-dependent depth maps for all video frames. We provide interactive tools requiring only a small amount of user input to perform elementary video content editing, such as separating video layers, completing the background scene, and extracting moving objects. These tools can be used to produce a variety of visual effects in our system, including but not limited to video composition, the "predator" effect, bullet-time, depth-of-field, and fog synthesis. Some of the effects can be achieved in real time.
Keywords:
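As a rough illustration of how a per-frame depth map enables the depth-driven effects named in the abstract, the sketch below blends a fog layer into one RGB frame using the standard exponential attenuation model. The function name, fog color, and density value are illustrative assumptions, not part of the authors' system, which infers the depth maps themselves and supports several further effects.

```python
import numpy as np

def synthesize_fog(frame, depth, fog_color=(200, 205, 210), density=0.08):
    """Blend a fog layer into an RGB frame using per-pixel depth.

    frame : HxWx3 uint8 RGB image.
    depth : HxW float array of scene depth (larger = farther).
    Fog weight follows the standard exponential attenuation model
    1 - exp(-density * depth); density and fog_color are illustrative.
    """
    frame = frame.astype(np.float32)
    fog = np.asarray(fog_color, dtype=np.float32)
    # Per-pixel blending weight: distant pixels receive more fog.
    w = 1.0 - np.exp(-density * depth.astype(np.float32))
    out = frame * (1.0 - w)[..., None] + fog * w[..., None]
    return np.clip(out, 0, 255).astype(np.uint8)
```

Applying the same per-frame operation across the sequence preserves temporal consistency as long as the depth maps themselves are temporally consistent, which is what the paper's view-dependent depth inference is designed to provide.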