Adversarial Video Generation Method Based on Multimodal Input
Cite this article: Yu Haitao, Yang Xiaoshan, Xu Changsheng. Adversarial Video Generation Method Based on Multimodal Input[J]. Journal of Computer Research and Development, 2020, 57(7): 1522-1530. DOI: 10.7544/issn1000-1239.2020.20190479
Authors: Yu Haitao, Yang Xiaoshan, Xu Changsheng
Affiliation: 1. School of Computer and Information, Hefei University of Technology, Hefei 230031, China; 2. National Laboratory of Pattern Recognition (Institute of Automation, Chinese Academy of Sciences), Beijing 100190, China (yuht@mail.hfut.edu.cn)
Funding: Independent Research Project of the State Key Laboratory; National Key Research and Development Program of China; National Natural Science Foundation of China
Abstract: Video generation is an important and challenging task in computer vision and multimedia. The ability to automatically generate realistic video is an important indicator of how completely an algorithm understands visual appearance and motion. Existing video generation methods based on generative adversarial networks (GANs) usually lack an effective and controllable way to generate coherent video. This paper proposes a new multimodal conditional video generation model. The model takes an image and a text description as input, derives the motion information of the video through a text feature encoding network and a motion feature decoding network, and combines this motion with the input image to generate a coherent video sequence. In addition, the method predicts video frames by applying affine transformations to the input image, which makes the generation model more controllable and the results more robust. Experimental results on the SBMG (single-digit bouncing MNIST GIFs), TBMG (two-digit bouncing MNIST GIFs), and KTH (Kungliga Tekniska Högskolan human actions) datasets show that the proposed method outperforms existing video generation methods in both object clarity and video coherence. Qualitative evaluation, together with quantitative evaluation using the SSIM (structural similarity index) and PSNR (peak signal-to-noise ratio) metrics, further shows that the proposed multimodal video frame generation network plays a key role in the generation process.
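To make the described pipeline concrete, below is a minimal PyTorch sketch of the generation path: a text feature encoding network, a motion feature decoding network that unrolls one state per frame, and frame prediction by affine transformation of the input image. All module choices, layer sizes, and names (TextMotionGenerator, affine_head, the use of GRUs, etc.) are illustrative assumptions, not the paper's actual architecture.

```python
# A minimal sketch of the multimodal generation path described in the
# abstract. Every hyperparameter and module choice here is an assumption.
import torch
import torch.nn as nn
import torch.nn.functional as F

class TextMotionGenerator(nn.Module):
    def __init__(self, vocab_size=1000, embed_dim=128, hidden_dim=256, num_frames=16):
        super().__init__()
        self.num_frames = num_frames
        # Text feature encoding network: embed the caption, encode it with a GRU.
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.text_encoder = nn.GRU(embed_dim, hidden_dim, batch_first=True)
        # Motion feature decoding network: unroll one hidden state per frame.
        self.motion_decoder = nn.GRU(hidden_dim, hidden_dim, batch_first=True)
        # Predict a 2x3 affine matrix (6 parameters) for each frame.
        self.affine_head = nn.Linear(hidden_dim, 6)

    def forward(self, image, tokens):
        # image: (B, C, H, W) input frame; tokens: (B, T) caption token ids.
        _, h = self.text_encoder(self.embed(tokens))           # h: (1, B, hidden)
        # Feed the text code to the motion decoder once per frame.
        motion_in = h.transpose(0, 1).repeat(1, self.num_frames, 1)
        motion, _ = self.motion_decoder(motion_in)             # (B, F, hidden)
        theta = self.affine_head(motion).view(-1, 2, 3)        # (B*F, 2, 3)
        # Predict each frame by warping the input image with its affine matrix.
        B, C, H, W = image.shape
        frames = image.unsqueeze(1).expand(B, self.num_frames, C, H, W).reshape(-1, C, H, W)
        grid = F.affine_grid(theta, frames.shape, align_corners=False)
        warped = F.grid_sample(frames, grid, align_corners=False)
        return warped.view(B, self.num_frames, C, H, W)        # (B, F, C, H, W)
```

The design point this sketch illustrates is the one the abstract claims: because each frame is produced by an affine warp of the given input image rather than synthesized from scratch, the network only decides where content moves, not what it looks like, which is what makes the generator more controllable and its output more robust.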

Keywords: deep learning; video generation; video prediction; convolutional neural network; generative adversarial network (GAN)
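The quantitative evaluation cited in the abstract uses standard full-reference image metrics. Below is a short sketch of per-frame SSIM and PSNR computed with scikit-image; the grayscale (F, H, W) array layout and the simple per-frame averaging are assumptions about how the scores would be aggregated over a video.

```python
# Per-frame SSIM/PSNR between a generated video and its ground truth,
# averaged over frames. Assumes grayscale float frames in [0, 1].
import numpy as np
from skimage.metrics import structural_similarity, peak_signal_noise_ratio

def video_ssim_psnr(generated, reference):
    """generated, reference: (F, H, W) float arrays in [0, 1]."""
    ssim_vals, psnr_vals = [], []
    for gen, ref in zip(generated, reference):
        ssim_vals.append(structural_similarity(ref, gen, data_range=1.0))
        psnr_vals.append(peak_signal_noise_ratio(ref, gen, data_range=1.0))
    return float(np.mean(ssim_vals)), float(np.mean(psnr_vals))
```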
