

A video prediction approach for animating single face image
Authors: Zhao Yong, Oveneke Meshia Cédric, Jiang Dongmei, Sahli Hichem
Affiliations:
1. NPU-VUB joint AVSP Research Lab, School of Computer Science, Northwestern Polytechnical University (NPU), 127 West Youyi Road, Xi’an, 710072, People’s Republic of China
2. VUB-NPU joint AVSP Research Lab, Department of Electronics & Informatics (ETRO), Vrije Universiteit Brussel (VUB), Pleinlaan 2, 1050, Brussels, Belgium
3. Interuniversity Microelectronics Centre (IMEC), Kapeldreef 75, 3001, Heverlee, Belgium
Abstract:

Generating dynamic 2D image-based facial expressions is a challenging task in facial animation. Much research has focused on performance-driven facial animation from given videos or images of a target face, whereas animating a single face image driven by emotion labels remains a less explored problem. In this work, we treat the task of animating a single face image from emotion labels as a conditional video prediction problem and propose a novel framework that combines factored conditional restricted Boltzmann machines (FCRBM) with a reconstruction contractive auto-encoder (RCAE). A modified RCAE, together with an efficient training strategy, is used to extract low-dimensional features and reconstruct face images. The FCRBM serves as the animator, predicting the facial expression sequence in the feature space given discrete emotion labels and a frontal neutral face image as input. Quantitative and qualitative evaluations on two facial expression databases, together with comparisons to the state of the art, demonstrate the effectiveness of the proposed framework for animating a frontal neutral face image from given emotion labels.
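
To make the described pipeline concrete, below is a minimal sketch of the two-stage design: an auto-encoder maps face images to a low-dimensional feature space, and a conditional sequence model rolls out a feature trajectory from the encoded neutral face and a discrete emotion label, which is then decoded frame by frame. The layer sizes, the flattened 64×64 input, and the GRU used here as a stand-in for the paper's FCRBM (with a plain reconstruction objective in place of the RCAE's contractive training) are illustrative assumptions, not the authors' implementation.

```python
# Hypothetical sketch of the encode-predict-decode pipeline described in the
# abstract.  The GRU below is a stand-in for the FCRBM animator, and the
# auto-encoder is a plain MLP, not the paper's RCAE; all sizes are assumed.
import torch
import torch.nn as nn

IMG_DIM, FEAT_DIM, NUM_EMOTIONS, SEQ_LEN = 64 * 64, 128, 6, 16  # assumed sizes

class AutoEncoder(nn.Module):
    """Auto-encoder: encode a face image to features, decode back to pixels."""
    def __init__(self):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(IMG_DIM, 512), nn.ReLU(),
                                 nn.Linear(512, FEAT_DIM))
        self.dec = nn.Sequential(nn.Linear(FEAT_DIM, 512), nn.ReLU(),
                                 nn.Linear(512, IMG_DIM), nn.Sigmoid())

    def forward(self, x):
        z = self.enc(x)
        return self.dec(z), z

class Animator(nn.Module):
    """Conditional predictor in feature space (GRU stand-in for the FCRBM)."""
    def __init__(self):
        super().__init__()
        self.rnn = nn.GRU(FEAT_DIM + NUM_EMOTIONS, 256, batch_first=True)
        self.out = nn.Linear(256, FEAT_DIM)

    def forward(self, z0, emotion_onehot, steps=SEQ_LEN):
        # Autoregressively roll out a feature sequence from the neutral-face
        # code z0, conditioned on the discrete emotion label at every step.
        z, h, seq = z0, None, []
        for _ in range(steps):
            inp = torch.cat([z, emotion_onehot], dim=-1).unsqueeze(1)
            o, h = self.rnn(inp, h)
            z = self.out(o.squeeze(1))
            seq.append(z)
        return torch.stack(seq, dim=1)  # (batch, steps, FEAT_DIM)

if __name__ == "__main__":
    ae, animator = AutoEncoder(), Animator()
    neutral = torch.rand(1, IMG_DIM)                      # flattened neutral face
    emotion = nn.functional.one_hot(torch.tensor([3]), NUM_EMOTIONS).float()
    _, z0 = ae(neutral)                                   # encode neutral face
    feats = animator(z0, emotion)                         # predicted feature trajectory
    frames = ae.dec(feats)                                # decode each step to a frame
    print(frames.shape)                                   # torch.Size([1, 16, 4096])
```

Swapping the GRU for a proper FCRBM would only change the Animator module; the encode-predict-decode structure of the framework stays the same.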
