Expression transfer for facial sketch animation
Authors: Yang Yang, Nanning Zheng, Yuehu Liu, Shaoyi Du, Yuanqi Su, Yoshifumi Nishio
Affiliation:a The Institute of Artificial Intelligence and Robotics, Xi’an Jiaotong University, Xi’an 710049, China
b The Department of Electrical and Electronic Engineering, The University of Tokushima, Tokushima 770-8506, Japan
Abstract: This paper presents a hierarchical animation method for transferring facial expressions extracted from a performance video to different facial sketches. Without any expression examples from the target faces, our approach transfers expressions to facial sketches via motion retargeting. In practical applications, however, image noise in each frame reduces the accuracy of feature extraction from the source faces, and the shape difference between source and target faces degrades the quality of the animated expressions. To address these difficulties, we propose a robust neighbor-expression transfer (NET) model that captures the spatial relations among sparse facial features. By learning expression behaviors from neighboring face examples, the NET model can reconstruct facial expressions from noisy signals. Based on the NET model, we present a hierarchical method for animating facial sketches: motion vectors from the source face are adjusted from coarse to fine on the target face, and the resulting animation replicates the source expressions. Experimental results demonstrate that the proposed method transfers expressions effectively and robustly even from noisy animation signals.
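The abstract does not give the NET model's formulation, but the core idea it describes, reconstructing a noisy expression from neighboring clean examples, can be illustrated with a minimal sketch. The following code is an assumption-laden toy version: it denoises an observed landmark-motion vector by projecting it onto a least-squares combination of its k nearest expression examples. The function name `reconstruct_expression` and the plain k-NN least-squares scheme are hypothetical stand-ins for the paper's actual model.

```python
import numpy as np

def reconstruct_expression(noisy_motion, examples, k=3):
    """Denoise a facial motion vector by projecting it onto its k
    nearest neighbor expression examples (least-squares combination).

    noisy_motion : (d,) observed, noisy landmark displacements
    examples     : (n, d) library of clean expression examples
    """
    # Pick the k examples closest to the noisy observation.
    dists = np.linalg.norm(examples - noisy_motion, axis=1)
    nbrs = examples[np.argsort(dists)[:k]]            # (k, d)
    # Least-squares weights for combining the neighbors.
    w, *_ = np.linalg.lstsq(nbrs.T, noisy_motion, rcond=None)
    # Reconstruction lies in the span of the neighbors, so noise
    # components orthogonal to that span are suppressed.
    return w @ nbrs
```

Because the output is constrained to the subspace spanned by nearby clean examples, noise that does not resemble any plausible expression is filtered out, which is the intuition behind learning "expression behaviors from neighboring face examples."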
Keywords: Facial sketch; NET model; Hierarchical animation
This article is indexed in ScienceDirect and other databases.