Data-driven facial expression synthesis via Laplacian deformation
Authors: Xianmei Wan, Xiaogang Jin
Affiliation: 1. State Key Lab of CAD & CG, Zhejiang University, Hangzhou, 310027, People's Republic of China
2. Zhejiang University of Finance & Economics, Hangzhou, 310018, People's Republic of China
Abstract: Realistic talking heads have important applications in interactive multimedia. This paper presents a novel framework for synthesizing realistic facial animation driven by motion capture data using Laplacian deformation. We first capture the facial expression of a performer and then decompose the motion data into two components: the rigid movement of the head and the change of the facial expression. Exploiting the local-detail-preserving property of Laplacian coordinates, we clone the captured expression onto a neutral 3D facial model using Laplacian deformation, choosing a set of expression-independent points on the facial model as the fixed points when solving the Laplacian deformation equations. Experimental results show that our approach synthesizes realistic facial expressions in real time while preserving facial details, and comparisons with state-of-the-art facial expression synthesis methods verify its advantages. Our approach can be applied in real-time multimedia systems.
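For illustration, the sketch below shows one common way such a Laplacian deformation step can be set up: it preserves the delta (Laplacian) coordinates of the neutral face mesh in a least-squares sense while softly constraining a set of vertices, standing in for the expression-independent fixed points and the motion-capture-driven points, to target positions. It assumes a uniform graph Laplacian and SciPy's sparse least-squares solver; the function name, the constraint weight, and these choices are illustrative assumptions rather than the authors' exact formulation.

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

def laplacian_deform(vertices, faces, handle_ids, handle_pos, w=10.0):
    """Deform a mesh so constrained vertices move toward handle_pos while
    the Laplacian (delta) coordinates of the neutral mesh are preserved.

    vertices   : (n, 3) array of neutral-face vertex positions
    faces      : (m, 3) array of triangle vertex indices
    handle_ids : indices of constrained vertices (expression-independent
                 fixed points plus motion-capture-driven points)
    handle_pos : (len(handle_ids), 3) target positions for those vertices
    w          : weight of the soft positional constraints (assumed value)
    """
    n = len(vertices)

    # Build the uniform graph Laplacian L = D - A from mesh connectivity.
    rows, cols = [], []
    for a, b, c in faces:
        rows += [a, b, b, c, c, a]
        cols += [b, a, c, b, a, c]
    A = sp.coo_matrix((np.ones(len(rows)), (rows, cols)), shape=(n, n)).tocsr()
    A.data[:] = 1.0                      # collapse duplicate edge entries
    deg = np.asarray(A.sum(axis=1)).ravel()
    L = sp.diags(deg) - A

    # Delta (Laplacian) coordinates encode the local surface detail.
    delta = L @ vertices

    # Soft positional constraints appended as extra weighted rows.
    k = len(handle_ids)
    C = sp.coo_matrix((np.full(k, w), (np.arange(k), handle_ids)),
                      shape=(k, n)).tocsr()
    A_sys = sp.vstack([L, C]).tocsc()
    b = np.vstack([delta, w * np.asarray(handle_pos)])

    # Solve the over-determined system in least squares, per coordinate axis.
    return np.column_stack([spla.lsqr(A_sys, b[:, d])[0] for d in range(3)])
```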
Keywords:
This article is indexed by SpringerLink and other databases.