Real-time facial animation on mobile devices
Affiliation: 1. Department of Computer Science and Engineering, McDonnell International Scholar Academy Ambassador, Washington University in St. Louis, One Brookings Drive, St. Louis, MO 63130-4899, USA; 2. Computer Science Department, Zhejiang University, 38 Zheda Rd, Xihu, Hangzhou, Zhejiang 310027, China
Abstract:We present a performance-based facial animation system capable of running on mobile devices at real-time frame rates. A key component of our system is a novel regression algorithm that accurately infers the facial motion parameters from 2D video frames of an ordinary web camera. Compared with the state-of-the-art facial shape regression algorithm [1], which takes a two-step procedure to track facial animations (i.e., first regressing the 3D positions of facial landmarks, and then computing the head poses and expression coefficients), we directly regress the head poses and expression coefficients. This one-step approach greatly reduces the dimension of the regression target and significantly improves the tracking performance while preserving the tracking accuracy. We further propose to collect the training images of the user under different lighting environments, and make use of the data to learn a user-specific regressor, which can robustly handle lighting changes that frequently occur when using mobile devices.
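The dimensionality argument behind the one-step approach can be illustrated with a minimal sketch. All sizes below are illustrative assumptions, not values from the paper (e.g., 73 landmarks, 46 blendshape coefficients, 128-dimensional appearance features), and the linear least-squares regressor stands in for whatever cascaded regressor the system actually uses:

```python
import numpy as np

# Hypothetical sizes (assumptions for illustration only)
NUM_LANDMARKS = 73          # 3D facial landmarks a two-step method would regress
POSE_DIM = 6                # head pose: 3 rotation + 3 translation parameters
NUM_BLENDSHAPES = 46        # expression coefficients

# Two-step target: 3D positions of all landmarks (then solve for pose/expression)
landmark_target_dim = 3 * NUM_LANDMARKS            # 219 dimensions
# One-step target: regress pose and expression coefficients directly
direct_target_dim = POSE_DIM + NUM_BLENDSHAPES     # 52 dimensions

# Toy stand-in for training a regressor from per-frame features to parameters
rng = np.random.default_rng(0)
features = rng.normal(size=(500, 128))                 # per-frame appearance features
params = rng.normal(size=(500, direct_target_dim))     # pose + expression targets
W, *_ = np.linalg.lstsq(features, params, rcond=None)  # linear regressor weights

# At runtime: one matrix product per frame yields the full motion parameter set
frame_feature = rng.normal(size=128)
pred = frame_feature @ W
head_pose, expr_coeffs = pred[:POSE_DIM], pred[POSE_DIM:]
```

Under these assumed sizes the regression target shrinks from 219 to 52 dimensions, which is the kind of reduction that makes real-time tracking feasible on mobile hardware.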
Keywords:Video tracking  3D avatars  Facial performance  User-specific blendshapes  Shape regression
This article is indexed in databases including ScienceDirect.