

Pose robust face tracking by combining view-based AAMs and temporal filters
Authors:Chen Huang  Xiaoqing Ding  Chi Fang
Affiliation:
Abstract:Active appearance models (AAMs) are useful for face tracking because of their detailed face interpretation, accurate alignment, and high efficiency. However, they are sensitive to the initial parameters and can easily become stuck in local minima because of their gradient-descent optimization, which makes an AAM-based face tracker unstable under large pose deviations and fast motion. In this paper, we propose to combine view-based AAMs with two novel temporal filters to overcome these limitations. First, for pose estimation, we build a new view space from the AAM shape parameters rather than from the model parameters that control both shape and appearance; a Kalman filter is then used to simultaneously update the pose and shape parameters so that each frame is fitted more accurately. Second, we propose a twofold temporal matching filter. An inter-frame local appearance constraint is incorporated into AAM fitting, and the matching mechanism of the active shape model (ASM) is implemented in the same unified framework to find more accurate matching points. In addition, the shape is initialized with correspondences found by random-forest-based local feature matching. By introducing local information and temporal correspondences, the twofold temporal matching filter improves tracking stability under fast appearance changes. Experimental results show that our algorithm is more pose-robust than basic AAMs and several state-of-the-art AAM-based methods, and that it also handles large expressions and non-extreme illumination changes in the test video sequences.
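The Kalman filtering step described in the abstract can be pictured as a predict-correct loop over the pose/shape parameter vector between frames: the predicted parameters seed the next AAM fit, and the fitted parameters are fused back as the measurement. The following is a minimal sketch of that idea only, assuming a generic constant-velocity state model; the class name, state layout, and noise settings are illustrative assumptions and are not taken from the paper.

```python
import numpy as np

class ConstantVelocityKalman:
    """Minimal constant-velocity Kalman filter for smoothing a vector of
    AAM pose/shape parameters across frames (illustrative sketch only)."""

    def __init__(self, dim, process_var=1e-3, measurement_var=1e-2):
        self.dim = dim
        # State: [parameters, parameter velocities]
        self.x = np.zeros(2 * dim)
        self.P = np.eye(2 * dim)
        # Constant-velocity transition: p_t = p_{t-1} + v_{t-1}
        self.F = np.eye(2 * dim)
        self.F[:dim, dim:] = np.eye(dim)
        # Only the parameters are observed, not their velocities
        self.H = np.hstack([np.eye(dim), np.zeros((dim, dim))])
        self.Q = process_var * np.eye(2 * dim)
        self.R = measurement_var * np.eye(dim)

    def predict(self):
        """Propagate the state to the next frame; the predicted parameters
        could serve as the initial guess for the next AAM fit."""
        self.x = self.F @ self.x
        self.P = self.F @ self.P @ self.F.T + self.Q
        return self.x[:self.dim]

    def update(self, measured_params):
        """Fuse the parameters returned by the per-frame AAM fit."""
        z = np.asarray(measured_params, dtype=float)
        y = z - self.H @ self.x                      # innovation
        S = self.H @ self.P @ self.H.T + self.R      # innovation covariance
        K = self.P @ self.H.T @ np.linalg.inv(S)     # Kalman gain
        self.x = self.x + K @ y
        self.P = (np.eye(2 * self.dim) - K @ self.H) @ self.P
        return self.x[:self.dim]

if __name__ == "__main__":
    # Toy usage: smooth noisy "fitted" parameters over a short sequence.
    rng = np.random.default_rng(0)
    kf = ConstantVelocityKalman(dim=4)
    true_params = np.zeros(4)
    for t in range(10):
        true_params += 0.1                           # slow, steady motion
        fitted = true_params + 0.05 * rng.standard_normal(4)
        kf.predict()
        smoothed = kf.update(fitted)
        print(t, np.round(smoothed, 3))
```

This sketch only illustrates the predictor-corrector structure; the paper's actual filter operates on its view-space pose representation and AAM shape parameters, whose exact state model is not specified in the abstract.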
Keywords:
This article is indexed in ScienceDirect and other databases.