Multi-scale patch-based sparse appearance model for robust object tracking
Authors: Chengjun Xie, Jieqing Tan, Peng Chen, Jie Zhang, Lei He
Affiliations: 1. School of Computer and Information, Hefei University of Technology, Hefei, 230009, China
2. Institute of Intelligent Machines, Chinese Academy of Sciences, Hefei, 230031, China
3. Institute of Health Sciences, Anhui University, Hefei, 230601, Anhui, China
Abstract: When objects undergo large pose changes, illumination variations or partial occlusion, most existing visual tracking algorithms tend to drift away from the target and may even fail to track it. To address this issue, we propose a multi-scale patch-based appearance model with sparse representation and an efficient scheme in which the multi-scale patches, encoded by sparse coefficients, collaborate with one another. The key idea of our method is to model the appearance of an object with patches of different scales, each represented by sparse coefficients over a dictionary of the corresponding scale, so that the model exploits both the partial and the spatial information of the target. The similarity score of each candidate target is then fed into a particle filter framework to estimate the target state sequentially over time. Additionally, to reduce the visual drift caused by frequent model updates, we present a novel two-step tracking method that exploits both the ground-truth information of the target labeled in the first frame and the target obtained online with the multi-scale patch information. Experiments on publicly available benchmark video sequences show that the similarity measure built from this complementary information locates targets more accurately, and that the proposed tracker is more robust and effective than competing trackers.
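To make the idea concrete, the following is a minimal sketch (not the authors' code) of how a candidate region could be scored with multi-scale patch sparse coding: patches at each scale are encoded over a scale-specific dictionary and the per-scale reconstruction errors are combined into one similarity value. Dictionary construction, the particle filter, and the two-step update are omitted; the patch sizes, the regularization weight lam, and the exponential weighting are illustrative assumptions.

import numpy as np

def extract_patches(region, patch_size):
    # Tile a 2-D grayscale region into non-overlapping square patches,
    # returning an (patch_dim, num_patches) matrix of L2-normalized columns.
    h, w = region.shape
    patches = []
    for y in range(0, h - patch_size + 1, patch_size):
        for x in range(0, w - patch_size + 1, patch_size):
            p = region[y:y + patch_size, x:x + patch_size].ravel().astype(float)
            n = np.linalg.norm(p)
            patches.append(p / n if n > 0 else p)
    return np.array(patches).T

def sparse_code(D, X, lam=0.05, n_iter=100):
    # Encode the columns of X over dictionary D by ISTA:
    # gradient steps on 0.5*||X - D A||^2 followed by soft-thresholding.
    L = np.linalg.norm(D, 2) ** 2        # Lipschitz constant of the gradient
    A = np.zeros((D.shape[1], X.shape[1]))
    for _ in range(n_iter):
        G = A - D.T @ (D @ A - X) / L
        A = np.sign(G) * np.maximum(np.abs(G) - lam / L, 0.0)
    return A

def similarity(region, dictionaries, patch_sizes):
    # Combine per-scale reconstruction errors into a single similarity score;
    # a lower average error at a scale contributes a higher score.
    score = 0.0
    for D, s in zip(dictionaries, patch_sizes):
        X = extract_patches(region, s)
        A = sparse_code(D, X)
        err = np.linalg.norm(X - D @ A, axis=0).mean()
        score += np.exp(-err)
    return score / len(patch_sizes)

In a tracker, such a score would be evaluated for every particle (candidate window) and used as its observation likelihood; the highest-scoring particle gives the estimated target state for the frame.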
This article is indexed in SpringerLink and other databases.