Visual Tracking Algorithm Based on Global Context and Feature Dimensionality Reduction
Citation: Yanjing SUN, Sainan WANG, Yunkai SHI, Xiao YUN, Wenjuan SHI. Visual Tracking Algorithm Based on Global Context and Feature Dimensionality Reduction[J]. Journal of Electronics & Information Technology, 2018, 40(9): 2135-2142. doi: 10.11999/JEIT171143
Authors: Yanjing SUN, Sainan WANG, Yunkai SHI, Xiao YUN, Wenjuan SHI
Affiliation: School of Information and Control Engineering, China University of Mining and Technology, Xuzhou 221116, China
Funding: Natural Science Foundation of Jiangsu Province Youth Fund (BK20150204), National Key R&D Program of China (2016YFC0801403), National Natural Science Foundation of China (51504214, 51504255, 51734009, 61771417), Key R&D Program of Jiangsu Province (BE2015040)
Abstract: Correlation-filter-based tracking algorithms are easily disturbed by deformation, motion blur, similar background clutter, and other factors, which can lead to tracking failure. To overcome these problems, this paper proposes a visual tracking algorithm based on global context and feature dimensionality reduction. First, image regions immediately surrounding the target are extracted as negative samples for the classifier to learn from, suppressing interference from similar background. Then, an update strategy based on Principal Component Analysis (PCA) is proposed: a dimensionality-reduction matrix is constructed to compress the HOG features, reducing their redundancy while the classifier is updated. Finally, color features are added to represent the moving target, and the features are fused adaptively according to the strength of their responses to the system state. The proposed algorithm is compared with Staple, KCF, and other trackers on a standard benchmark. The results show that it is more robust; under deformation, its distance precision is 8.3% and 13.1% higher than that of Staple and KCF, respectively.
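The PCA-based update strategy can be sketched briefly. The following Python/NumPy fragment is a minimal illustration under assumptions of mine (HOG descriptors stacked as rows of a matrix, a fixed number of retained components); the function names build_pca_projection and compress_hog are hypothetical and do not come from the paper.

```python
import numpy as np

def build_pca_projection(hog_feats, n_components):
    """Construct a dimensionality-reduction matrix from HOG descriptors.

    hog_feats: (n_samples, n_dims) array, one HOG descriptor per row.
    Returns (projection, mean), where projection is (n_dims, n_components).
    """
    mean = hog_feats.mean(axis=0)
    centered = hog_feats - mean
    # Covariance over the HOG dimensions; the directions of largest
    # variance form the compressed feature basis.
    cov = centered.T @ centered / max(len(hog_feats) - 1, 1)
    eigvals, eigvecs = np.linalg.eigh(cov)          # ascending eigenvalues
    top = np.argsort(eigvals)[::-1][:n_components]  # largest-variance directions
    return eigvecs[:, top], mean

def compress_hog(hog_feats, projection, mean):
    """Project HOG descriptors onto the reduced subspace."""
    return (hog_feats - mean) @ projection
```

In a tracker, the projection would be recomputed (or incrementally refreshed) at each model update, so that the correlation filter is trained and evaluated on the compressed features rather than the full HOG vectors.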

Keywords: Visual tracking; Global context information; Feature dimensionality reduction; Adaptive fusion
Received: 2017-12-04
Revised: 2018-05-02

Visual Tracking Algorithm Based on Global Context and Feature Dimensionality Reduction
Yanjing SUN, Sainan WANG, Yunkai SHI, Xiao YUN, Wenjuan SHI. Visual Tracking Algorithm Based on Global Context and Feature Dimensionality Reduction[J]. Journal of Electronics & Information Technology, 2018, 40(9): 2135-2142. doi: 10.11999/JEIT171143
Authors: Yanjing SUN, Sainan WANG, Yunkai SHI, Xiao YUN, Wenjuan SHI
Affiliation: School of Information and Control Engineering, China University of Mining and Technology, Xuzhou 221116, China
Abstract: Tracking algorithms based on correlation filters are easily disturbed by deformation, motion blur, and background clutter, which can result in tracking failure. To solve these problems, a visual tracking algorithm based on global context and feature dimensionality reduction is proposed. Firstly, image patches sampled uniformly around the target are extracted as negative samples, so that similar background patches around the target are suppressed. Then, an update strategy based on Principal Component Analysis (PCA) is proposed, which constructs a projection matrix to reduce the dimensionality of the HOG features and thus removes feature redundancy as the model is updated. Finally, color features are added to represent the moving target, and the responses of the features to the system state are adaptively fused. Experiments are performed on a standard online tracking benchmark. The results show that the proposed method performs favorably in both accuracy and robustness compared with state-of-the-art trackers such as Staple and KCF. When deformation occurs, the proposed method outperforms Staple and KCF by 8.3% and 13.1%, respectively, in distance precision.
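As a rough illustration of the adaptive fusion step, the sketch below weights the HOG and color response maps by their response strength. The peak-to-sidelobe ratio is used here as the strength measure, which is a common choice but an assumption of mine; the paper's exact weighting may differ, and the function names are hypothetical.

```python
import numpy as np

def peak_to_sidelobe_ratio(response, exclude=5):
    """Strength of a response map: peak height relative to the sidelobe area."""
    peak = response.max()
    py, px = np.unravel_index(response.argmax(), response.shape)
    mask = np.ones(response.shape, dtype=bool)
    mask[max(py - exclude, 0):py + exclude + 1,
         max(px - exclude, 0):px + exclude + 1] = False
    sidelobe = response[mask]
    return (peak - sidelobe.mean()) / (sidelobe.std() + 1e-8)

def fuse_responses(resp_hog, resp_color):
    """Adaptively fuse two response maps according to their strength."""
    w_hog = peak_to_sidelobe_ratio(resp_hog)
    w_color = peak_to_sidelobe_ratio(resp_color)
    fused = (w_hog * resp_hog + w_color * resp_color) / (w_hog + w_color + 1e-8)
    # The estimated target position is the peak of the fused map.
    return fused, np.unravel_index(fused.argmax(), fused.shape)
```

The sketch assumes both response maps have the same size and are larger than the excluded peak window.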
Keywords: Visual tracking; Global context information; Feature dimensionality reduction; Adaptive fusion