Similar Documents
10 similar documents found.
1.
Objective: In visual object tracking, the target is often affected by various complex disturbances arising from the target itself or from the scene, which makes it highly challenging to capture the target information of interest correctly. In particular, the template data used by a tracker is mainly obtained by online learning, and its reliability directly affects the accuracy with which candidate appearance models are represented. To address target-template learning and candidate appearance representation in visual tracking, this paper adopts an effective template-organization strategy together with a more precise model-representation technique and proposes a novel visual tracking algorithm. Method: In the tracking framework, the appearance representation of a candidate is formulated as a linear regression problem built from a set of composite templates and a minimal reconstruction error. Classical incremental principal component analysis first learns a set of low-dimensional subspace basis vectors (positive templates) from online high-dimensional data, and the template set is augmented with special negative samples drawn online around the previous tracking result. The candidate appearance model is then fitted linearly with the newly organized template basis vectors under i.i.d. Gaussian-Laplacian mixture noise, and finally the maximum likelihood between each candidate and the true target is estimated, so that the tracker accurately captures the true target state at every frame. Result: Experiments on widely used benchmark video sequences show that, in target-template learning and candidate appearance representation, the proposed algorithm reflects the complex changes of the target state more accurately and effectively than comparable methods, better resolves model degradation and tracking drift under various uncertain disturbances, and reaches the same or higher tracking accuracy than several strong algorithms of the same kind. Conclusion: The proposed algorithm learns accurate target templates online and updates them periodically, so the tracker adapts well to visual changes caused by intrinsic or extrinsic factors (pose, illumination, occlusion, scale, background clutter, motion blur, etc.), keeps its best state, and represents candidate appearance models more reliably and accurately, yielding more robust performance.
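The fitting step described above, a subspace reconstruction with a sparse (Laplacian) error term, can be illustrated with a small alternating scheme. This is only a minimal sketch of the idea, not the paper's actual solver; the function names, the soft-thresholding update, and the parameter values are my own assumptions.

```python
import numpy as np

def soft_threshold(x, lam):
    """Element-wise soft-thresholding (proximal operator of the L1 norm)."""
    return np.sign(x) * np.maximum(np.abs(x) - lam, 0.0)

def fit_candidate(U, y, lam=0.1, n_iter=50):
    """Fit y ~ U @ c + e, where e absorbs the Laplacian (sparse) part of the
    Gaussian-Laplacian noise. Alternates a least-squares update of c with a
    soft-thresholding update of e, then returns a likelihood-style score."""
    e = np.zeros_like(y)
    pinv_U = np.linalg.pinv(U)                  # least-squares operator for U
    for _ in range(n_iter):
        c = pinv_U @ (y - e)                    # subspace coefficients
        e = soft_threshold(y - U @ c, lam)      # sparse error update
    resid = y - U @ c - e
    score = -0.5 * np.sum(resid ** 2) - lam * np.sum(np.abs(e))
    return c, e, score
```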

2.
This paper presents a novel online object tracking algorithm with sparse representation for learning effective appearance models under a particle filtering framework. Compared with the state-of-the-art ℓ1 sparse tracker, which simply assumes that the image pixels are corrupted by independent Gaussian noise, our proposed method is based on information-theoretic learning and is much less sensitive to corruption; it achieves this by assigning small weights to occluded pixels and outliers. The most appealing aspect of this approach is that it yields robust estimates without the trivial templates adopted by the previous sparse tracker. A sparse representation of the target candidate is learned by solving a weighted linear least-squares problem with non-negativity constraints at each iteration; to further improve tracking performance, the target templates are dynamically updated to capture appearance changes. In our template-update mechanism, the similarity between the templates and the target candidates is measured by the earth mover's distance (EMD). Using the largest open benchmark for visual tracking, we empirically compare two ensemble methods constructed from six state-of-the-art trackers against the individual trackers. The proposed tracking algorithm runs in real time and, on challenging sequences, performs favorably against state-of-the-art algorithms in terms of efficiency, accuracy, and robustness.
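The weighted non-negative least-squares step can be pictured as an iteratively reweighted solve in which pixels with large residuals (occlusions, outliers) are down-weighted on the next pass. This is a generic sketch of that idea, not the paper's exact information-theoretic criterion; the Gaussian reweighting, the bandwidth sigma, and the iteration count are my assumptions.

```python
import numpy as np
from scipy.optimize import nnls

def robust_sparse_coding(T, y, sigma=0.1, n_iter=5):
    """Iteratively reweighted non-negative least squares: pixels with large
    residuals (occlusions, outliers) receive small weights on the next pass."""
    w = np.ones_like(y)
    for _ in range(n_iter):
        sw = np.sqrt(w)
        x, _ = nnls(sw[:, None] * T, sw * y)      # weighted NNLS step
        r = y - T @ x
        w = np.exp(-r ** 2 / (2 * sigma ** 2))    # down-weight large residuals
    return x, w
```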

3.
This paper proposes a robust tracking method that combines appearance modeling with sparse representation. The appearance of an object is modeled by multiple linear subspaces. Within the sparse representation framework, a similarity measure is then constructed to evaluate the distance between a target candidate and the learned appearance model. Finally, tracking is achieved by Bayesian inference, in which a particle filter estimates the target state sequentially over time, and the learned appearance model is updated adaptively with the tracking result. The combination of appearance modeling and sparse representation makes the tracking algorithm robust to most target variations caused by illumination changes, pose changes, deformations, and occlusions. Theoretical analysis and comparisons with state-of-the-art methods demonstrate the effectiveness of the proposed algorithm.
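One way to picture a similarity measure between a candidate and a multi-subspace appearance model is the reconstruction error against the closest subspace, turned into a particle weight. This is a generic sketch under my own assumptions (each subspace stored as a mean and an orthonormal basis; the bandwidth sigma is illustrative), not the paper's exact measure.

```python
import numpy as np

def subspace_likelihood(candidate, subspaces, sigma=0.05):
    """Likelihood of a candidate patch under a set of linear subspaces,
    using the smallest reconstruction error over all subspaces.
    Each subspace is stored as (mean vector, orthonormal basis matrix)."""
    errors = []
    for mean, U in subspaces:
        d = candidate - mean
        reconstruction = U @ (U.T @ d)       # projection onto the subspace
        errors.append(np.sum((d - reconstruction) ** 2))
    return np.exp(-min(errors) / (2 * sigma ** 2))
```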

4.
Objective: With the advent of deep neural networks, visual tracking has developed rapidly, and the spatio-temporal properties of video in tracking tasks, especially temporal appearance consistency, leave much room for exploration. This paper proposes a novel, simple, and practical tracking algorithm, the temporal-aware network (TAN), which encodes the temporal and spatial features of a sequence simultaneously from a video-level perspective. Method: TAN embeds a new temporal aggregation module (TAM) that exchanges and fuses information from multiple historical frames, so the tracker adapts to appearance changes such as deformation and rotation without any model-update strategy. To keep the framework simple and practical, a target-estimation strategy is designed: the four corners of the target are detected, two candidate boxes are formed from the diagonal corner pairs, and a box-selection strategy determines the final target location, which effectively copes with difficulties such as occlusion. After offline training, the proposed tracker TAN performs tracking by fully feed-forward inference without any model update. Result: On the public OTB50 (online object tracking: a benchmark), OTB100, TrackingNet, LaSOT (a high-quality benchmark for large-scale single object tracking), and UAV123 (a benchmark and simulator for UAV tracking) datasets, TAN reaches the leading level among small-network models while maintaining a high processing speed (70 frames/s). Compared with several state-of-the-art trackers, TAN achieves a good balance between performance and speed, and remains competitive even against trackers that rely on complicated template-update or online-update mechanisms. Ablation studies further verify the effectiveness of each proposed module. Conclusion: The proposed tracker is trained fully offline, requires no online model-update strategy during forward inference, adapts to appearance changes of the target, and achieves better performance than other lightweight trackers.
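The corner-based target estimation can be illustrated as follows: take the peak of each corner score map, build two candidate boxes from the two diagonals, and keep the higher-scoring one. This is a minimal sketch assuming a dict of four corner heatmaps ('tl', 'tr', 'bl', 'br'); the data layout and the score-sum selection rule are my assumptions, not the paper's exact strategy.

```python
import numpy as np

def corners_to_boxes(heatmaps):
    """heatmaps: dict with 'tl', 'tr', 'bl', 'br' score maps (H x W arrays).
    Returns two candidate boxes (x1, y1, x2, y2) built from the two diagonals,
    each paired with the sum of its corner scores."""
    pts, scores = {}, {}
    for key, hm in heatmaps.items():
        row, col = np.unravel_index(np.argmax(hm), hm.shape)
        pts[key] = (col, row)                 # (x, y)
        scores[key] = hm[row, col]
    box1 = (pts['tl'][0], pts['tl'][1], pts['br'][0], pts['br'][1])   # TL + BR
    box2 = (pts['bl'][0], pts['tr'][1], pts['tr'][0], pts['bl'][1])   # TR + BL
    return (box1, scores['tl'] + scores['br']), (box2, scores['tr'] + scores['bl'])
```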

5.
The partial least squares (PLS) tracking algorithm ignores the differences among features and among appearance models, so it is easily affected by illumination, occlusion, and other factors, which reduces tracking accuracy. To address this problem, this paper proposes an adaptively weighted multi-appearance-model tracking algorithm (AWMA). First, PLS is used to progressively build multiple appearance models of the target region. Then, a combined model with adaptive weights is built according to the importance of the features in each appearance model and the saliency of the target, and the multiple appearance models are fused to analyze the error between the target and each sample. Finally, a particle filter performs the tracking. Experiments show that the proposed algorithm filters noisy data more effectively and improves both the robustness and the runtime performance of target tracking.
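The adaptive weighting of several appearance models can be pictured as error-driven soft weighting: models that currently explain the candidate better receive larger weights. This is only an illustrative stand-in (a softmax over negative errors with an arbitrary temperature), not the paper's PLS-based weighting scheme.

```python
import numpy as np

def fuse_model_errors(errors, temperature=0.1):
    """errors: per-appearance-model reconstruction errors for one candidate.
    Models with smaller error get larger (softmax) weights; returns the fused
    error and the weights actually used."""
    errors = np.asarray(errors, dtype=float)
    weights = np.exp(-errors / temperature)
    weights /= weights.sum()
    return float(np.dot(weights, errors)), weights
```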

6.
Tracking objects that undergo abrupt appearance changes and heavy occlusions is a challenging problem which conventional tracking methods can barely handle. To address the problem, we propose an online structure learning algorithm that contains three layers: an object is represented by a mixture of online structure models (OSMs) which are learnt from block-based online random forest classifiers (BORFs). BORFs are able to handle occlusion problems since they model local appearances of the target. To further improve the tracking accuracy and reliability, the algorithm utilizes mixture relational models (MRMs) as multi-mode context information to integrate BORFs into OSMs. Furthermore, the mixture construction of OSMs effectively avoids over-fitting and is more flexible in describing targets. Fusing BORFs with MRMs, OSMs capture the discriminative parts of the target, which guarantees the reliability and robustness of our tracker. In addition, OSMs incorporate block occlusion reasoning to update our BORFs and MRMs, which deals with appearance changes and drifting problems effectively. Experiments on challenging videos show that the proposed tracker performs better than several state-of-the-art algorithms.
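The block-level occlusion reasoning can be pictured as scoring each local block with its own classifier and excluding low-scoring (likely occluded) blocks from the overall score and from the model update. This sketch assumes a plain callable per block returning a confidence in [0, 1]; the threshold and the aggregation rule are mine, not the paper's BORF formulation.

```python
import numpy as np

def score_blocks(patch_blocks, block_classifiers, occ_thresh=0.3):
    """Score each local block with its own classifier (assumed to be a callable
    returning a confidence in [0, 1]); blocks below occ_thresh are treated as
    occluded and excluded from the overall score and from any model update."""
    scores = np.array([clf(block) for clf, block in
                       zip(block_classifiers, patch_blocks)])
    visible = scores >= occ_thresh
    overall = float(scores[visible].mean()) if visible.any() else 0.0
    return overall, visible
```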

7.
To address the problem that most weak learners cannot capture the new feature distribution when an object's appearance changes rapidly, which leads to tracking failure, this paper proposes a Gaussian-weighted online multi-classifier boosting algorithm. The algorithm defines one weak classifier for each domain problem; each weak classifier consists of a simple visual feature and a threshold. A Gaussian weighting function is introduced to weigh the contribution of each weak classifier on a given sample, and tracking performance is improved through the joint learning of multiple classifiers. During tracking, the online multi-classifier not only locates the object but also estimates its pose, successfully learning a multi-modal appearance model and tracking the object under rapid appearance changes. Experimental results show that, after training on a relatively short sequence, the proposed algorithm achieves an average tracking error rate of 12.8%, a clear improvement in tracking performance.
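The weak learners described above, one simple feature plus a threshold with a Gaussian function weighing the learner's contribution on a given sample, can be sketched as follows. The class and parameter names are mine, and the Gaussian centre and width would in practice be learned online.

```python
import numpy as np

class WeakStump:
    """Weak learner: one scalar feature and a threshold, with a Gaussian
    function weighing its contribution according to how close the sample's
    feature value lies to the region the learner was trained on."""
    def __init__(self, feat_idx, threshold, mu, sigma):
        self.feat_idx, self.threshold = feat_idx, threshold
        self.mu, self.sigma = mu, sigma

    def vote(self, x):
        h = 1.0 if x[self.feat_idx] > self.threshold else -1.0
        g = np.exp(-(x[self.feat_idx] - self.mu) ** 2 / (2 * self.sigma ** 2))
        return g * h                          # Gaussian-weighted vote

def strong_score(x, stumps):
    """Sum of Gaussian-weighted votes over all weak learners."""
    return sum(stump.vote(x) for stump in stumps)
```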

8.
Objective: In visual object tracking, the target state at each moment is approximated by a linear combination of template data learned online. Because the target is affected by various complex disturbances from itself or from the scene, the modeling capacity of the tracker depends heavily on the generality of the template data and the accuracy of its error estimation. Many existing algorithms represent sample signals as vectors, which alters the original data structure and seriously damages the natural relationships among the elements of the sample data; this representation also increases the data dimensionality and thus brings extra computational complexity and resource waste. This paper studies the data representation and modeling mechanism of visual tracking from the perspective of multilinear analysis and provides a more compact and effective solution. Method: In the proposed tracking framework, candidate samples and their reconstructed signals are represented as tensors, which preserves the original data structure. When the tracker outputs the appearance state of candidate samples, the modeling task is organized around the favorable multilinear properties of tensors: the relevant components of the objective function are regularized with the tensor nuclear norm and the L1 norm, and under a multi-task state-learning assumption the independence and interdependence of the appearance-representation tasks of the candidate samples are fully exploited. Result: The structured tensor representation of the data and its multi-task observation model effectively address the data-representation and computational-complexity problems of the tracking system, and provide a simpler and more effective way of jointly learning the appearance models of the candidate samples. When the tracker encounters strongly destructive noise, the tensor-nuclear-norm-constrained error-estimation mechanism mines more complete target information under the multi-task joint-learning framework and adapts better to visual changes caused by intrinsic or extrinsic factors. Experimental results on widely used test videos show that the proposed algorithm represents candidate appearance models more robustly; compared with several strong algorithms of the same kind, it achieves an average center-location error of 4.2 and an average overlap rate of 0.82 over the test sequences, demonstrating better tracking accuracy. Conclusion: Extensive experiments verify that the tensor-nuclear-norm regression model and its error-estimation mechanism construct sample signals closer to the true target state at each moment and strictly probe the true state of every candidate sample under the multi-task learning framework, thereby better addressing model degradation and tracking drift.
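A simplified, matrix-shaped stand-in for the regularized objective: candidates are stacked as columns, the shared coefficient matrix is encouraged to be low-rank across the candidate tasks through a nuclear norm, and the error term is kept sparse with an L1 norm. The paper works with full tensor structure and a tensor nuclear norm; the flattened form, variable names, and weights below are my simplifying assumptions.

```python
import numpy as np

def multitask_objective(Y, D, C, E, lam_nuc=1.0, lam_l1=0.1):
    """Y: observed candidates (one per column), D: template basis,
    C: coefficient matrix shared across candidate tasks, E: error term.
    The nuclear norm on C couples the tasks (low rank across candidates),
    and the L1 norm on E keeps the reconstruction error sparse."""
    fidelity = 0.5 * np.sum((Y - D @ C - E) ** 2)
    return (fidelity
            + lam_nuc * np.linalg.norm(C, ord='nuc')
            + lam_l1 * np.abs(E).sum())
```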

9.
The appearance model is an important issue in the visual tracking community. Most subspace-based appearance models focus on the temporal correlation between image observations of the object but ignore its spatial layout information. This paper proposes a robust appearance model for visual tracking that effectively combines the spatial and temporal eigen-spaces of the object through tensor reconstruction. To capture variations in object appearance, an incremental updating strategy is developed to update both the eigen-space and the mean of the object. Experimental results demonstrate that, compared with state-of-the-art appearance models in the tracking literature, the proposed appearance model is more robust and effective.
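The incremental update of an eigen-space together with its mean can be sketched as follows: fold the new, re-centred samples into the existing factorization and re-factorize the small augmented system. This is a simplified illustration (it omits the correction term that exact incremental SVD with a changing mean requires); all names and the retained rank k are assumptions.

```python
import numpy as np

def update_eigenspace(U, S, mean, n_seen, new_cols, k=16):
    """Simplified incremental eigen-space update: refresh the mean, centre the
    new samples, append them to the scaled basis, and re-factorise the small
    augmented matrix. (The correction term used by exact incremental SVD with
    a changing mean is omitted for brevity.)"""
    m = new_cols.shape[1]
    new_mean = (n_seen * mean + new_cols.sum(axis=1)) / (n_seen + m)
    centred = new_cols - new_mean[:, None]
    augmented = np.hstack([U * S, centred])   # existing factors + new samples
    U_new, S_new, _ = np.linalg.svd(augmented, full_matrices=False)
    return U_new[:, :k], S_new[:k], new_mean, n_seen + m
```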

10.
This paper proposes a novel visual tracking algorithm based on online semi-supervised co-boosting, which investigates the benefits of co-boosting (i.e., the integration of co-training and boosting) and semi-supervised learning in the online tracking process. Existing discriminative tracking algorithms often use their own classification results to update the classifier, so classification errors easily accumulate during self-training. In this paper, we employ an effective online semi-supervised co-boosting framework to update the weak classifiers built on two different feature views. In this framework, the pseudo-label and importance of an unlabeled sample are estimated with an additive logistic regression that integrates a prior model and an online classifier learned on one feature view, and are then used to update the weak classifiers built on the other feature view. The proposed algorithm recovers well from drifting by incorporating prior knowledge of the object while remaining adaptive to appearance changes by combining the complementary strengths of the two feature views. Experimental results on a series of challenging video sequences demonstrate the superior performance of our algorithm compared to state-of-the-art tracking algorithms.
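The pseudo-labeling step can be pictured as adding the real-valued scores of the prior model and the online classifier on one view, taking the sign as the pseudo-label and a saturating function of the magnitude as the sample importance for the other view. The |tanh| importance below is one plausible choice, not necessarily the paper's exact formula, and the classifier interfaces are assumed.

```python
import numpy as np

def pseudo_label(h_prior, h_online, x_view1):
    """Combine the real-valued scores of a prior model and an online classifier
    on view 1; the sign gives the pseudo-label and a saturating function of the
    magnitude gives the importance used to update the view-2 weak classifiers."""
    f = h_prior(x_view1) + h_online(x_view1)   # additive combination
    label = 1 if f >= 0 else -1
    importance = abs(np.tanh(f))               # one plausible confidence measure
    return label, importance
```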
