Similar Literature
20 similar documents found (search time: 218 ms)
1.
Appearance modeling is very important for background modeling and object tracking. Subspace learning-based algorithms have been used to model the appearances of objects or scenes. Current vector subspace-based algorithms cannot effectively represent spatial correlations between pixel values. Current tensor subspace-based algorithms construct an offline representation of image ensembles, and current online tensor subspace learning algorithms cannot be applied to background modeling and object tracking. In this paper, we propose an online tensor subspace learning algorithm which models appearance changes by incrementally learning a tensor subspace representation through adaptively updating the sample mean and an eigenbasis for each unfolding matrix of the tensor. The proposed incremental tensor subspace learning algorithm is applied to foreground segmentation and object tracking for grayscale and color image sequences. The new background models capture the intrinsic spatiotemporal characteristics of scenes. The new tracking algorithm captures the appearance characteristics of an object during tracking and uses a particle filter to estimate the optimal object state. Experimental evaluations against state-of-the-art algorithms demonstrate the promise and effectiveness of the proposed incremental tensor subspace learning algorithm and its applications to foreground segmentation and object tracking.
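The core update in this kind of incremental tensor subspace learning, maintaining a running sample mean and an eigenbasis for each mode unfolding of the tensor, can be sketched as follows. This is a simplified illustration, not the authors' exact algorithm: the QR-based basis refresh stands in for a full incremental SVD, and all names and the rank parameter are assumptions.

```python
import numpy as np

def unfold(tensor, mode):
    """Mode-n unfolding: move the chosen axis to the front and flatten the rest."""
    return np.moveaxis(tensor, mode, 0).reshape(tensor.shape[mode], -1)

def update_subspace(U, mean, n_seen, new_cols, rank):
    """Update the running sample mean and the eigenbasis of one unfolding
    matrix given a batch of new column observations (a QR refresh stands in
    for a full incremental SVD)."""
    m = new_cols.shape[1]
    new_mean = (n_seen * mean + new_cols.sum(axis=1)) / (n_seen + m)
    centered = new_cols - new_mean[:, None]
    basis = centered if U is None else np.hstack([U, centered])
    Q, _ = np.linalg.qr(basis)
    return Q[:, :rank], new_mean, n_seen + m

# One incremental step per mode of a small grayscale image stack.
frames = np.random.default_rng(0).standard_normal((4, 5, 6))
state = {}
for mode in range(3):
    M = unfold(frames, mode)
    state[mode] = update_subspace(None, np.zeros(M.shape[0]), 0, M, rank=2)
```

In a real tracker the update would be called once per new frame batch, passing the previous basis and mean back in rather than starting from `None`.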

2.
In this paper, we present a structured sparse representation appearance model for tracking an object in video. The idea behind our method is to model the appearance of an object as a sparse linear combination over a structured union of subspaces in a basis library, which consists of a learned eigen template set and a partitioned occlusion template set. This structured sparse representation framework matches the practical visual tracking problem well because it takes the contiguous spatial distribution of occlusion into account. To achieve a sparse solution and reduce the computational cost, Block Orthogonal Matching Pursuit (BOMP) is adopted to solve the structured sparse representation problem. Furthermore, to update the eigen templates over time, an incremental Principal Component Analysis (PCA) based learning scheme adapts to the varying appearance of the target online. We then build a probabilistic observation model based on the approximation error between the recovered image and the observed sample. Finally, this observation model is integrated with a stochastic affine motion model to form a particle filter framework for visual tracking. Experiments on publicly available benchmark video sequences demonstrate the advantages of the proposed algorithm over other state-of-the-art approaches.
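Block Orthogonal Matching Pursuit, which the abstract adopts for the structured sparse solve, greedily activates whole blocks of dictionary atoms rather than individual atoms. A minimal sketch, with the block scoring and the fixed block count as simplifying assumptions:

```python
import numpy as np

def bomp(D, y, block_ids, n_blocks):
    """Block Orthogonal Matching Pursuit: greedily activate whole blocks of
    dictionary atoms, refitting by least squares after each selection."""
    residual = y.astype(float).copy()
    active, cols = [], np.zeros(D.shape[1], dtype=bool)
    for _ in range(n_blocks):
        corr = D.T @ residual
        # Score every inactive block by the energy of its correlations.
        scores = {b: float(np.sum(corr[block_ids == b] ** 2))
                  for b in np.unique(block_ids) if b not in active}
        best = max(scores, key=scores.get)
        active.append(best)
        cols = np.isin(block_ids, active)
        coef, *_ = np.linalg.lstsq(D[:, cols], y, rcond=None)
        residual = y - D[:, cols] @ coef
    x = np.zeros(D.shape[1])
    x[cols] = coef
    return x
```

In the tracker described above, the blocks would correspond to the eigen template set and the spatially partitioned occlusion templates.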

3.
The appearance model is an important issue in the visual tracking community. Most subspace-based appearance models focus on the temporal correlation between image observations of the object but ignore the object's spatial layout information. This paper proposes a robust appearance model for visual tracking which effectively combines the spatial and temporal eigen-spaces of the object through tensor reconstruction. To capture variations in object appearance, an incremental updating strategy is developed to update both the eigen-space and the mean of the object. Experimental results demonstrate that, compared with state-of-the-art appearance models in the tracking literature, the proposed appearance model is more robust and effective.

4.
Object Tracking Based on Particle Filter and Sparse Representation
To address illumination changes in object tracking over video sequences, this paper proposes a tracking method that works within a particle filter framework, represents the target with local binary pattern (LBP) texture features, and locates it via sparse representation. The tracking particles for the current frame are generated from the previous frame's tracking result according to a Gaussian distribution. By solving an l1-regularized least-squares problem, each particle's sparse representation over the template subspace is obtained, which determines the tracked target in the current frame. The particle filter then generates the particle distribution for the next frame. During tracking, a new dynamic template-update strategy refreshes the templates in the template space. Experimental results demonstrate the effectiveness and competitiveness of the method.
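The l1-regularized least-squares step at the heart of such particle filter trackers can be sketched with a basic proximal-gradient (ISTA) solver. The template matrix `T`, the regularization weight, and the Gaussian form of the particle likelihood below are illustrative assumptions, not the paper's exact settings:

```python
import numpy as np

def sparse_code(T, y, lam=0.01, n_iter=200):
    """Solve min_x 0.5*||T x - y||^2 + lam*||x||_1 with ISTA
    (proximal gradient descent with a soft-thresholding step)."""
    L = np.linalg.norm(T, 2) ** 2        # Lipschitz constant of the gradient
    x = np.zeros(T.shape[1])
    for _ in range(n_iter):
        z = x - T.T @ (T @ x - y) / L    # gradient step
        x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # shrink
    return x

def particle_weight(T, y, x, sigma=0.1):
    """Observation likelihood of a particle from its reconstruction error."""
    err = np.linalg.norm(y - T @ x) ** 2
    return float(np.exp(-err / (2 * sigma ** 2)))
```

Each particle's feature vector `y` would be coded against the template subspace `T`, and the resulting weights would drive the resampling step of the filter.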

5.

To address the tracking drift that traditional sparse-representation trackers suffer when the scene contains background regions similar to the target, a new tracking method is proposed. Based on the target's local binary pattern (LBP) features, the method sparsely represents the target appearance model over a joint template set composed of the original target templates and selected particles from the current frame, builds a joint objective function, and converts tracking into an optimization problem solved iteratively. Experimental results show that, while handling occlusion and illumination changes, the proposed tracker also performs well on sequences whose background contains regions similar to the target.

6.
In this paper, we formulate visual tracking as a binary classification problem using a discriminative appearance model. To enhance the discriminative strength of the classifier in separating the object from the background, an over-complete dictionary containing structure information of both object and background is constructed and used to encode the local patches inside the object region with a sparsity constraint. These local sparse codes are then aggregated for object representation, and a classifier is learned to discriminate the target from the background. The candidate sample with the largest classification score is taken as the tracking result. Unlike recent sparsity-based tracking approaches that update the dictionary using a holistic template, we introduce a selective update strategy based on local image patches which alleviates the visual drift problem, especially when severe occlusion occurs. Experiments on challenging video sequences demonstrate that the proposed tracking algorithm performs favorably against several state-of-the-art methods.

7.
We introduce a robust framework for learning and fusing orientation appearance models based on both texture and depth information for rigid object tracking. Our framework fuses data obtained from a standard visual camera and dense depth maps obtained by low-cost consumer depth cameras such as the Kinect. To combine these two completely different modalities, we propose to use features that do not depend on the data representation: angles. More specifically, our framework combines image gradient orientations, as extracted from intensity images, with the directions of surface normals computed from dense depth fields. We capture the correlations between the resulting orientation appearance models using a fusion approach motivated by the original Active Appearance Models (AAMs). To incorporate these features in a learning framework, we use a robust kernel based on the Euler representation of angles which does not require off-line training and can be efficiently implemented online. The robustness of learning from orientation appearance models is demonstrated both theoretically and experimentally in this work. This kernel enables us to cope with gross measurement errors and missing data, as well as other typical problems such as illumination changes and occlusions. By combining the proposed models with a particle filter, the framework was used for 2D plus 3D rigid object tracking, achieving robust performance in very difficult tracking scenarios including extreme pose variations.
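The Euler representation of angles mentioned above maps each orientation θ to the unit vector (cos θ, sin θ), so agreement between two orientation fields becomes an inner product that is naturally bounded against gross errors. A minimal sketch; the normalization and the exact kernel form are assumptions, not the paper's formulation:

```python
import numpy as np

def gradient_orientations(img):
    """Per-pixel image gradient orientation."""
    gy, gx = np.gradient(img.astype(float))
    return np.arctan2(gy, gx)

def euler_embed(theta):
    """Euler representation: angles -> normalized (cos, sin) feature vector."""
    return np.concatenate([np.cos(theta).ravel(),
                           np.sin(theta).ravel()]) / np.sqrt(theta.size)

def orientation_kernel(a, b):
    """Similarity of two orientation fields; equals mean cos(a - b), so
    random or grossly wrong orientations contribute about zero."""
    return float(euler_embed(a) @ euler_embed(b))
```

The same embedding would apply unchanged to surface-normal directions from a depth map, which is what makes the fusion of the two modalities straightforward.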

8.
In this paper, a robust and efficient visual tracking method based on the fusion of several distributed adaptive templates is proposed. The target object is assumed to be localized, either manually or by an object detector, in the first frame. The object region is then partitioned into several non-overlapping subregions. The new location of each subregion is found by an EM-like gradient-based optimization algorithm. The proposed localization algorithm can simultaneously optimize several candidate solutions in a probabilistic framework, each serving as an initialization point for the optimizer, which improves the accuracy and reliability of the gradient-based localization against local extrema. Moreover, each subregion is described by two adaptive templates, an immediate template and a delayed template, to address the "drift" problem: the immediate template is updated with short-term appearance changes, whereas the delayed template models long-term appearance variations, and their combination mitigates template tracking drift. At each tracking step, the new object location is estimated by fusing the tracking results of the subregions. This fusion is based on the local and global properties of the object motion, which increases the robustness of the tracker against outliers, shape variations, and scale changes. The accuracy and robustness of the proposed method are verified by several experiments, which also show its superior efficiency compared to several state-of-the-art trackers as well as manually labeled ground-truth data.
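The immediate/delayed template pair described above amounts to two running averages of the appearance with very different learning rates. A minimal sketch, with illustrative blending rates that are not taken from the paper:

```python
import numpy as np

def update_templates(immediate, delayed, observation,
                     alpha_fast=0.5, alpha_slow=0.02):
    """Blend a new observation into a fast-adapting immediate template and a
    slow-adapting delayed template; matching against both resists drift."""
    immediate = (1 - alpha_fast) * immediate + alpha_fast * observation
    delayed = (1 - alpha_slow) * delayed + alpha_slow * observation
    return immediate, delayed
```

The fast template follows short-term appearance changes, while the slow one anchors the tracker to the long-term appearance, so a few bad updates cannot pull both away from the target.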

9.
Objective: In visual tracking, the target is affected by various complex disturbances from itself or the scene, which poses great challenges to correctly capturing the target information of interest. In particular, the template data used by a tracker are mainly learned online, and their reliability directly affects the accuracy of the appearance representation of candidate samples. Addressing target template learning and candidate appearance representation, this paper adopts an effective template organization strategy and a more accurate model representation technique, yielding a novel visual tracking algorithm. Method: In the tracking framework, the representation of a candidate's appearance model is formulated as a linear regression problem composed of a set of composite templates and a minimum reconstruction error. Classical incremental principal component analysis first learns a set of low-dimensional subspace basis vectors (positive template samples) from online high-dimensional data; special negative samples are then sampled online around the previous tracking result to augment the target template data; the newly organized template basis vectors and i.i.d. Gaussian-Laplacian mixture noise are used to linearly fit the candidate appearance model; finally, the maximum likelihood between candidate samples and the true target is estimated, so that the tracker accurately captures the true target state at every moment. Results: Experiments on widely used benchmark video sequences show that, in target template learning and candidate appearance representation, the algorithm reflects the complex changes of target state in video scenes more accurately and effectively than comparable methods, alleviates model degradation and tracking drift under various uncertain disturbances, and achieves equal or higher tracking accuracy than several strong competing algorithms. Conclusion: The algorithm learns accurate target templates online and updates them periodically, so the tracker adapts well to visual changes caused by intrinsic or extrinsic factors (pose, illumination, occlusion, scale, background clutter, motion blur, etc.), stays in its best state, and represents candidate appearance models more reliably, exhibiting more robust performance.

10.
To handle target occlusion in visual tracking, a sparse-representation-based tracking algorithm is proposed. The tracked target is described by sparse representation over a Gabor-feature target dictionary and an occlusion dictionary, with the sparse coefficients obtained by l1-norm minimization. Tracking runs in a particle filter framework: occlusion is detected from the sparse coefficients, and the reconstruction residual is used to update particle weights under occlusion. When updating target templates, a reliability measure is introduced to suppress template drift. Experimental results show that the algorithm effectively tracks moving targets under occlusion and is robust to target pose changes and illumination changes.

11.
Lucas-Kanade Object Tracking with Sparse Representation
A new object tracking algorithm is proposed that applies sparse representation within the Lucas-Kanade (LK) image registration framework. The target state parameters are solved by minimizing the L1 norm of the alignment error, yielding accurate tracking. Two appearance models of the target are maintained simultaneously: a dynamic dictionary and a static template, where the dynamic model describes the target appearance by its sparse representation over the dynamic dictionary. To counter the tracking drift caused by continual updates of the dynamic dictionary, a two-stage iterative scheme is adopted, with the two stages using the dynamic dictionary and the static template as the target model respectively. Extensive experiments show that the algorithm copes effectively with appearance changes, partial occlusion, and illumination changes, while maintaining good real-time performance.

12.
Li Feibin, Cao Tieyong, Huang Hui, Wang Wen. Journal of Computer Applications, 2015, 35(12): 3555-3559
For robust video object tracking, a generative algorithm based on sparse representation is proposed. Features are first extracted to build target and background templates, and random sampling yields a sufficient number of candidate target states. A multi-task reverse sparse representation algorithm then produces the sparse coefficient vectors used to construct a similarity map, with the Augmented Lagrange Multiplier (ALM) algorithm introduced to solve the L1-min problem. Finally, an additive pooling operation extracts discriminative information from the similarity map to select, as the tracking result, the candidate state with the highest similarity to the target templates and the lowest similarity to the background templates; the algorithm is implemented within a Bayesian filtering framework. To accommodate appearance changes during tracking caused by illumination variation, occlusion, cluttered backgrounds, and motion blur, a simple yet effective update mechanism refreshes the target and background templates. Qualitative and quantitative evaluations of the simulation results show improved tracking accuracy and stability over other tracking algorithms, and effective handling of illumination and scale changes, occlusion, and cluttered backgrounds.

13.
Classical sparse-representation trackers become unstable on complex videos and tend to drift when the target is occluded. To address this, a sparse-representation tracking algorithm based on subregion matching is proposed. First, the initial target template is divided into several subregions, and an observation model built with the LK image registration algorithm predicts the target motion state in the next frame. The predicted target model region is then partitioned in the same way, and the optimal subregions are sought during matching. Finally, a new template-correction mechanism introduced in the template update effectively suppresses drift. Comparisons with several tracking algorithms on different video sequences show that the algorithm tracks well under occlusion, motion, illumination changes, and cluttered backgrounds, and outperforms the classical sparse-representation tracker.

14.
In video tracking, model representation is one of the core problems directly affecting tracking performance. It is crucial to learn, from complex data that vary over time and space, the templates needed to represent the target appearance model, so as to adapt to target-state changes caused by intrinsic or extrinsic factors. This paper describes a robust target appearance representation strategy and proposes a new multi-task least soft-threshold squares tracking algorithm (MLST). The framework formulates the observation model of candidate targets as a multi-task linear regression problem and linearly represents candidate appearances in different states using target templates and i.i.d. Gaussian-Laplacian reconstruction error, so that the tracker adapts well to complex scenes and accurately predicts the true target state at each moment. Extensive experiments show that the online learning strategy fully exploits the target's particular state information at different moments to improve representation accuracy and keep the tracker in its best state, thereby improving tracking performance. The results show that the algorithm is robust and outperforms several state-of-the-art trackers.
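The Gaussian-Laplacian error model underlying least soft-threshold squares regression separates a dense Gaussian residual from a sparse Laplacian one. A minimal single-task sketch, alternating least squares with soft thresholding; the parameter values and names are illustrative, not the MLST algorithm itself:

```python
import numpy as np

def soft(v, lam):
    """Elementwise soft-thresholding (proximal operator of the L1 norm)."""
    return np.sign(v) * np.maximum(np.abs(v) - lam, 0.0)

def least_soft_threshold_squares(T, y, lam=0.1, n_iter=100):
    """min_{x,e} 0.5*||y - T x - e||^2 + lam*||e||_1
    Alternate between least squares for x and soft thresholding for the
    sparse (Laplacian) error term e; the problem is jointly convex, so the
    alternation converges to the global minimum."""
    e = np.zeros_like(y)
    for _ in range(n_iter):
        x, *_ = np.linalg.lstsq(T, y - e, rcond=None)
        e = soft(y - T @ x, lam)
    return x, e
```

The sparse term `e` absorbs outlier pixels (occlusion, specularities), so the least-squares fit of `x` sees mostly the Gaussian part of the error.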

15.
This paper presents a novel formulation for contour tracking. We model the second-order statistics of image regions and perform covariance matching under the variational level set framework. Specifically, a covariance matrix is adopted as the visual object representation for partial differential equation (PDE) based contour tracking. Log-Euclidean calculus is used as the covariance distance metric instead of the Euclidean distance, which is unsuitable for measuring similarities between covariance matrices because the matrices typically lie on a non-Euclidean manifold. A novel image energy functional is formulated by minimizing the distance metric between the candidate object region and a given template and maximizing the one between the background region and the template. The corresponding gradient flow is then derived according to a variational approach, enabling PDE-based contour tracking. Experiments on several challenging sequences prove the validity of the proposed method.
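The region covariance descriptor and the Log-Euclidean distance used above can be computed directly. The per-pixel feature set below (coordinates, intensity, gradient magnitudes) is one common choice and an assumption here, not necessarily the paper's:

```python
import numpy as np

def covariance_descriptor(region):
    """5x5 covariance of per-pixel features (x, y, intensity, |Ix|, |Iy|),
    regularized to stay symmetric positive definite."""
    h, w = region.shape
    ys, xs = np.mgrid[0:h, 0:w]
    gy, gx = np.gradient(region.astype(float))
    F = np.stack([xs.ravel(), ys.ravel(), region.ravel(),
                  np.abs(gx).ravel(), np.abs(gy).ravel()])
    return np.cov(F) + 1e-6 * np.eye(5)

def spd_logm(C):
    """Matrix logarithm of an SPD matrix via eigendecomposition."""
    w, V = np.linalg.eigh(C)
    return (V * np.log(w)) @ V.T

def log_euclidean_distance(C1, C2):
    """Log-Euclidean metric: Frobenius distance between matrix logs."""
    return float(np.linalg.norm(spd_logm(C1) - spd_logm(C2), 'fro'))
```

Taking the matrix logarithm first is what respects the manifold structure the abstract mentions: plain Frobenius distance between the raw covariance matrices would not.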

16.
To improve the performance of sparse-representation tracking models, an object tracking algorithm based on a global and local inverse-sparse appearance model (GLIS) is proposed. First, an inverse sparse representation solves the optimization problem in a single pass to compute all particle weights, improving real-time performance. Then, a ranking mechanism based on a joint discriminative similarity map (JDS map) improves robustness: the candidate target is divided into blocks whose weighted sparse solutions are computed separately, and the blocks with different weights are concatenated into a whole whose sparse solution is also computed. Finally, a joint mechanism merges the two kinds of sparse solutions into the JDS map. During tracking, a dual-template update mechanism refreshes both the target template and the weight template. Experiments show that the algorithm still tracks targets accurately in complex environments.

17.
Objective: Although sparse-representation trackers achieve good results, they still cannot fully solve tracking in complex settings with noise, rotation, occlusion, motion blur, illumination changes, and pose changes. Targeting occlusion, rotation, pose change, and motion blur, a tracking method combining sparse representation with prior probability within a particle filter framework is proposed. Method: The importance of each target template is measured by a prior probability, which is introduced into the regularization model as the main basis for template updates, yielding a new sparse representation model for candidate targets. Results: On multiple test video sequences, the algorithm achieves better tracking performance than several popular algorithms, with an average center error of 6.77 pixels and an average tracking success rate of 97% on five classic test videos, both better than those of the other algorithms. Conclusion: The experimental results show that the algorithm tracks targets stably and reliably in various videos containing occlusion, rotation, pose change, and motion blur, and is suitable for target tracking in complex video surveillance scenes.

18.
Objective: In visual tracking, the target state at each moment is approximately represented by a linear combination of template data learned online. Since the tracked target is affected by various complex disturbances from itself or the scene, the tracker's modeling ability depends heavily on the generality of the template data and the accuracy of its error estimation. Many existing algorithms represent sample signals as vectors, which alters the original data structure and severely damages the natural relationships among data elements; this representation also raises the data dimensionality, incurring computational complexity and wasted resources. This paper studies the data representation and modeling mechanism of video tracking more deeply from a multilinear-analysis perspective and provides a more compact and effective solution. Method: In the proposed tracking framework, candidate samples and their reconstructed signals are represented as tensors, preserving the original data structure. When the tracker outputs candidate appearance states, the modeling task of the tracking system is organized around the good multilinear properties of tensors: the tensor nuclear norm and the L1 norm regularize the relevant components of the objective function, and under a multi-task state-learning assumption the independence and interdependence of the candidate appearance representation tasks are fully exploited. Results: The structured tensor data representation and its multi-task observation model effectively address the data representation and computational complexity problems of the tracking system, and provide a simpler and more effective route to joint multi-task learning of candidate appearance models. When the tracker encounters strongly destructive noise, the tensor-nuclear-norm-constrained error estimation fully mines comprehensive target information under the joint multi-task learning framework, adapting better to visual changes caused by intrinsic or extrinsic factors. Experiments on widely used test videos show that the algorithm represents candidate appearance models more robustly: compared with several strong competing algorithms, the tracked target image patches reach an average center location error of 4.2 and an average overlap rate of 0.82, showing better tracking accuracy. Conclusion: Extensive experiments verify that the tensor nuclear norm regression model and its error estimation mechanism construct sample signals closer to the target's true state at each moment and rigorously probe the true state information of every candidate under the multi-task learning framework, thereby alleviating model degradation and tracking drift.

19.
Fully convolutional Siamese-network tracking algorithms use a single template during tracking and therefore tend to drift and lose accuracy when the appearance of the moving target changes. This paper proposes a tracking algorithm that fuses multiple templates in a Siamese network. The algorithm builds a template library at the feature level and uses the average peak-to-correlation energy and template similarity to ensure the validity of every template in the library, so that multiple response maps can be fused for higher tracking accuracy. …
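The average peak-to-correlation energy used above to vet template quality is commonly computed from the tracker's response map as follows; this standard formulation is assumed here and may differ in detail from the paper's:

```python
import numpy as np

def apce(response):
    """Average peak-to-correlation energy of a response map: high for a
    single sharp peak (confident match), low for multi-modal or flat maps."""
    f_max, f_min = response.max(), response.min()
    return float((f_max - f_min) ** 2 /
                 np.mean((response - f_min) ** 2))
```

A template whose response map yields a low APCE (for example under occlusion) would be kept out of the library update, which is what protects the fused result from drift.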

20.
Zhang Huan, Liu Xiaolin. Computer Simulation, 2006, 23(10): 199-201, 226
To achieve fast localization and real-time tracking of a target in an image sequence, this paper proposes a fast target tracking algorithm based on a deformable model. Given a known model, the idea of region-model correlation matching is used to update the target model in real time, exploiting the fact that during continuous motion the target's shape changes little between two consecutive frames and its velocity and displacement change little between adjacent frames; the target model from the current frame serves as the prior model for the next. The target is tracked by combining model gradient information, motion information, and model-region feature matching. Because the algorithm considers both the region information and the contour information of the target model, it is not very sensitive to background clutter. In head-tracking experiments, the algorithm tracked moving targets with good real-time performance and accuracy and strong resistance to interference, largely meeting the requirements of robustness and speed.

