Similar Literature
20 similar documents found (search time: 234 ms)
1.
To address filter divergence and large tracking errors in nonlinear maneuvering-target tracking, this paper introduces the Unscented Kalman filter (UKF) into the interacting multiple model (IMM) algorithm for multi-station bearings-only tracking, yielding an IMM-UKF filtering algorithm that avoids the performance degradation caused by the large linearization errors the EKF introduces. The algorithm is compared with the extended Kalman filter (EKF) and the IMM-EKF algorithm; simulation results show that IMM-UKF is more stable than the EKF and improves both convergence speed and tracking accuracy.
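As a loose illustration (not the paper's implementation) of the interaction step that an IMM-UKF combines with per-model unscented filtering, the sketch below mixes the per-model state estimates through the Markov model-transition matrix before each model's own UKF cycle runs; the array layout and function name are assumptions.

```python
import numpy as np

def imm_mix(means, covs, mu, PI):
    """IMM interaction step: mix per-model estimates before filtering.

    means: (r, n) per-model state means, covs: (r, n, n) covariances,
    mu: (r,) model probabilities, PI: (r, r) model-transition matrix.
    Returns mixed means/covariances and predicted model probabilities.
    """
    c = PI.T @ mu                                  # predicted model probabilities c_j
    mix = (PI * mu[:, None]) / c[None, :]          # mixing weights mu_{i|j}
    mixed_means = np.einsum('ij,in->jn', mix, means)
    mixed_covs = np.zeros_like(covs)
    for j in range(len(mu)):
        for i in range(len(mu)):
            d = means[i] - mixed_means[j]
            mixed_covs[j] += mix[i, j] * (covs[i] + np.outer(d, d))
    return mixed_means, mixed_covs, c
```

After this mixing step, each model would run one UKF predict/update cycle, the model probabilities would be updated from the per-model measurement likelihoods, and the combined estimate would be the probability-weighted sum of the per-model estimates.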

2.
In practical radar applications, when the target's state equation and measurement equation are obtained in different coordinate systems, estimating the target state is no longer a linear problem but a nonlinear one. To improve tracking accuracy in this nonlinear setting, as well as real-time performance and statistical precision, a UKF (Unscented Kalman Filter) algorithm based on Doppler information is proposed: on top of all the information used by the original UKF, the target's Doppler information, i.e. its radial velocity, is introduced, and a new measurement model and the corresponding filtering algorithm are derived. The target trajectory is simulated in MATLAB; the results show that the UKF using radar Doppler measurements achieves higher estimation accuracy than the conventional UKF and the EKF.
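A minimal sketch, under assumed conventions, of the kind of Doppler-augmented measurement function described above for a 2D constant-velocity state [x, y, vx, vy]; the state layout and sensor position are illustrative, not taken from the paper.

```python
import numpy as np

def doppler_measurement(state, sensor_pos=np.zeros(2)):
    """Nonlinear measurement: range, azimuth, and radial (Doppler) velocity.

    state: [x, y, vx, vy] in Cartesian coordinates (assumed layout).
    """
    pos = state[:2] - sensor_pos
    vel = state[2:4]
    rng = np.linalg.norm(pos)                 # slant range
    azimuth = np.arctan2(pos[1], pos[0])      # bearing angle
    radial_velocity = pos @ vel / rng         # velocity projected on the line of sight
    return np.array([rng, azimuth, radial_velocity])
```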

3.
The extended Kalman filter (EKF) is a common algorithm in target tracking, but it relies on a Taylor-series expansion, which is computationally expensive and yields a mean and covariance accurate only to first order, so its filtering accuracy is limited. To improve both filtering speed and accuracy, the Unscented transform is combined with the Kalman filter to build an Unscented Kalman filter (UKF) model. The Unscented transform, based on Gaussian-distribution theory, uses Sigma points to capture the mean and covariance accurately up to the third-order moments, improving filtering accuracy. The computation involves only standard vector and matrix operations and requires no Jacobian or Hessian of the nonlinear function, which improves filtering speed. Simulation comparisons on motion experiments show that, for nonlinear target-tracking systems, the UKF achieves higher filtering accuracy and stability.
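The unscented transform referred to above can be sketched as follows: 2n+1 sigma points are drawn from the mean and covariance, pushed through the nonlinear function, and recombined with fixed weights, with no Jacobian or Hessian needed. The scaling parameters alpha, beta, and kappa below are common defaults, not values from the paper.

```python
import numpy as np

def unscented_transform(x, P, f, alpha=1e-3, beta=2.0, kappa=0.0):
    """Propagate mean x and covariance P through a nonlinear function f
    using 2n+1 sigma points (no Jacobians needed)."""
    n = x.size
    lam = alpha**2 * (n + kappa) - n
    S = np.linalg.cholesky((n + lam) * P)      # matrix square root
    sigmas = np.vstack([x, x + S.T, x - S.T])  # 2n+1 sigma points
    wm = np.full(2 * n + 1, 0.5 / (n + lam))   # mean weights
    wc = wm.copy()                             # covariance weights
    wm[0] = lam / (n + lam)
    wc[0] = wm[0] + (1 - alpha**2 + beta)
    Y = np.array([f(s) for s in sigmas])       # propagate each sigma point
    y_mean = wm @ Y
    d = Y - y_mean
    y_cov = (wc[:, None] * d).T @ d
    return y_mean, y_cov
```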

4.
Target tracking based on the UT transform and Kalman filtering (cited 2 times: 0 self-citations, 2 by others)
To improve filtering speed and accuracy, the Unscented transform is combined with the Kalman filter to build an Unscented Kalman filter (UKF) model. The Unscented transform, based on Gaussian-distribution theory, uses Sigma points to capture the mean and covariance accurately up to the third-order moments, improving filtering accuracy. The computation involves only standard vector and matrix operations and requires no Jacobian or Hessian of the nonlinear function, which improves filtering speed. Simulation comparisons on a designed motion experiment show that, for nonlinear target-tracking systems, the UKF achieves higher filtering accuracy and stability.

5.
To reduce modeling error, a nonlinear model for identifying the error model of an inertial navigation platform is built using the direct method. The Unscented Kalman filter (UKF), a relatively new nonlinear filtering algorithm, is introduced into the platform error-model identification. Exploiting the structure of the system model, the standard UKF is simplified; the improved UKF has a lower computational load and a simpler structure while matching the filtering accuracy of the standard UKF. Simulations of platform error-model identification are carried out with both the extended Kalman filter (EKF) and the improved UKF. The results show that the improved UKF achieves markedly higher filtering accuracy than the EKF.

6.
Application of an improved particle filter algorithm to multi-station bearings-only passive tracking (cited 2 times: 0 self-citations, 2 by others)
For the multi-station bearings-only passive localization and tracking problem, an improved particle filter based on uniform resampling and an adaptive factor is presented. First, using an unscented Kalman (UKF) particle filter, the latest measurements are incorporated into the proposal distribution so that the posterior better reflects the true state; resampling and robust estimation alleviate the particle degeneracy problem. Second, an adaptive factor is introduced to adjust the ratio between the state-model covariance and the measurement-model covariance of the UKF, yielding a more accurate posterior distribution. Simulation results show that the improved particle filter achieves multi-station bearings-only passive tracking with higher accuracy than conventional nonlinear filters.

7.
To improve joint detection and tracking of small targets in complex environments, a particle-filter-based track-before-detect (TBD) algorithm for small targets is proposed. The particle filter jointly samples the target's motion state and its presence state, and the TBD algorithm is implemented with the Unscented particle filter (UPF). The new algorithm genuinely brings the tracking idea into target detection; for joint detection and tracking of small targets it offers high sensitivity, high capture probability and tracking performance, and low demands on the number of particles, effectively strengthening the "detect-while-tracking" capability of infrared search-and-track systems.

8.
To address the reduced accuracy or even divergence of the Unscented Kalman Filter (UKF) under strong system nonlinearity or an inaccurate state model, an improved strong-tracking SVD-UKF algorithm is proposed. Singular Value Decomposition (SVD) is used to improve the iteration of the state covariance matrix in the UKF, guaranteeing that the covariance stays non-negative definite and the iteration stays stable. Within the Strong Tracking Filter (STF) framework, multiple fading factors are introduced into the improved SVD-UKF to adaptively adjust the state covariance, so that the filter keeps tracking the true system state when it changes abruptly. The algorithm is verified by simulation in BDS/INS integrated navigation, and the results demonstrate its effectiveness.
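A hedged sketch of the SVD-based factorization idea only: the Cholesky factor normally used to spread sigma points is replaced by an SVD-derived square root, which still works when round-off has pushed the covariance slightly away from positive definiteness. The strong-tracking fading factors are not shown.

```python
import numpy as np

def svd_sigma_points(x, P, lam):
    """Generate sigma points from an SVD-based square root of the covariance.

    For symmetric P the SVD coincides with the eigendecomposition, so
    U*diag(sqrt(s)) is a valid matrix square root even when a Cholesky
    factorization would fail due to tiny negative eigenvalues.
    """
    n = x.size
    U, s, _ = np.linalg.svd((n + lam) * P)
    sqrt_P = U @ np.diag(np.sqrt(np.maximum(s, 0.0)))
    return np.vstack([x, x + sqrt_P.T, x - sqrt_P.T])   # 2n+1 sigma points
```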

9.
For online state estimation in nonlinear, non-Gaussian systems, an improved particle filter is proposed. The algorithm uses the Unscented Kalman filter (UKF) to generate state estimates and adds a fading-memory factor in the measurement update, weakening the filter's dependence on historical information and strengthening the corrective effect of the current measurement; this yields a better proposal distribution and effectively suppresses particle degeneracy. Theoretical analysis and experiments show that the resulting memory-attenuated unscented particle filter (MAUPF) clearly outperforms the standard particle filter and the Unscented particle filter.
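One common way to realize a fading-memory factor is to inflate the propagated covariance so the update leans more on the current measurement; a minimal sketch is shown below for a linear time update (inside a UKF the same inflation would be applied to the sigma-point predicted covariance), with an illustrative factor value.

```python
import numpy as np

def fading_memory_predict(F, P, Q, s=1.05):
    """Time update with a fading-memory factor s >= 1.

    Inflating the propagated covariance discounts older information,
    so the subsequent measurement update weights the new data more.
    """
    return s * (F @ P @ F.T) + Q
```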

10.
The particle filter (PF) algorithm is applied to target tracking in wireless sensor networks (WSNs), and concrete steps for implementing the particle filter are given. Sensor nodes are dynamically organized into clusters to track a single target moving in a straight line at constant velocity. Simulation experiments are carried out with the extended Kalman filter (EKF), the unscented Kalman filter (UKF), and the PF. The results show that, for target tracking in wireless sensor networks, the PF achieves higher filtering accuracy and better overall performance than the EKF and UKF; in practice it also effectively handles tracking in nonlinear, non-Gaussian environments and is simple to implement, which improves its usability.
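A compact sketch of the generic predict/update/resample cycle such an implementation follows; the motion and likelihood models are passed in as functions, and the resampling threshold of half the particle count is an illustrative choice rather than the paper's.

```python
import numpy as np

def bootstrap_pf_step(particles, weights, propagate, likelihood, z, rng):
    """One predict/update/resample cycle of a bootstrap particle filter.

    particles: (N, d) state hypotheses; weights: (N,) normalized weights;
    propagate: draws x_k ~ p(x_k | x_{k-1}); likelihood: p(z | x_k) per particle;
    z: current measurement; rng: np.random.Generator.
    """
    particles = propagate(particles, rng)            # predict through motion model
    weights = weights * likelihood(z, particles)     # weight by measurement likelihood
    weights /= weights.sum()
    n_eff = 1.0 / np.sum(weights**2)                 # effective sample size
    if n_eff < 0.5 * len(weights):                   # resample when degeneracy sets in
        idx = rng.choice(len(weights), size=len(weights), p=weights)
        particles = particles[idx]
        weights = np.full(len(weights), 1.0 / len(weights))
    estimate = weights @ particles                   # posterior mean estimate
    return particles, weights, estimate
```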

11.
Objective: L1 tracking is robust to partial occlusion but is prone to model drift and computationally slow. To address both problems, this paper proposes a visual tracking method based on discriminative sparse representation. Method: Taking interference from background and occlusion into account, a discriminative sparse representation model is proposed, and a fast solver for the model is designed based on block-coordinate optimization, using a learned iterative shrinkage-thresholding algorithm and soft-thresholding operations. Results: On 8 image sequences, the method is compared with 4 existing classical trackers in terms of robustness and sparse-coding time. In both the qualitative and quantitative robustness comparisons, the method adapts well to a variety of disturbances encountered during tracking, and its precision curve stays above those of the other methods over the whole location-error threshold range of 0 to 50 pixels. In terms of sparse-coding time, with templates of size 16×16 and 32×32 the algorithm takes 0.152 s and 0.257 s respectively, clearly faster than the other methods in the experiments. Conclusion: Compared with classical trackers, the method achieves fast target tracking while coping with occlusion, background clutter, appearance change, and other adverse factors. Given its fast sparse coding and its robustness to many disturbances, it is suitable for practical applications such as video surveillance and sports analysis.
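The soft-thresholding operation at the heart of such solvers can be sketched as below, together with one iteration of plain ISTA (the paper uses a learned variant, which is not reproduced here).

```python
import numpy as np

def soft_threshold(v, tau):
    """Element-wise soft-thresholding, the proximal operator of the l1 norm."""
    return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)

def ista_step(D, y, x, lam, step):
    """One iteration of plain ISTA for min_x 0.5*||y - D x||^2 + lam*||x||_1."""
    grad = D.T @ (D @ x - y)                  # gradient of the quadratic term
    return soft_threshold(x - step * grad, step * lam)
```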

12.
Target tracking with weighted local features and a discriminative dictionary (cited 4 times: 3 self-citations, 1 by others)
Objective: Most current sparse-representation trackers consider only the minimum reconstruction error of global or local features, make insufficient use of the sparse coding coefficients, or ignore the discriminative power of the dictionary; in particular, when the target is occluded by a similar object the target is often lost. To address these problems, a new sparse appearance model tracker based on a discriminative dictionary and weighted local features (SPAM-DDWF) is proposed. Method: First, a discriminative dictionary is learned with the Fisher criterion, and the extracted local features are structurally analyzed to separate target from background. Second, a new weighted similarity measure is proposed to handle occlusion and improve tracking precision. In addition, a weight-update strategy based on the reconstruction coefficients lets the algorithm adapt better to appearance changes of the target and reduces the probability of drift when occlusion occurs. Results: On several benchmark image sequences and compared with several popular methods, the algorithm maintains a high success rate and a low drift error under illumination change, cluttered background, occlusion, and other conditions, with an average success rate of 76.8% and an average drift error of 3.7. Conclusion: The experiments show that the algorithm is effective and robust, and in particular can still track the target fairly accurately when it is occluded by similar objects.

13.
Objective: Visual object tracking algorithms fall mainly into two families: correlation-filter-based and Siamese-network-based. The former are accurate but too slow to meet real-time requirements. The latter achieve excellent speed and accuracy; however, the vast majority of Siamese trackers still use a single fixed template, which makes it hard for them to handle occlusion, appearance change, and similar distractors. To overcome these shortcomings, an efficient and robust double-template-fusion tracker (siamese tracker with double template fusion, Siam-DTF) is proposed. Method: The annotated box in the first frame serves as the initial template; an appearance-template branch, via an appearance-template search module, then obtains a suitable, high-quality appearance template for the target during tracking; finally, a double-template fusion module performs response-map fusion and feature fusion. The fusion module combines the respective strengths of the initial template and the appearance template, improving robustness. Results: The method is compared with 9 recent methods on 3 mainstream public tracking datasets. On OTB2015 (object tracking benchmark 2015), its AUC (area under curve) and precision are 0.701 and 0.918, improvements of 0.6% and 1.3% over the second-best method, SiamRPN++ (siamese region proposal network++). On VOT2016 (visual object tracking 2016) it achieves the highest expected average overlap (EAO) and the fewest failures, 0.477 and 0.172 respectively; its EAO is 1.6% higher than the baseline SiamRPN++ and 1.1% higher than the second-best method, SiamMask_E. On VOT2018 its expected average overlap and accuracy are 0.403 and 0.608, ranking second and first among all methods respectively. The average running speed reaches 47 frames/s, well above the real-time requirement for tracking. Conclusion: The proposed double-template-fusion tracker effectively overcomes the shortcomings of current Siamese-network trackers, improving tracking accuracy and robustness while preserving speed, and is suitable for engineering deployment and applications.
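As a loose illustration of response-map fusion only, the sketch below blends the two response maps with a fixed weight and takes the peak of the fused map as the target position; the actual Siam-DTF fusion module is learned and also fuses features, neither of which is shown.

```python
import numpy as np

def fuse_response_maps(r_init, r_appear, w=0.5):
    """Weighted fusion of the response maps from the initial template and the
    appearance template; the peak of the fused map localizes the target.
    The fixed weight w is an illustrative choice, not the paper's."""
    r = w * r_init + (1.0 - w) * r_appear
    return np.unravel_index(np.argmax(r), r.shape)
```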

14.
Owing to the inherent lack of training data in visual tracking, recent work on deep learning-based trackers has focused on learning a generic representation offline from large-scale training data and transferring the pre-trained feature representation to a tracking task. Offline pre-training is time-consuming, and the learned generic representation may be either less discriminative for tracking specific objects or overfitted to typical tracking datasets. In this paper, we propose an online discriminative tracking method based on robust feature learning without large-scale pre-training. Specifically, we first design a PCA filter bank-based convolutional neural network (CNN) architecture to learn robust features online with a few positive and negative samples in the high-dimensional feature space. Then, we use a simple soft-thresholding method to produce sparse features that are more robust to target appearance variations. Moreover, we increase the reliability of our tracker using edge information generated from edge box proposals during the process of visual tracking. Finally, effective visual tracking results are achieved by systematically combining the tracking information and edge box-based scores in a particle filtering framework. Extensive results on the widely used online tracking benchmark (OTB-50) with 50 videos validate the robustness and effectiveness of the proposed tracker without large-scale pre-training.

15.
This paper presents a novel online object tracking algorithm with sparse representation for learning effective appearance models under a particle filtering framework. Compared with the state-of-the-art l1 sparse tracker, which simply assumes that the image pixels are corrupted by independent Gaussian noise, our proposed method is based on information-theoretic learning and is much less sensitive to corruptions; it achieves this by assigning small weights to occluded pixels and outliers. The most appealing aspect of this approach is that it can yield robust estimations without using the trivial templates adopted by the previous sparse tracker. By using weighted linear least squares with non-negativity constraints at each iteration, a sparse representation of the target candidate is learned; to further improve the tracking performance, target templates are dynamically updated to capture appearance changes. In our template update mechanism, the similarity between the templates and the target candidates is measured by the earth mover's distance (EMD). Using the largest open benchmark for visual tracking, we empirically compare two ensemble methods constructed from six state-of-the-art trackers against the individual trackers. The proposed tracking algorithm runs in real time, and on challenging sequences performs favorably in terms of efficiency, accuracy and robustness against state-of-the-art algorithms.
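A hedged sketch of the kind of weighted non-negative least-squares solve described above, using SciPy's nnls; the per-pixel weight vector would come from the information-theoretic weighting scheme, which is not reproduced here.

```python
import numpy as np
from scipy.optimize import nnls

def weighted_nnls_coding(D, y, w):
    """Code a target candidate y over template matrix D by weighted least
    squares with non-negativity constraints: pixels with small weights
    (e.g. suspected occlusions or outliers) contribute little to the fit."""
    sw = np.sqrt(w)                           # w: per-pixel weights, shape (m,)
    coeffs, _ = nnls(sw[:, None] * D, sw * y)
    return coeffs
```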

16.

Visual object tracking is of great application value in video monitoring systems. Recent work on video tracking has taken into account the spatial relationship between the targeted object and its background. In this paper, the spatial relationship is combined with the temporal relationship between features on different video frames, so that a real-time tracker is designed based on a hash algorithm with spatio-temporal cues. Different from most of the existing work on video tracking, which is regarded as a mechanism for image matching or image classification alone, we propose a hierarchical framework and conduct both matching and classification tasks to generate a coarse-to-fine tracking system. We develop a generative model under a modified particle filter with hash fingerprints for the coarse matching by the maximum a posteriori, and a discriminative model for the fine classification by maximizing a confidence map based on a context model. The confidence map reveals the spatio-temporal dynamics of the target. Because a hash fingerprint is merely a binary vector and the modified particle filter uses only a small number of particles, our tracker has a low computation cost. By conducting experiments on eight challenging video sequences from a public benchmark, we demonstrate that our tracker outperforms eight state-of-the-art trackers in terms of both accuracy and speed.
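A minimal sketch of coarse matching with binary hash fingerprints: the Hamming distance between a candidate's fingerprint and the target's is mapped to a likelihood-like particle weight; the Gaussian mapping and the bandwidth value are illustrative assumptions.

```python
import numpy as np

def hash_match_weight(fp_candidate, fp_target, sigma=8.0):
    """Turn the Hamming distance between two binary hash fingerprints into a
    likelihood-like score for a particle; sigma is an illustrative bandwidth."""
    hamming = np.count_nonzero(fp_candidate != fp_target)
    return np.exp(-hamming**2 / (2.0 * sigma**2))
```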

17.
Objective: Although sparse-representation trackers perform well, they still cannot fully solve target tracking under complex conditions such as noise, rotation, occlusion, motion blur, illumination change, and pose change. Targeting occlusion, rotation, pose change, and motion blur, a tracking method combining sparse representation with prior probabilities within a particle filter framework is proposed. Method: The importance of each target template is measured by a prior probability, which is introduced into the regularization model as the main basis for template updating, yielding a new sparse representation model for candidate targets. Results: On several test video sequences the algorithm achieves better tracking performance than several popular algorithms; on 5 classical test videos the average center error is 6.77 pixels and the average tracking success rate is 97%, both better than the other algorithms. Conclusion: The experiments show that the algorithm tracks targets stably and reliably in videos containing occlusion, rotation, pose change, and motion blur, and is suitable for target tracking in complex video-surveillance scenes.

18.
Visual tracking is an important task in various computer vision applications including visual surveillance, human computer interaction, event detection, and video indexing and retrieval. Recent state-of-the-art sparse representation (SR) based trackers show better robustness than many of the other existing trackers. One of the issues with these SR trackers is low execution speed. The particle filter framework is one of the major aspects responsible for slow execution, and is common to most of the existing SR trackers. In this paper, we propose a robust interest-point-based tracker in an l1 minimization framework that runs in real time with performance comparable to the state-of-the-art trackers. In the proposed tracker, the target dictionary is obtained from the patches around target interest points. Next, the interest points from the candidate window of the current frame are obtained. The correspondence between target and candidate points is obtained by solving the proposed l1 minimization problem. In order to prune the noisy matches, a robust matching criterion is proposed, where only the reliable candidate points that mutually match with target and candidate dictionary elements are considered for tracking. The object is localized by measuring the displacement of these interest points. The reliable candidate patches are used for updating the target dictionary. The performance and accuracy of the proposed tracker are benchmarked with several complex video sequences. The tracker is found to be considerably fast as compared to the reported state-of-the-art trackers. The proposed tracker is further evaluated for various local patch sizes, numbers of interest points, and regularization parameters. The performance of the tracker for various challenges including illumination change, occlusion, and background clutter has been quantified with a benchmark dataset containing 50 videos.

19.
Objective: Low-rank sparse learning trackers tend to drift under fast target motion and heavy occlusion. To address this, a reverse low-rank sparse learning tracker with a variational-adjustment constraint is proposed. Method: A nuclear-norm convex relaxation of the low-rank constraint models the temporal correlation among candidate particles, removing irrelevant particles and adapting to target appearance changes. Target appearance is described by reverse sparse representation: the target templates are sparsely represented by the candidate particles, which reduces the number of L1 optimization problems during online tracking and improves efficiency. In the bounded-variation space, variational adjustment models the differences of the sparse coefficients, constraining the target appearance to change little between adjacent frames while still allowing jump discontinuities between consecutive frames, so that fast motion can be accommodated. Results: Four standard video sequences from the OTB (object tracking benchmark) dataset, covering challenge factors such as heavy occlusion, fast motion, and illumination and scale changes, are used for testing, and the algorithm is compared qualitatively and quantitatively with 5 popular algorithms. The qualitative analysis compares performance on the main challenge factors of each sequence; the quantitative analysis compares accuracy via the central pixel error (CPE). Compared with the CNT (convolutional networks training), SCM (sparse collaborative model), IST (inverse sparse tracker), DDL (discriminative dictionary learning), and LLR (locally low-rank representation) algorithms, the average CPE improves by 2.80, 4.16, 13.37, 35.94, and 41.59, respectively. The results show that the algorithm achieves high tracking accuracy and is more robust to the above challenge factors. Conclusion: The proposed tracker combines the strengths of low-rank sparse learning and variational optimization adjustment, achieves high tracking accuracy in complex scenes, and is especially robust for effective tracking under heavy occlusion and fast motion.

20.
In this paper, we propose a novel visual tracking algorithm using the collaboration of generative and discriminative trackers under the particle filter framework. Each particle denotes a single task, and we encode all the tasks simultaneously in a structured multi-task learning manner. Then, we implement a generative tracker and a discriminative tracker. The discriminative tracker considers the overall information of the object to represent the object appearance, while the generative tracker takes the local information of the object into account for handling partial occlusions. Therefore, the two models are complementary during tracking. Furthermore, we design an effective dictionary updating mechanism. The dictionary is composed of fixed and variational parts, and the variational parts are progressively updated using a Metropolis–Hastings strategy. Experiments on different challenging video sequences demonstrate that the proposed tracker performs favorably against several state-of-the-art trackers.
