Similar Documents
20 similar documents found.
1.
Tracking multiple objects is more challenging than tracking a single object. Problems arise in multiple-object tracking that do not exist in single-object tracking, such as object occlusion, the appearance of a new object, the disappearance of an existing object, and updating an occluded object. In this article, we present an approach to multiple-object tracking in the presence of occlusions, background clutter, and changing appearance. Occlusion is handled by considering the predicted trajectories of the objects based on a dynamic model and likelihood measures. We also propose target-model-update conditions that ensure proper tracking of multiple objects. The proposed method is implemented in a probabilistic framework, a particle filter, in conjunction with a color feature. The particle filter has proven very successful for nonlinear and non-Gaussian estimation problems. It approximates the posterior probability density of the state, such as the object's position, by a set of samples or particles, each representing a hypothetical state of the tracked object together with its weight. The observation likelihood of the objects is modeled with a color histogram, and each sample's weight is computed from the Bhattacharyya coefficient, which measures the similarity between the sample's histogram and a specified target model. The algorithm successfully tracks multiple objects in the presence of occlusion and noise. Experimental results show the effectiveness of our method in tracking multiple objects.
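The color-likelihood step described above can be sketched as follows. This is an illustrative implementation, not the authors' code; the Gaussian likelihood width `sigma` is an assumed parameter.

```python
import numpy as np

def bhattacharyya(p, q):
    """Bhattacharyya coefficient between two normalized histograms (1 = identical)."""
    return float(np.sum(np.sqrt(p * q)))

def particle_weights(particle_hists, target_hist, sigma=0.1):
    """Weight each particle by the similarity of its color histogram to the target model."""
    d = np.array([np.sqrt(max(0.0, 1.0 - bhattacharyya(h, target_hist)))
                  for h in particle_hists])        # Bhattacharyya distance per particle
    w = np.exp(-d ** 2 / (2 * sigma ** 2))         # Gaussian observation likelihood
    return w / w.sum()                             # normalize to a probability distribution
```

Particles whose histograms match the target model receive most of the weight, so resampling concentrates on the likely object position.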

2.
This paper proposes a moving-object tracking method based on the visual attention mechanism. Drawing on research into human visual attention, the method builds a computational model of visual attention to compute the visual saliency of each part of the video, and uses the saliency results to extract salient objects from the video frames. A color distribution model serves as the feature representation of the target and is matched against each salient object in the video to achieve tracking. Experiments on several video sequences are reported with results and analysis; the results show that the proposed object detection and tracking algorithm is correct and effective.

3.
4.
In this paper, we propose a discriminative multi-task object tracking method with active feature selection and drift correction. The method formulates object tracking in a particle filter framework as multi-task discriminative tracking. As opposed to generative methods that handle particles separately, the proposed method learns the representation of all the particles jointly, so that their coefficients are similar. The tracking algorithm starts from an active feature selection scheme, which adaptively chooses a suitable number of discriminative features from the tracked target and background in the dynamic environment. Based on the selected feature space, a discriminative dictionary is constructed and updated dynamically, and only a few of its templates are used to represent all the particles at each frame. In other words, all the particles share the same dictionary templates, and their representations are obtained jointly by discriminative multi-task learning. The particle with the highest similarity to the dictionary templates is selected as the next tracked target state. This joint sparse and discriminative learning exploits the relationship between particles and improves tracking performance. To alleviate the visual drift problem encountered in object tracking, a two-stage particle filtering algorithm is proposed that performs drift correction by exploiting both the ground-truth information of the first frame and observations obtained online from the current frame. Experimental evaluations on challenging sequences demonstrate the effectiveness, accuracy, and robustness of the proposed tracker in comparison with state-of-the-art algorithms.

5.
AD-HOC (Appearance-Driven Human tracking with Occlusion Classification) is a complete framework for tracking multiple people in video surveillance applications in the presence of large occlusions. The appearance-based approach allows estimation of the pixel-wise shape of each tracked person even during occlusion, which can be very useful for higher-level processes such as action recognition or event detection. A first step predicts the position of all the objects in the new frame, while a MAP framework provides the best placement. A second step associates each candidate foreground pixel with an object according to mutual object position and color similarity. A novel definition of non-visible regions accounts for the parts of the objects that are not detected in the current frame, classifying them as dynamic, scene, or apparent occlusions. Results on surveillance videos are reported, using in-house produced videos and the PETS2006 test set.

6.
Multi-Camera Coordination in Surveillance Systems
This paper describes a distributed surveillance system for tracking multiple targets in indoor environments. The system consists of several inexpensive fixed-lens cameras, with per-camera processing modules and a central module that coordinates the tracking tasks among the cameras. Since each moving target may be tracked by several cameras simultaneously, selecting the most suitable camera for a given target becomes a problem, especially when system resources are scarce. The proposed algorithm assigns targets to cameras according to the target-to-camera distance while taking occlusion into account, so that when an occlusion occurs, the system assigns the occluded target to the nearest camera that can still see it. Experiments show that the system coordinates multiple cameras well for target tracking and handles occlusions properly.

7.
Aiming at tracking visual objects under harsh conditions such as partial occlusions, illumination changes, and appearance variations, this paper proposes an iterative particle filter incorporating an adaptive region-wise linear subspace (RWLS) representation of objects. The iterative particle filter employs a coarse-to-fine scheme to decisively generate particles that convey better hypothetical estimates of the tracking parameters; aggregating these good estimates yields higher tracking accuracy. Accompanying the iterative particle filter, the RWLS representation is specifically designed to tackle the partial-occlusion problem, which often causes tracking failure. Moreover, the RWLS representation is made adaptive through an efficient incremental updating mechanism, which adapts the RWLS to gradual changes in object appearance and illumination conditions. Additionally, we propose an adaptive mechanism that continuously adjusts the object templates so that varying appearances of tracked objects can be handled well. Experimental results demonstrate that the proposed approach outperforms related prior art.

8.
Dynamic Template Tracking and Recognition
In this paper we address the problem of tracking non-rigid objects whose local appearance and motion change as a function of time. This class of objects includes dynamic textures such as steam, fire, smoke, and water, as well as articulated objects such as humans performing various actions. We model the temporal evolution of the object's appearance/motion using a linear dynamical system. We learn such models from sample videos and use them as dynamic templates for tracking objects in novel videos. We pose the problem of tracking a dynamic non-rigid object in the current frame as a maximum a posteriori estimate of the location of the object and the latent state of the dynamical system, given the current image features and the best estimate of the state in the previous frame. The advantage of our approach is that we can specify a priori the type of texture to be tracked in the scene by using previously trained models for the dynamics of these textures. Our framework naturally generalizes common tracking methods such as SSD and kernel-based tracking from static templates to dynamic templates. We test our algorithm on synthetic as well as real examples of dynamic textures and show that our simple dynamics-based trackers perform on par with, if not better than, the state of the art. Since our approach is general and applicable to any image feature, we also apply it to the problem of human action tracking and build action-specific optical flow trackers that perform better than the state of the art when tracking a human performing a particular action. Finally, since our approach is generative, we can use a priori trained trackers for different texture or action classes to simultaneously track and recognize the texture or action in the video.
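The linear dynamical system used as a dynamic template can be sketched as follows. The state dimension, transition matrix, and output map here are illustrative assumptions, not the authors' learned models.

```python
import numpy as np

rng = np.random.default_rng(0)

# Linear dynamical system: a hidden state x_t drives the observed appearance y_t.
#   x_{t+1} = A x_t + process noise
#   y_t     = C x_t + observation noise
A = np.array([[0.9, 0.2],
              [-0.2, 0.9]])               # state transition (assumed: slow rotation + decay)
C = rng.standard_normal((8, 2))           # maps the 2-D state to an 8-D "appearance" vector

def rollout(x0, steps, noise=0.0):
    """Generate a sequence of appearance vectors from the dynamic template."""
    xs, x = [], x0
    for _ in range(steps):
        x = A @ x + noise * rng.standard_normal(2)
        xs.append(C @ x)
    return np.array(xs)

frames = rollout(np.array([1.0, 0.0]), steps=20)   # one synthetic appearance trajectory
```

In tracking, the learned (A, C) pair predicts how the template should look next, replacing the static template of SSD-style trackers.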

9.
Due to the prevalence of digital video camcorders, home videos have become an important part of life-logs of personal experiences. To enable efficient video parsing, a critical step is to automatically extract the objects, events, and scene characteristics present in videos. This paper addresses the problem of extracting objects from home videos. Automatic detection of objects is a classical yet difficult vision problem, particularly for videos with complex scenes and unrestricted domains. Compared with edited and surveillance videos, home videos captured in uncontrolled environments usually exhibit several notable features such as shaking artifacts, irregular motions, and arbitrary settings. These characteristics have largely prohibited effective parsing of semantic video content using conventional vision analysis. In this paper, we propose a new approach to automatically locate multiple objects in home videos by taking into account both how and when to initialize objects. Previous approaches mostly consider how but not when, due to efficiency or real-time requirements; in home-video indexing, online processing is optional. Considering when alleviates some difficult problems and, most importantly, opens up the possibility of parsing semantic video objects. In our proposed approach, the how part is formulated as an object detection and association problem, while the when part is a saliency measurement that determines the best few locations at which to start multiple-object initialization.

10.
This paper presents a new method for real-time detection and tracking of moving objects in video. Multiple moving objects are detected with a gray-level step-contour method based on an improved timed Motion History Image (tMHI), and are tracked with a Kalman filter, yielding a trajectory curve for each moving object and thus multi-object tracking in the video. The method also noticeably improves the handling of occlusions among multiple objects. Experimental results show that it can effectively identify and accurately track multiple objects in complex scenes, with strong real-time performance and a high recognition rate, and that it is robust to disturbances such as illumination changes, rain, and fog in complex video-surveillance scenes.
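A constant-velocity Kalman filter of the kind used for the trajectory step can be sketched as follows. This is an illustrative implementation; the state layout and the noise covariances Q and R are assumptions, not values from the paper.

```python
import numpy as np

# Constant-velocity Kalman filter for one target's image position.
# State: [x, y, vx, vy]; measurement: [x, y] from the detector.
dt = 1.0
F = np.array([[1, 0, dt, 0],
              [0, 1, 0, dt],
              [0, 0, 1,  0],
              [0, 0, 0,  1]], float)      # constant-velocity motion model
H = np.array([[1, 0, 0, 0],
              [0, 1, 0, 0]], float)       # only the position is observed
Q = 0.01 * np.eye(4)                      # assumed process-noise covariance
R = 1.0 * np.eye(2)                       # assumed measurement-noise covariance

def kf_step(x, P, z):
    """One predict/update cycle given the detection z = [x, y]."""
    # predict
    x = F @ x
    P = F @ P @ F.T + Q
    # update
    S = H @ P @ H.T + R                   # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)        # Kalman gain
    x = x + K @ (z - H @ x)
    P = (np.eye(4) - K @ H) @ P
    return x, P

# track a target moving with constant velocity (1, 2) px/frame
x, P = np.zeros(4), 10.0 * np.eye(4)
for t in range(1, 31):
    x, P = kf_step(x, P, np.array([float(t), 2.0 * t]))
# x[2:] now approaches the true velocity (1, 2)
```

Running one such filter per detected object yields the per-object trajectory curves; gating detections by the predicted position helps bridge short occlusions.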

11.
We present a robust object tracking algorithm that handles spatially extended and temporally long object occlusions. The proposed approach is based on the concept of “object permanence” which suggests that a totally occluded object will re-emerge near its occluder. The proposed method does not require prior training to account for differences in the shape, size, color or motion of the objects to be tracked. Instead, the method automatically and dynamically builds appropriate object representations that enable robust and effective tracking and occlusion reasoning. The proposed approach has been evaluated on several image sequences showing either complex object manipulation tasks or human activity in the context of surveillance applications. Experimental results demonstrate that the developed tracker is capable of handling several challenging situations, where the labels of objects are correctly identified and maintained over time, despite the complex interactions among the tracked objects that lead to several layers of occlusions.

12.
13.
This paper presents an object tracking technique based on the Bayesian multiple hypothesis tracking (MHT) approach. Two algorithms, both based on the MHT technique, are combined to generate an object tracker. The first MHT algorithm is employed for contour segmentation, based on an edge map; the segmented contours are then merged to form recognisable objects. The second MHT algorithm is used for the temporal tracking of an object selected in the initial frame. An object is represented by key feature points extracted from it. The key points (mostly corner points) are detected using information obtained from the edge map and then tracked through the sequence. To confirm the correctness of the tracked key points, their locations along the trajectory are verified against the segmented object identified in each frame. If an acceptable number of key points lie on or near the contour of the object in a particular frame (the n-th frame), we conclude that the selected object has been tracked successfully in frame n.

14.
Objective: Long-range object tracking remains one of the most challenging tasks in video surveillance. Existing trackers often lose the target under occlusion or when the target disappears and reappears, preventing continuous, effective tracking. On the one hand, treating a reappearing target as a new object clearly does not meet practical needs; on the other hand, when a similar-looking object appears during tracking, the tracker is easily misled into following it, causing tracking failure. This paper proposes a recognition-aided tracking algorithm to address these problems. Method: Tracking is cast as the problem of finding correspondences between objects detected in successive frames; when a target disappears and reappears, a deep-learning network recovers its trajectory, improving long-range tracking and, to some extent, avoiding interference from similar objects. Results: In comparisons with related algorithms on standard datasets, the proposed algorithm effectively recovers trajectories when targets are occluded, cross paths, or disappear and reappear, improving tracking quality and enabling continuous, effective tracking of multiple targets. Conclusion: The proposed tracking algorithm, aided by deep-learning-based object recognition, is shown experimentally to recover the trajectories of targets that reappear after occlusion, and is suitable for continuous multi-object tracking in surveillance video.
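The frame-to-frame correspondence step can be illustrated with a simple IoU-based greedy matcher. This is a minimal sketch, not the paper's deep-learning-aided method; the matching threshold is an assumed parameter, and unmatched detections would be checked against lost tracks by appearance in the full approach.

```python
def iou(a, b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    return inter / (area(a) + area(b) - inter)

def associate(tracks, detections, thresh=0.3):
    """Greedy frame-to-frame matching by decreasing IoU.

    Returns (track_index, detection_index) pairs; detections left unmatched
    would start new tracks or revive lost ones in a full tracker.
    """
    pairs = sorted(((iou(t, d), i, j) for i, t in enumerate(tracks)
                    for j, d in enumerate(detections)), reverse=True)
    used_t, used_d, matches = set(), set(), []
    for s, i, j in pairs:
        if s >= thresh and i not in used_t and j not in used_d:
            matches.append((i, j))
            used_t.add(i)
            used_d.add(j)
    return matches
```

Casting tracking as detection association, as the abstract describes, makes trajectory recovery a matter of re-linking a reappearing detection to its old track rather than spawning a new identity.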

15.
Camera handoff is a crucial step in obtaining a continuously tracked and consistently labeled trajectory of the object of interest in multi-camera surveillance systems. Most existing camera handoff algorithms concentrate on data association, namely consistent labeling, where images of the same object are identified across different cameras. However, many questions remain unsolved in developing an efficient camera handoff algorithm. In this paper, we first design a trackability measure to quantitatively evaluate the effectiveness of object tracking, so that camera handoff can be triggered in a timely manner and the camera to which the object of interest is transferred can be selected optimally. Three components are considered: resolution, distance to the edge of the camera's field of view (FOV), and occlusion. In addition, most existing real-time object tracking systems see a decrease in frame rate as the number of tracked objects increases. To address this issue, our handoff algorithm employs an adaptive resource management mechanism that dynamically allocates camera resources to multiple objects with different priorities so that the required minimum frame rate is maintained. Experimental results illustrate that the proposed camera handoff algorithm improves the overall tracking rate by 20% in comparison with the algorithm presented by Khan and Shah.
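A trackability measure combining the three components might look like the following sketch. The abstract does not give the exact formula, so the weighted-sum form, the weights, and the handoff margin are all assumptions made for illustration.

```python
def trackability(resolution, edge_dist, occlusion, w=(0.4, 0.3, 0.3)):
    """Hypothetical trackability score in [0, 1].

    resolution -- object resolution, normalized to [0, 1]
    edge_dist  -- distance to the FOV edge, normalized to [0, 1]
    occlusion  -- occluded fraction of the object, in [0, 1]
    """
    return w[0] * resolution + w[1] * edge_dist + w[2] * (1.0 - occlusion)

def should_handoff(current_score, candidate_scores, margin=0.1):
    """Trigger handoff when another camera scores clearly better (assumed rule)."""
    return max(candidate_scores, default=0.0) > current_score + margin
```

A score of this shape drops as the object shrinks, nears the FOV boundary, or becomes occluded, so the handoff trigger fires before tracking degrades.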

16.
Objective: Multiple object tracking (MOT) in video is an important task in computer vision. Existing work improves either the object detection or the object association stage, but overlooks three inconsistency problems in MOT: the center of the object detection box is inconsistent with the center of the identity feature, object responses are inconsistent across frames, and the similarity measure used during training is inconsistent with that used during testing. To address these inconsistencies, this paper proposes a spatio-temporally consistent multi-object tracking method to improve tracking accuracy. Method: The inconsistencies are corrected along the spatial, temporal, and feature dimensions. For the inconsistency between detection-box center and feature center, each object's ReID (re-identification) feature is extracted at the position offset by the spatial difference between the two centers. For the inter-frame response inconsistency, spatial correlation computes the motion offsets between adjacent frames; the previous frame's object responses are warped with these offsets to obtain inter-frame-consistent response information, which is then used to enhance the object responses. For the train/test similarity inconsistency, a feature-orthogonality loss is proposed that accounts for the pairwise similarity between objects during training. Results: The method is compared with existing methods on three datasets. On MOT17, MOT20, and Hieve, MOTA (multiple object tracking accuracy) reaches 71.2%, 60.2%, and 36.1%, improvements of 1.6%, 3.2%, and 1.1% over the baseline FairMOT algorithm. Compared with most other existing methods, the proposed method achieves a higher MT (mostly tracked) ratio and a lower ML (mostly lost) ratio, i.e., better overall tracking performance. Ablation experiments on MOT17 verify the effectiveness of the combined components, showing that the proposed method markedly alleviates the inconsistency problems in multi-object tracking. Conclusion: The proposed consistency-aware tracker achieves better temporal, spatial, and train/test consistency of features, yielding more accurate multi-object tracking results.
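A feature-orthogonality loss of the kind described could take the following form. The paper's exact loss is not given in the abstract, so this squared-cosine formulation over pairs with different identities is an assumption for illustration.

```python
import numpy as np

def orthogonality_loss(features, ids):
    """Assumed loss: penalize cosine similarity between L2-normalized
    identity features that belong to different objects."""
    f = features / np.linalg.norm(features, axis=1, keepdims=True)
    sim = f @ f.T                              # pairwise cosine similarities
    mask = np.not_equal.outer(ids, ids)        # pairs with different identities
    return float((sim[mask] ** 2).mean())      # push those pairs toward orthogonality
```

Minimizing such a term during training aligns the training-time objective with the test-time similarity comparison between identity embeddings.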

17.
Despite the great progress achieved in 3-D pose tracking in recent years, occlusions and self-occlusions are still an open issue. This is particularly true in silhouette-based tracking, where even visible parts cannot be tracked as long as they do not affect the object silhouette. Multiple cameras or motion priors can overcome this problem; however, multiple cameras or appropriate training data are not always readily available. We propose a framework in which the pose of 3-D models is found by minimising the 2-D projection error through minimisation of an energy function depending on the pose parameters. This framework makes it possible to handle occlusions and self-occlusions by tracking multiple objects and object parts simultaneously. To this end, each part is described by its own image region, each of which is modeled by one probability density function. This allows occlusions to be dealt with explicitly, including self-occlusions between different parts of the same object as well as occlusions between different objects. The results we present for simulations and real-world scenes demonstrate the improvements achieved in monocular and multi-camera settings. These improvements are substantiated by quantitative evaluations, e.g. on the HumanEVA benchmark.

18.
A Fast Multi-Face Tracking Algorithm
Zhang Tao, Cai Canhui. 《计算机应用》 (Journal of Computer Applications), 2009, 29(3): 781-784.
This paper presents a real-time multi-face tracking algorithm based on Mean Shift. An adaptive tracking window improves the Mean Shift algorithm's ability to track targets continuously; a sequential tracking scheme resolves the merging and overlapping of targets that occur during multi-face tracking; and auxiliary cues solve the face-correspondence problem between adjacent frames. To further improve the speed and robustness of the overall algorithm, a Kalman filter is introduced to predict the targets. Experimental results show that the algorithm achieves good real-time performance and tracking quality.

19.
W4 is a real-time visual surveillance system for detecting and tracking multiple people and monitoring their activities in an outdoor environment. It operates on monocular gray-scale video imagery, or on video imagery from an infrared camera. W4 employs a combination of shape analysis and tracking to locate people and their parts (head, hands, feet, torso) and to create models of people's appearance so that they can be tracked through interactions such as occlusions. It can determine whether a foreground region contains multiple people, segment the region into its constituent people, and track them. W4 can also determine whether people are carrying objects, segment those objects from their silhouettes, and construct appearance models for them so they can be identified in subsequent frames. W4 can recognize events between people and objects, such as depositing an object, exchanging bags, or removing an object. It runs at 25 Hz on 320×240 images on a 400 MHz dual-Pentium II PC.

20.
Tracking moving objects in real-time scenes is a challenging research problem in computer vision, owing to continual changes in object features, background, occlusions, and illumination occurring at different instants in the scene. With the objective of tracking visual objects in intricate videos, this paper presents a new color-independent tracking approach whose contributions are threefold. First, the illumination level of the sequences is kept constant using the fast discrete curvelet transform. A Fisher information metric is calculated as a cumulative score by comparing template patches with a reference template at different time frames; this metric quantifies distances between consecutive frame histogram distributions. Then, a novel iterative algorithm called conditionally adaptive multiple template update is proposed to regulate the object templates so as to handle dynamic occlusions effectively. The proposed method is evaluated on a set of extensive, challenging benchmark datasets. Experimental results in terms of Center Location Error (CLE), Tracking Success Score (TSS), and Occlusion Success Score (OSS) show that the proposed method competes well with other relevant state-of-the-art tracking methods.


Copyright©北京勤云科技发展有限公司  京ICP备09084417号