Similar Documents (20 results)
1.
This paper presents an effective method for the detection and tracking of multiple moving objects from a video sequence captured by a moving camera without additional sensors. Moving object detection is relatively difficult for video captured by a moving camera, since camera motion and object motion are mixed. In the proposed method, the feature points in the frames are found and then classified as belonging to foreground or background features. Next, moving object regions are obtained using an integration scheme based on foreground feature points and foreground regions, which are obtained using an image difference scheme. Then, a compensation scheme based on the motion history of the continuous motion contours obtained from three consecutive frames is applied to increase the regions of moving objects. Moving objects are detected using a refinement scheme and a minimum bounding box. Finally, moving object tracking is achieved using a Kalman filter based on the center of gravity of a moving object region in the minimum bounding box. Experimental results show that the proposed method has good performance.
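The final tracking step — a Kalman filter on the center of gravity of the bounding box — can be sketched with a constant-velocity state model. This is a minimal illustration under assumed noise covariances (`q`, `r` are not from the paper), not the authors' implementation:

```python
import numpy as np

def kalman_track(centroids, dt=1.0, q=1e-2, r=1.0):
    """Track a 2-D centroid with a constant-velocity Kalman filter.
    `centroids` is a sequence of (x, y) measurements (bounding-box
    centers of gravity); returns the filtered positions."""
    F = np.array([[1, 0, dt, 0],
                  [0, 1, 0, dt],
                  [0, 0, 1, 0],
                  [0, 0, 0, 1]], float)   # state transition (x, y, vx, vy)
    H = np.array([[1, 0, 0, 0],
                  [0, 1, 0, 0]], float)   # only position is observed
    Q = q * np.eye(4)                     # assumed process noise
    R = r * np.eye(2)                     # assumed measurement noise
    x = np.array([*centroids[0], 0.0, 0.0])
    P = np.eye(4)
    out = []
    for z in centroids:
        # predict
        x = F @ x
        P = F @ P @ F.T + Q
        # update with the measured centroid
        S = H @ P @ H.T + R
        K = P @ H.T @ np.linalg.inv(S)
        x = x + K @ (np.asarray(z, float) - H @ x)
        P = (np.eye(4) - K @ H) @ P
        out.append(x[:2].copy())
    return np.array(out)
```

Because the constant-velocity model matches a target moving at constant speed, the estimate converges to the true trajectory after a few frames.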

2.
This paper integrates fully automatic video object segmentation and tracking including detection and assignment of uncovered regions in a 2-D mesh-based framework. Particular contributions of this work are (i) a novel video object segmentation method that is posed as a constrained maximum contrast path search problem along the edges of a 2-D triangular mesh, and (ii) a 2-D mesh-based uncovered region detection method along the object boundary as well as within the object. At the first frame, an optimal number of feature points are selected as nodes of a 2-D content-based mesh. These points are classified as moving (foreground) and stationary nodes based on multi-frame node motion analysis, yielding a coarse estimate of the foreground object boundary. Color differences across triangles near the coarse boundary are employed for a maximum contrast path search along the edges of the 2-D mesh to refine the boundary of the video object. Next, we propagate the refined boundary to the subsequent frame by using motion vectors of the node points to form the coarse boundary at the next frame. We detect occluded regions by using motion-compensated frame differences and range filtered edge maps. The boundaries of detected uncovered regions are then refined by using the search procedure. These regions are either appended to the foreground object or tracked as new objects. The segmentation procedure is re-initialized when unreliable motion vectors exceed a certain number. The proposed scheme is demonstrated on several video sequences.

3.
In this paper, we present an automatic foreground object detection method for videos captured by freely moving cameras. While we focus on extracting a single foreground object of interest throughout a video sequence, our approach requires neither training data nor user interaction. Based on SIFT correspondences across video frames, we construct robust SIFT trajectories in terms of the calculated foreground feature point probability. Our foreground feature point probability is able to determine candidate foreground feature points in each frame, without user interaction such as parameter or threshold tuning. Furthermore, we propose a probabilistic consensus foreground object template (CFOT), which is directly applied to the input video for moving object detection via template matching. Our CFOT can be used to detect the foreground object in videos captured by a fast moving camera, even if the contrast between the foreground and background regions is low. Moreover, our proposed method can be generalized to foreground object detection in dynamic backgrounds, and is robust to viewpoint changes across video frames. The contribution of this paper is threefold: (1) we provide a robust decision process to detect the foreground object of interest in videos with contrast and viewpoint variations; (2) our proposed method builds longer SIFT trajectories, which are shown to be robust and effective for object detection tasks; and (3) the construction of our CFOT is not sensitive to the initial estimation of the foreground region of interest, while its use can achieve excellent foreground object detection results on real-world video data.
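Applying a foreground template to an input frame by template matching, as the CFOT stage does, can be illustrated with plain normalized cross-correlation. This brute-force grayscale sketch is an assumption for illustration, not the paper's probabilistic consensus template:

```python
import numpy as np

def ncc_match(frame, template):
    """Slide `template` over `frame` and return the (row, col) of the
    best normalized cross-correlation score, plus the score itself."""
    th, tw = template.shape
    t = template - template.mean()
    tn = np.sqrt((t ** 2).sum())
    best, best_pos = -np.inf, (0, 0)
    H, W = frame.shape
    for i in range(H - th + 1):
        for j in range(W - tw + 1):
            w = frame[i:i + th, j:j + tw]
            wz = w - w.mean()
            denom = np.sqrt((wz ** 2).sum()) * tn
            if denom == 0:
                continue                      # skip constant windows
            score = (wz * t).sum() / denom
            if score > best:
                best, best_pos = score, (i, j)
    return best_pos, best
```

NCC is invariant to linear brightness changes, which loosely mirrors the paper's robustness to low foreground/background contrast.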

4.
An Automatic Video Object Segmentation Algorithm Based on Spatio-Temporal Information
A method is proposed for separating moving objects from the background in generic video sequences. First, global motion estimation and compensation and scene-change detection are applied as preprocessing. Then, a fourth-order statistical significance test separates foreground from background in the frame-difference image, and a connected-component labeling algorithm followed by a sequence of morphological opening and closing operations yields a refined binary mask. Next, a symmetric-difference technique eliminates covered/uncovered background and part of the noise. Finally, through template matching and updating, the method recovers not only fast-changing objects but also parts of a video object that temporarily stop moving.
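The morphological opening/closing cleanup of the binary mask can be sketched with 4-neighbourhood binary operators. This is a minimal stand-in for the paper's operator sequence, using an assumed 3x3 cross structuring element:

```python
import numpy as np

def dilate(mask):
    """Binary dilation with a 3x3 cross (4-neighbourhood)."""
    out = mask.copy()
    out[1:, :] |= mask[:-1, :]
    out[:-1, :] |= mask[1:, :]
    out[:, 1:] |= mask[:, :-1]
    out[:, :-1] |= mask[:, 1:]
    return out

def erode(mask):
    """Binary erosion as the dual of dilation (out-of-frame pixels
    are treated as foreground)."""
    return ~dilate(~mask)

def open_close(mask):
    """Opening (erode, then dilate) removes isolated noise pixels;
    closing (dilate, then erode) fills small holes in what remains."""
    opened = dilate(erode(mask))
    return erode(dilate(opened))
```

Opening discards blobs smaller than the structuring element, which is why isolated frame-difference noise disappears while compact object regions survive.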

5.
A novel layered stereoscopic moving-object segmentation method is proposed in this paper, exploiting both motion and depth information to extract moving objects at each depth layer with high accuracy along their shape boundaries. By taking higher-order statistics on two frame-difference fields across three adjacent frames, the computed motion information is used to conduct change detection and generate, at each view, one motion mask containing all the moving objects from all the depth layers involved. It is highly desirable, and challenging, to further differentiate them according to their residing depth layer to achieve layered segmentation. For that, multiple depth-layer masks are generated using our proposed disparity estimation method, one for each depth layer. By intersecting the motion mask with the depth-layer mask of any given layer of interest, the moving objects associated with that layer are then extracted. All the above processes are performed repeatedly along the video sequence with a sliding window of three frames at a time. For demonstration, only the foreground and background layers are considered in this paper, while the proposed method is generic and can be straightforwardly extended to more layers once the corresponding depth-layer masks are available. Experimental results show that the proposed layered moving-object segmentation method is able to segment the foreground and background moving objects separately, with high accuracy along their shape boundaries. In addition, the required computational load is fairly low, since the design methodology is to generate masks and perform intersections to extract the moving objects at each depth layer.
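The three-frame change-detection idea — a pixel counts as moving only if it changes in both frame-difference fields — can be sketched as a classic double-difference mask. Plain absolute-difference thresholding stands in here for the paper's higher-order statistics test; the threshold is an assumed parameter:

```python
import numpy as np

def double_difference(f1, f2, f3, thresh):
    """Motion mask for the middle of three adjacent frames: a pixel is
    flagged only if it differs from BOTH the previous and the next
    frame, which suppresses uncovered-background ghosts."""
    d1 = np.abs(f2.astype(float) - f1)
    d2 = np.abs(f3.astype(float) - f2)
    return (d1 > thresh) & (d2 > thresh)
```

Intersecting this motion mask with a per-layer depth mask (a plain `&` of boolean arrays) then yields the layered segmentation described above.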

6.
In this paper, we propose an adaptive and accurate moving cast shadow detection method employing online sub-scene shadow modeling and object inner-edge analysis for static-camera video surveillance applications. To describe shadow appearance more accurately, the proposed method builds adaptive online shadow models for sub-scenes with different conditions of irradiance and reflectance. The online shadow models are learned by fitting Gaussian functions to the significant peaks of accumulated histograms, which are calculated from the Hue, Saturation, and Intensity (HSI) differences of moving objects between background and foreground. Additionally, object inner-edge analysis is adopted to reject camouflages, i.e., misclassified foreground regions that are highly similar to shadows. Finally, the main shadow regions are expanded to recover misclassified shadow pixels based on local color constancy. The proposed algorithm can adaptively handle shadow appearance changes and camouflages without prior information about illumination or scenario. Experimental results demonstrate that the proposed method outperforms state-of-the-art methods.
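The Gaussian-peak shadow model can be sketched on the intensity channel alone: fit one Gaussian to the dominant mode of the foreground/background intensity ratio of candidate shadow pixels. A single global mode stands in here for the paper's per-sub-scene histogram peaks, and the candidate band is an assumed parameter:

```python
import numpy as np

def shadow_mask(bg_i, fg_i, k=2.5):
    """Flag shadow pixels by fitting one Gaussian (mean, std) to the
    dominant mode of the foreground/background intensity ratio.
    Shadows darken the background, so candidates sit in a band below 1;
    pixels within k sigma of the fitted mode are kept as shadow."""
    ratio = fg_i.astype(float) / np.maximum(bg_i.astype(float), 1e-6)
    cand = (ratio > 0.2) & (ratio < 0.95)   # assumed darkening band
    if not cand.any():
        return np.zeros_like(cand)
    mu = ratio[cand].mean()
    sigma = ratio[cand].std() + 1e-6
    return cand & (np.abs(ratio - mu) < k * sigma)
```

A full version would fit one such Gaussian per sub-scene and add the hue/saturation difference terms the abstract mentions.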

7.
We present an unsupervised motion-based object segmentation algorithm for video sequences with a moving camera, employing bidirectional inter-frame change detection. For every frame, two error frames are generated using motion compensation; they are combined, and a thresholding-based segmentation algorithm is applied. We employ a simple and effective error fusion scheme and consider spatial error localization in the thresholding step. We find the optimal weights for the weighted-mean thresholding algorithm that enable unsupervised, robust moving object segmentation. Furthermore, a post-processing step improving the temporal consistency of the segmentation masks is incorporated, achieving improved performance compared with previously proposed methods. The experimental evaluation and comparison with other methods demonstrate the validity of the proposed method.
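The bidirectional fusion and thresholding steps can be sketched as follows: fuse the forward and backward compensation-error frames with a pixel-wise minimum (a pixel must fail compensation in both directions), then threshold at a weighted mean of the fused error. The fixed weight `w` is an assumed constant, not the optimal weights derived in the paper:

```python
import numpy as np

def segment_bidirectional(err_fwd, err_bwd, w=2.0):
    """Fuse two motion-compensation error frames (minimum suppresses
    uncovered regions, which fail in only one direction) and threshold
    at w times the mean fused error."""
    err = np.minimum(err_fwd, err_bwd)
    return err > w * err.mean()
```

The minimum rule is why genuinely moving pixels survive while occlusion artifacts, which have a low error in one of the two directions, are discarded.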

8.
Unsupervised video object segmentation is a crucial application in video analysis when there is no prior information about the objects. It becomes tremendously challenging when multiple objects occur and interact in a video clip. In this paper, a novel unsupervised video object segmentation approach via distractor-aware online adaptation (DOA) is proposed. DOA models spatiotemporal consistency in video sequences by capturing background dependencies from adjacent frames. Instance proposals are generated by an instance segmentation network for each frame and grouped by motion information as positives or hard negatives. To adopt high-quality hard negatives, a block matching algorithm is then applied to preceding frames to track the associated hard negatives. General negatives are also introduced when there are no hard negatives in the sequence; the experimental results demonstrate that these two kinds of negatives are complementary. Finally, we conduct DOA using positive, negative, and hard negative masks to update the foreground and background segmentation. The proposed approach achieves state-of-the-art results on two benchmark datasets, DAVIS 2016 and the Freiburg-Berkeley motion segmentation dataset (FBMS-59).

9.
Based on a sparse motion-vector field, a method is proposed for detecting moving-object regions against a dynamic background. Global motion parameters are estimated and global motion is compensated according to an analysis of the motion-vector field, correcting the background in dynamic scenes. Using a max-tree data structure, connected regions with roughly consistent motion are represented hierarchically by their motion-vector compensation errors, giving an initial classification of moving regions. Exploiting the spatial connectivity and motion consistency of moving objects, a region-similarity criterion is chosen for region merging and filtering, so that connected regions with similar motion are merged and moving-object regions are detected. The detected moving-object regions are in turn fed back into the global motion estimation as motion-vector outliers; alternating global motion estimation with moving-region detection not only speeds up both computations but also improves their accuracy. Experimental results show that the algorithm compensates the global motion of a sequence well and effectively detects locally moving object regions.
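The global-motion estimation step can be sketched as a least-squares fit of a 6-parameter affine model to the sparse motion-vector field; the residual after compensation then marks the outlier vectors that the method feeds back as moving-object candidates. A single least-squares pass stands in here for the paper's alternating estimation loop:

```python
import numpy as np

def fit_global_affine(pts, vecs):
    """Least-squares fit of an affine global-motion model v = A p + b
    to a sparse motion-vector field (pts: Nx2 positions, vecs: Nx2
    motion vectors); returns the 3x2 parameter matrix [A.T; b]."""
    X = np.hstack([pts, np.ones((len(pts), 1))])
    params, *_ = np.linalg.lstsq(X, vecs, rcond=None)
    return params

def compensation_residual(pts, vecs, params):
    """Per-vector error after global-motion compensation; large
    residuals mark motion-vector outliers (locally moving regions)."""
    X = np.hstack([pts, np.ones((len(pts), 1))])
    return np.linalg.norm(vecs - X @ params, axis=1)
```

In the full method the flagged outliers would be removed and the affine fit repeated, which is exactly the alternation the abstract describes.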

10.
Moving object detection is one of the essential tasks in surveillance video analysis. Dynamic backgrounds in natural scenes, often composed of waving trees, rippling water, or fountains, greatly interfere with the detection of moving objects by introducing noise. In this paper, a method simulating heat conduction is proposed to extract moving objects from video sequences with dynamic backgrounds. Based on the visual background extractor (ViBe) with an adaptive distance threshold, we design a temperature field over the generated mask image to distinguish between moving objects and the noise caused by the dynamic background. In the temperature field, a brighter pixel is associated with more energy and transfers a certain amount of energy to its darker neighboring pixels. Through multiple steps of energy transfer, the noise regions lose more energy, becoming darker than the detected moving objects. After heat conduction, a K-means algorithm with customized initial cluster centers separates the moving objects from the background. We test our method on many videos with dynamic backgrounds from public datasets. The results show that the proposed method is feasible and effective for moving object detection in dynamic-background sequences.
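The heat-conduction step can be sketched with a linear diffusion stand-in: each pixel repeatedly mixes its energy with the mean of its 4 neighbours, so thin noise blobs bleed energy into the background faster than compact object regions. The step count and mixing rate are assumed parameters, and `np.roll` wraps at the image border (acceptable for interior objects in a sketch):

```python
import numpy as np

def diffuse(energy, steps=15, alpha=0.5):
    """Linear heat-conduction sketch on a mask-derived energy map:
    e <- (1 - alpha) * e + alpha * mean(4-neighbours), repeated."""
    e = energy.astype(float).copy()
    for _ in range(steps):
        nb = (np.roll(e, 1, 0) + np.roll(e, -1, 0) +
              np.roll(e, 1, 1) + np.roll(e, -1, 1)) / 4.0
        e = (1 - alpha) * e + alpha * nb
    return e
```

After diffusion, a two-cluster K-means (or a single threshold) on the energy values separates the still-bright object core from the faded noise, mirroring the paper's final clustering step.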

11.
A Joint Spatio-Temporal Detection Technique for Moving Video Objects
A joint spatio-temporal detection algorithm is proposed for moving objects in video with global motion. To overcome the drawback of the subjective fixed threshold used in traditional temporal segmentation, an automatic threshold is obtained by adaptively learning the noise parameters of the difference image, and morphological operations yield a refined temporal segmentation mask. To address the shortcomings of traditional watershed spatial segmentation, an improved watershed algorithm based on human visual characteristics is proposed, including image denoising by morphological reconstruction filtering, a morphological gradient transform, and a nonlinear perceptual gray-level transform based on Weber's law, which effectively alleviates over-segmentation. The temporal and spatial segmentation results are then fused to obtain complete moving objects. Simulation results show that the algorithm segments moving video objects quickly and accurately.
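The morphological gradient transform feeding the watershed can be sketched as a 3x3 local maximum minus local minimum (dilation minus erosion) on the grayscale image; the 3x3 window is an assumed structuring element:

```python
import numpy as np

def morph_gradient(img):
    """Grayscale morphological gradient: 3x3 local max minus 3x3 local
    min (dilation minus erosion), the edge-strength map typically used
    as input to a watershed transform."""
    H, W = img.shape
    pad = np.pad(img, 1, mode='edge')
    win = np.stack([pad[i:i + H, j:j + W]
                    for i in range(3) for j in range(3)])
    return win.max(axis=0) - win.min(axis=0)
```

Flooding a watershed from the minima of this map segments the image along its strongest intensity transitions; the reconstruction filtering and Weber-law transform of the paper would be applied before this step to suppress spurious minima.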

12.
Automatic Segmentation and Tracking of Moving Objects
This paper proposes an algorithm for automatically segmenting moving objects in video sequences. The algorithm analyzes local image variation in the LUV color space and uses motion information to separate objects from the background. First, based on local variation, a graph-based method partitions the image into regions. Moving regions are then detected by measuring the deviation between the synthesized global motion and the estimated local motion, and each moving region is tracked into the next frame with a region-based affine motion model. To improve the spatio-temporal continuity of the extracted objects, a Hausdorff tracker tracks the binary model of each object. Evaluation on several typical MPEG-4 test sequences demonstrates the good performance of the algorithm.

13.
Wang Hui, Sun Hong. 《信号处理》 (Journal of Signal Processing), 2016, 32(12): 1425-1434
To address the sensitivity of matrix-decomposition-based moving-object detection to small background perturbations and camera shake in natural scenes, a low-rank sparse matrix decomposition algorithm using the multiscale product is proposed. The algorithm assumes that, in a static-background video sequence, the background of every frame lies approximately in the same low-rank subspace, while the foreground of each frame is the residual deviating from that subspace. First, the image sequence is filtered and affine-aligned as preprocessing to form the observation data matrix; this matrix is then decomposed into a low-rank background part for the sequence and a sparse foreground part for each frame; finally, object edges in the sparse foreground are detected with wavelet-transform modulus maxima and the multiscale product, and morphological processing yields accurate moving objects. Experimental results show that the detected moving objects are clear and complete, and that the algorithm handles moving object detection effectively under illumination changes, slight camera shake, and small local background variations.
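The low-rank + sparse decomposition can be sketched with a GoDec-style alternation: the low-rank part is the best rank-r approximation of the data minus the current sparse part, and the sparse part keeps only the large residuals. This simple alternation is a stand-in for the paper's decomposition algorithm; rank, threshold, and iteration count are assumed parameters:

```python
import numpy as np

def lowrank_sparse(D, rank=1, thresh=1.0, iters=50):
    """Alternate between (i) L = best rank-`rank` SVD approximation of
    D - S (the background subspace) and (ii) S = entries of D - L with
    magnitude above `thresh` (the sparse foreground)."""
    S = np.zeros_like(D, dtype=float)
    for _ in range(iters):
        U, s, Vt = np.linalg.svd(D - S, full_matrices=False)
        L = (U[:, :rank] * s[:rank]) @ Vt[:rank]
        R = D - L
        S = np.where(np.abs(R) > thresh, R, 0.0)
    return L, S
```

In the video setting each column of `D` would be one vectorized frame, so `L` collects the shared background and `S` the per-frame foreground pixels.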

14.
Features of images are often used for cast shadow removal. A technique based on using only a single feature cannot universally distinguish an object pixel from a shadow pixel of a video frame. On the other hand, the use of multiple features increases the computational cost of a shadow removal technique considerably. In this paper, an efficient yet simple method for cast shadow removal from video sequences with static background using multiple features is developed. The basic idea of the proposed technique is that a simultaneous use of a small number of multiple features, if chosen judiciously, can reduce the similarity between object and shadow pixels without an excessive increase in the computational cost. Using the features of gray levels, color composition, and gradients of foreground and background pixels, a method is devised to create a complete object mask. First, based on each of the three features, three individual shadow masks are constructed, from which three corresponding object masks are obtained through a simple subtraction operation. The object masks are then merged together to generate a single object mask. Each of the three shadow masks is created so as to cover as many shadow pixels as possible, even if it results in falsely including in them some of the object pixels. As a result, the subsequent object masks may lose some of these pixels. However, the object pixels missed by one of the object masks should be able to be recovered by at least one of the other two, since they are generated based on features complementary to the one used to construct the first one. The final object mask obtained through a logical OR operation of the three individual masks can, therefore, be expected to include most of the object pixels. The proposed method is applied to a number of video sequences. The simulation results demonstrate that the proposed method provides a mechanism for shadow removal that is superior to some of the recently proposed techniques without imparting an excessive computational cost.
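The mask construction described above — subtract each per-feature shadow mask from the foreground mask, then OR the resulting object masks — reduces to a few boolean operations:

```python
import numpy as np

def object_mask(fg, shadow_masks):
    """Build one object mask per feature by subtracting that feature's
    shadow mask from the foreground mask, then merge them with a
    logical OR so pixels wrongly dropped by one feature are recovered
    by another."""
    per_feature = [fg & ~s for s in shadow_masks]
    out = per_feature[0].copy()
    for m in per_feature[1:]:
        out |= m
    return out
```

Only pixels flagged as shadow by every feature end up excluded, which is exactly why over-inclusive individual shadow masks are safe in this scheme.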

15.
To segment moving targets accurately in infrared video, a new spatio-temporal segmentation method based on boundary evaluation is proposed. First, seed points of the most significant moving targets are extracted by exploiting the hollow effect of moving targets in the temporal difference image. The emphasis is on spatial segmentation of the moving target: using the relation between the whole seed region and its parts, region growing from the extracted seeds produces moving-target segmentation masks under different growing thresholds. To determine the optimal growing threshold, a boundary evaluation criterion for infrared target masks that requires no prior knowledge is proposed; a segment-evaluate-resegment-reevaluate iterative loop with a coarse-to-fine search finds the optimal growing threshold and, with it, the optimal moving-target mask. Experiments show that the proposed method accurately segments moving-target regions in infrared video, with good results and robust performance.
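The region-growing core under a given threshold can be sketched as a breadth-first flood from the seed, admitting 4-neighbours whose intensity is within the growing tolerance of the seed value. The paper's boundary-evaluation loop would call this repeatedly with different `tol` values; the tolerance rule used here is an assumed simplification:

```python
from collections import deque
import numpy as np

def region_grow(img, seed, tol):
    """Grow a region from `seed` by BFS, adding 4-neighbours whose
    intensity lies within `tol` of the seed intensity."""
    H, W = img.shape
    mask = np.zeros((H, W), bool)
    ref = float(img[seed])
    q = deque([seed])
    mask[seed] = True
    while q:
        i, j = q.popleft()
        for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            ni, nj = i + di, j + dj
            if (0 <= ni < H and 0 <= nj < W and not mask[ni, nj]
                    and abs(float(img[ni, nj]) - ref) <= tol):
                mask[ni, nj] = True
                q.append((ni, nj))
    return mask
```

Sweeping `tol` from coarse to fine and scoring each resulting mask's boundary reproduces the segment-evaluate iteration the abstract describes.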

16.
A new spatio-temporal video segmentation algorithm is proposed. The original image is first labeled into distinct regions; then, using the object motion information obtained from inter-frame differences as a criterion, these regions are classified as foreground object or background, achieving object segmentation. In particular, a new watershed-based region labeling technique is adopted in the labeling step. Experimental results on standard image sequences show that the algorithm segments video objects fairly accurately.

17.
A spatio-temporal video object segmentation algorithm is proposed that combines 2-D mesh-based motion analysis with an automatic spatial segmentation strategy based on improved morphological filtering. The algorithm first applies higher-order statistics to the 2-D mesh representation of the video frames for motion analysis, quickly obtaining the foreground object region; post-processing then yields an effective foreground motion-detection mask. Next, an improved watershed segmentation strategy, combining an alternating sequential reconstruction filter with an adaptive threshold selection algorithm, extracts accurate foreground object edges. Finally, a region-based spatio-temporal fusion algorithm combines the temporal and spatial segmentation results to extract video objects with fine edges. Experimental results show that the algorithm combines the strengths of several methods and achieves good subjective and objective segmentation quality.

18.
A Visual Moving-Target Tracking Algorithm with Two-Layer Feature Optimization
Moving-target tracking in visual surveillance is easily disturbed by occlusion, fast target motion, and appearance changes, and single-layer features cannot handle these problems effectively. A visual tracking algorithm that jointly optimizes pixel-level and region-level features is therefore proposed. First, at the pixel level, the posterior probabilities of target and background color features give a preliminary target/background discrimination. The candidate region is then divided into superpixels and, guided by the pixel-level decision, a voting model statistically analyzes target and background information within each superpixel to obtain an accurate target position distribution. Finally, mean-shift iterations locate the target precisely; the two-layer discrimination results are used to detect occlusion during tracking, while the target and background models are updated dynamically to adapt to appearance and scene changes. Comparative experiments with typical algorithms show that the proposed method copes effectively with occlusion and fast motion and is suitable for real-time moving-target tracking in complex scenes.
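The final mean-shift search can be sketched as centroid iterations on a target-confidence map (for instance, the pixel-level posterior from the first stage): the window repeatedly moves to the centroid of the confidence mass it covers until the shift vanishes. The window size and the confidence map are assumptions for illustration:

```python
import numpy as np

def mean_shift(conf, start, win=4, iters=30):
    """Mean-shift iterations on a confidence map: move a
    (2*win+1)^2 window to the centroid of the confidence it covers,
    repeating until the integer position stops changing."""
    cy, cx = start
    H, W = conf.shape
    for _ in range(iters):
        y0, y1 = max(0, cy - win), min(H, cy + win + 1)
        x0, x1 = max(0, cx - win), min(W, cx + win + 1)
        patch = conf[y0:y1, x0:x1]
        total = patch.sum()
        if total == 0:
            break                       # no mass under the window
        ys, xs = np.mgrid[y0:y1, x0:x1]
        ny = int(round((ys * patch).sum() / total))
        nx = int(round((xs * patch).sum() / total))
        if (ny, nx) == (cy, cx):
            break                       # converged
        cy, cx = ny, nx
    return cy, cx
```

Each shift climbs toward the local mode of the confidence map, which is why the iteration settles on the target center rather than scanning the whole frame.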

19.
To overcome the limitations of existing object segmentation algorithms for dynamic backgrounds, a video object segmentation algorithm fusing motion cues and color information is proposed. First, a new motion-trajectory classification method exploits the low-rank property of background motion, combined with an accumulation-and-confirmation strategy, to obtain accurate trajectory classification results. Then an over-segmentation algorithm produces the superpixel set of the video sequence, and the color similarity between superpixels is computed. Finally, a Markov random field model is built with superpixels as nodes; the trajectory classification results and the inter-superpixel color information are modeled jointly in the energy function of the Markov random field, and minimizing the energy yields the optimal label for each superpixel. Tests and comparisons on several publicly released video sequences show that the method accurately segments moving objects against dynamic backgrounds, with higher segmentation accuracy than traditional methods.

20.
Hu Liu, Xie Mei. 《红外技术》 (Infrared Technology), 2006, 28(5): 271-274
To address the difficult problem of segmenting moving infrared vehicles against complex outdoor backgrounds, a joint spatio-temporal moving-target segmentation algorithm is proposed. The algorithm first extracts an initial target by adaptive change detection, then applies a watershed transform inside the bounding rectangle of the initial target, and finally obtains an accurate target through region merging based on the projections of the initial target template and of the motion. Experimental results show that the algorithm quickly and accurately segments targets from complex backgrounds.

