Similar Documents
20 similar documents found.
1.
To address the problem that the traditional mean-shift tracking algorithm, which relies on a single color feature space, struggles to track targets accurately against complex backgrounds, a mean-shift tracking algorithm combined with ORB feature matching is proposed. Building on mean-shift, the algorithm uses an improved ORB feature-matching step to correct the target tracking window and update the target feature template in real time. Tracking failure is detected by computing the Euclidean distance between the target centers in two consecutive frames together with the Bhattacharyya distance between color templates; when tracking fails, the target template is left unchanged and the search continues for the target in the next frame. Experimental results show that, compared with the mean-shift algorithm and improved algorithms based on other similar features, the proposed algorithm improves tracking accuracy against complex backgrounds while meeting real-time requirements.
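The failure test described in the abstract above can be sketched as follows. This is a minimal illustration, not the paper's implementation; the two thresholds (`max_shift`, `max_color_dist`) are arbitrary assumptions.

```python
import math

def bhattacharyya(p, q):
    """Bhattacharyya-coefficient-based distance between two normalized histograms:
    0 for identical distributions, 1 for disjoint ones."""
    bc = sum(math.sqrt(a * b) for a, b in zip(p, q))
    return math.sqrt(max(0.0, 1.0 - bc))

def tracking_failed(prev_center, cur_center, hist_model, hist_candidate,
                    max_shift=30.0, max_color_dist=0.5):
    """Declare failure if the center jumps too far between frames
    or the color-template match degrades too much."""
    shift = math.dist(prev_center, cur_center)
    color_dist = bhattacharyya(hist_model, hist_candidate)
    return shift > max_shift or color_dist > max_color_dist
```

On failure the template is kept frozen, exactly as the abstract describes, so a momentary occlusion does not corrupt the model.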

2.
Anomalous behavior detection based on motion direction   (Cited by 7, 0 self-citations)
胡芝兰, 江帆, 王贵锦, 林行刚, 严洪. 《自动化学报》(Acta Automatica Sinica), 2008, 34(11): 1348-1357
An anomalous-behavior detection method based on motion direction is proposed. Since different behaviors exhibit different regularities in motion direction, the method describes actions with block motion directions and applies a support vector machine (SVM) to detect anomalous behavior in real-time surveillance video. To reduce the influence of noisy motion while preserving foreground targets with small movements, a background edge model is first used to decide, for each video frame, whether it is a foreground frame (a frame in which a target appears) before behavior description. For behavior description, the block motion directions of all foreground frames in a video segment are extracted, and a normalized histogram over these directions yields the segment's behavior feature. Experiments in corridors and other public scenes show that the method effectively detects complex behaviors of single and multiple persons, is robust to changes in target size during motion, illumination changes, and noise, has low computational complexity, and supports real-time surveillance.
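The per-segment behavior feature, a normalized histogram over block motion directions, can be sketched as follows (a minimal illustration; the bin count and the decision to skip static blocks are assumptions):

```python
import math

def direction_histogram(motion_vectors, bins=8):
    """Normalized histogram of block motion directions: the behavior feature
    fed to the SVM in the method described above."""
    hist = [0] * bins
    for dx, dy in motion_vectors:
        if dx == 0 and dy == 0:
            continue  # static block: no direction to accumulate
        angle = math.atan2(dy, dx) % (2 * math.pi)       # direction in [0, 2*pi)
        hist[min(int(angle / (2 * math.pi) * bins), bins - 1)] += 1
    total = sum(hist)
    return [h / total for h in hist] if total else hist
```

One histogram per video segment, accumulated over all its foreground frames, gives a fixed-length vector regardless of segment duration.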

3.
To extract complete foreground regions from video containing a dynamic background or non-translational foreground motion, a video segmentation algorithm is proposed. First, the variation of each individual pixel over time is treated as a discrete-time signal, and Gabor filtering along the time axis is used to analyze the temporal information, coarsely separating the video into foreground and background. Next, the mean-shift algorithm performs color clustering on foreground and background, analyzing spatial color relationships to build a global color model and a local color model. Finally, a double-labeling scheme extracts the video foreground. The algorithm thus considers both temporal and spatial information. Tests on several video datasets show that it significantly improves the accuracy of foreground-region extraction, especially for videos with dynamically changing backgrounds or non-translational foreground motion.
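Treating a pixel's intensity over time as a 1-D signal, the temporal Gabor analysis can be sketched as below. This is a pure-Python illustration with arbitrary kernel parameters; the point is that a pixel whose intensity oscillates (foreground activity) responds far more strongly than a static background pixel.

```python
import math

def gabor_kernel_1d(sigma=2.0, freq=0.25, radius=6):
    """Real part of a 1-D Gabor filter, applied along the time axis."""
    return [math.exp(-(t * t) / (2 * sigma * sigma)) * math.cos(2 * math.pi * freq * t)
            for t in range(-radius, radius + 1)]

def filter_signal(signal, kernel):
    """'Same'-size convolution of a pixel's intensity-over-time signal."""
    r = len(kernel) // 2
    out = []
    for i in range(len(signal)):
        acc = 0.0
        for k, w in enumerate(kernel):
            j = i + k - r
            if 0 <= j < len(signal):
                acc += w * signal[j]
        out.append(acc)
    return out
```

Thresholding the filter response magnitude per pixel yields the coarse foreground/background split that the later color-clustering stage refines.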

4.
An OT-GAV model is proposed for automatically detecting and tracking non-rigid objects in video with global motion. The model first computes inter-frame region matches with an RDM algorithm based on region correlation, refining the matches with Q-learning and the Kolmogorov-Smirnov statistic to obtain relatively accurate region motion vectors. It then detects and merges foreground object regions step by step, exploiting the differences in motion patterns between foreground and background, the consistency of regional dynamic texture, and the preservation of region integrity as an object moves. Experiments show that the model and its algorithms can automatically detect foreground objects of interest in both indoor and outdoor environments, recover fairly accurate edge information, and track effectively. The model also resolves the "hole" problem in object tracking.

5.
A robust block-matching motion estimation algorithm is described in detail. Compared with existing block-matching motion estimation algorithms: first, color information is introduced to improve estimation accuracy; second, adaptive strategies are applied in a broader sense to reduce computation while preserving robustness; third, a proposed prediction-correction composite search fully exploits global information about object motion, overcoming the drawbacks of the three-step search and the full search while combining their strengths, thereby improving search efficiency and matching accuracy. Experimental results show that the algorithm is noise-resistant, estimates motion accurately, and is computationally efficient.
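The three-step search that the abstract contrasts with full search can be sketched as follows. This is the generic textbook version on grayscale frames (2-D lists), not the paper's color-aware predictive variant.

```python
def sad(ref, cur, rx, ry, cx, cy, n):
    """Sum of absolute differences between the n*n block at (rx, ry) in ref
    and the n*n block at (cx, cy) in cur."""
    return sum(abs(ref[ry + j][rx + i] - cur[cy + j][cx + i])
               for j in range(n) for i in range(n))

def three_step_search(ref, cur, bx, by, n=4, step=4):
    """Classic three-step search: start at zero displacement and halve the
    step each round, keeping the best candidate so far."""
    best = (0, 0)
    h, w = len(cur), len(cur[0])
    while step >= 1:
        cands = [(best[0] + dx * step, best[1] + dy * step)
                 for dy in (-1, 0, 1) for dx in (-1, 0, 1)]
        cands = [(dx, dy) for dx, dy in cands
                 if 0 <= bx + dx <= w - n and 0 <= by + dy <= h - n]
        best = min(cands, key=lambda d: sad(ref, cur, bx, by, bx + d[0], by + d[1], n))
        step //= 2
    return best
```

Full search evaluates every displacement in the window; three-step search evaluates at most 9 candidates per round, which is the efficiency gap the paper's composite search tries to close without three-step's risk of local minima.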

6.
Mean shift target tracking based on adaptive multi-feature fusion   (Cited by 3, 0 self-citations)
The classic mean shift tracking algorithm is simple and fast and tracks reasonably well, but it describes the target with a single feature, making it vulnerable to interference from similar targets and the background, and hence not very robust. To address this shortcoming, a multi-feature-fusion mean shift target localization formula is derived. To adapt to changes in the target and background during tracking, a separability criterion on probability distributions is proposed to dynamically evaluate each feature's ability to distinguish target from background, and the feature fusion weights are computed adaptively. On these two bases, the mean shift tracking algorithm is improved into a multi-feature-fusion mean shift tracking algorithm. Experimental results show that the proposed algorithm resists interference better and tracks more accurately than the classic mean shift tracker.
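The separability-based weighting can be sketched as below. The paper does not spell out its exact separability criterion here, so a Bhattacharyya-coefficient-based measure stands in for it; this is an illustrative assumption, not the paper's formula.

```python
import math

def separability(target_hist, background_hist):
    """Distance between target and background distributions for one feature;
    larger means the feature separates target from background better."""
    bc = sum(math.sqrt(p * q) for p, q in zip(target_hist, background_hist))
    return 1.0 - bc  # 0: identical distributions, 1: disjoint

def fusion_weights(target_hists, background_hists):
    """Adaptive per-feature fusion weights, proportional to each feature's
    current separability; recomputed every frame as target/background change."""
    seps = [separability(t, b) for t, b in zip(target_hists, background_hists)]
    total = sum(seps)
    n = len(seps)
    return [s / total for s in seps] if total > 0 else [1.0 / n] * n
```

A feature whose target and background histograms coincide gets weight zero, so it stops influencing the fused mean shift localization until it becomes discriminative again.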

7.
A robust new multi-feature-fusion target tracking algorithm   (Cited by 3, 0 self-citations)
Relying on a single target feature for tracking is a major reason most tracking algorithms lack robustness. A new multi-feature-fusion target tracking algorithm is proposed that describes the target's color, texture, edge, and motion features uniformly with histogram models, reducing the algorithm's sensitivity to target deformation and partial occlusion. All feature observations are probabilistically fused within an auxiliary particle filter framework to sharpen the peak of the posterior state distribution at the target's true state, effectively suppressing interference from complex backgrounds, and an effective method for computing the fusion coefficients is given that makes the fused result more accurate and reliable. Experimental results show that the algorithm handles both rigid and non-rigid targets, clearly outperforms single-feature trackers, and remains highly robust against complex backgrounds. Comparisons with existing multi-feature fusion algorithms also confirm its effectiveness.

8.
For the multi-person tracking problem in video surveillance, an algorithm based on Gaussian probability models is proposed. Using the statistical color features of the target, an improved K-means method clusters the target region by color information; the region is partitioned into blocks according to the clustering result, and each block is modeled with a Gaussian. The target's position information is also modeled with a Gaussian. Tracking is achieved by computing the maximum joint probability of color and position between a candidate target and the model. Exploiting the position and color information of targets in preceding and following frames overcomes tracking failures caused by information loss after occlusion. Experimental results show that the algorithm is robust and tracks multiple people effectively.
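The joint color-position matching can be sketched as follows. This is a minimal illustration assuming independent 1-D Gaussians per color channel and coordinate; the paper's per-block modeling is reduced to a single color vector here.

```python
import math

def gaussian_pdf(x, mean, var):
    """1-D Gaussian density."""
    return math.exp(-((x - mean) ** 2) / (2 * var)) / math.sqrt(2 * math.pi * var)

def joint_score(color, position, color_model, position_model):
    """Joint color-position likelihood under independent Gaussian models:
    the quantity the tracker maximizes over candidate detections."""
    pc = 1.0
    for x, (m, v) in zip(color, color_model):
        pc *= gaussian_pdf(x, m, v)
    pp = 1.0
    for x, (m, v) in zip(position, position_model):
        pp *= gaussian_pdf(x, m, v)
    return pc * pp

def best_match(candidates, color_model, position_model):
    """Pick the (color, position) candidate with the highest joint probability."""
    return max(candidates,
               key=lambda c: joint_score(c[0], c[1], color_model, position_model))
```

Because position and color contribute jointly, a person who reappears after occlusion near the predicted position with matching colors still scores highest even if one cue alone is ambiguous.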

9.
To avoid the tracking failures caused by target deformation, occlusion, and fast motion, the tracker of the classic TLD framework is redesigned with a scale-adaptive mean shift algorithm, yielding the MS-TLD algorithm. By introducing color-histogram features and scale adaptation, the tracker accurately follows deforming and fast-moving targets. A tracking-detection feedback mechanism lets the tracker and detector correct each other, giving the new algorithm good robustness when the target is occluded. Experimental validation and evaluation on the TB-50 benchmark show that the proposed algorithm effectively overcomes tracking failures caused by target deformation, occlusion, fast motion, and background clutter, with better tracking accuracy and robustness than TLD and three other classic algorithms.

10.
Vectorizing Cartoon Animations   (Cited by 1, 0 self-citations)
We present a system for vectorizing 2D raster-format cartoon animations. The output animations are visually flicker free, smaller in file size, and easy to edit. We identify decorative lines separately from colored regions, and use an accurate and semantically meaningful image decomposition algorithm that supports an arbitrary color model for each region. To ensure temporal coherence in the output, we reconstruct a universal background for all frames and separately extract foreground regions; simple user assistance is required to complete the background. Each region and decorative line is vectorized and stored together with its motion from frame to frame. The contributions of this paper are: 1) the new trapped-ball segmentation method, which is fast, supports nonuniformly colored regions, and allows robust region segmentation even in the presence of imperfectly linked region edges; 2) the separate handling of decorative lines as special objects during image decomposition, avoiding results containing multiple short, thin, oversegmented regions; and 3) extraction of a single patch-based background for all frames, which provides a basis for consistent, flicker-free animations.

11.
Color target detection and tracking based on statistical models and GVF-Snake   (Cited by 3, 0 self-citations)
To give conventional surveillance systems automatic target detection and tracking capabilities, a color target detection and tracking algorithm based on statistical models and GVF (gradient vector flow)-Snake is proposed. The algorithm addresses automatic detection and tracking of moving targets from color video against a static background, directly yields a mathematical representation of the target contour, and simplifies the design of subsequent target recognition algorithms. It first replaces a single grayscale model with a model combining normalized RGB space and grayscale to eliminate the influence of shadows on target detection. On this model it builds a GMM (Gaussian mixture model) of the difference image and constructs a motion-boundary image. The static-image contour extractor GVF-Snake is then introduced into moving images, with its energy terms modified so that it can track the contour of a moving target. Finally, since a Snake's initial contour normally must be set by hand, a method is proposed to initialize the contour automatically from the target region, and a first-order difference predictor estimates the contour position at the next time step to speed up GVF-Snake convergence. Experimental results show good tracking for both rigid and non-rigid targets; the algorithm is applicable to intelligent surveillance, traffic monitoring, and similar domains.
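The shadow-suppression idea (replacing a single grayscale model with normalized RGB) rests on chromaticity being invariant to shading: a shadow scales all three channels by roughly the same factor. A minimal sketch:

```python
def normalized_rgb(r, g, b):
    """Chromaticity coordinates of an RGB pixel. A shadow dims (r, g, b)
    roughly proportionally, so these normalized values stay stable,
    while raw grayscale intensity does not."""
    s = r + g + b
    if s == 0:
        return (1 / 3, 1 / 3, 1 / 3)  # convention for pure black
    return (r / s, g / s, b / s)
```

Combining this chromaticity model with grayscale, as the abstract describes, keeps detection sensitive where chromaticity is unreliable (very dark or achromatic pixels).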

12.
Object detection in a dynamic background is a challenging task in many computer vision applications. In some situations the motion of objects can be predicted thanks to its regularity (e.g., vehicle motion, pedestrian motion). In this article, we propose to model such motion knowledge and to use it as additional information to help foreground detection. The inclusion of object motion information provides a measure for distinguishing moving objects from a background of similar size and brightness. This information is obtained by applying statistical methods to data gathered during a training period. When available, this prior knowledge can be incorporated into the foreground detection process to improve robustness and decrease false detections. We apply this framework to moving-object detection in rivers, one of the situations in which classic background subtraction algorithms fail. Our experiments show that incorporating prior motion data into background subtraction improves object detection.

13.
This paper discusses a new approach to multiple object tracking relative to background information. The concept of multiple object tracking through background learning is based upon the theory of relativity, which involves a frame of reference in the spatial domain to localize and/or track any object. The field of multiple object tracking has seen a lot of research, but researchers have treated the background as redundant. In object tracking, however, the background plays a vital role and leads to definite improvement in the overall tracking process. In the present work an algorithm is proposed for multiple object tracking through background learning. The learning framework is based on a graph-embedding approach for localizing multiple objects. The graph exploits the inherent capabilities of depth modelling, which assist, prior to tracking, in occlusion avoidance among multiple objects. The proposed algorithm has been compared with recent work in the literature on numerous performance evaluation measures, and it is observed to give better performance.

14.
张晓波, 刘文耀. 《传感技术学报》(Chinese Journal of Sensors and Actuators), 2007, 20(10): 2248-2252
A new video segmentation method is proposed that incorporates temporal information into the watershed transform. Starting from inter-frame change detection, an initial object model is obtained from motion-edge information, and temporal information yields foreground and background markers. Watershed segmentation using a proposed color multiscale morphological gradient operator then produces video objects with accurate boundaries. The method works well for both slowly and rapidly changing targets, can detect newly appearing moving objects and the disappearance of existing ones, and can localize and track moving targets. It inherits the speed of change detection and of the watershed algorithm while overcoming their shared susceptibility to noise.

15.
严超, 马利庄, 沈洋. 《软件学报》(Journal of Software), 2009, 20(Z1): 221-230
A method is proposed for segmenting foreground objects out of a video sequence, built on a proposed confidence model derived from local color-configuration information. The algorithm first applies watershed preprocessing to all frames and graph-cut segmentation to key frames. Confidence values are then computed in a bidirectional process: a forward pass computes the confidences, a backward pass assisted by optical flow corrects a small portion of them, and foreground/background labels are finally assigned according to confidence. The confidence model segments well on videos whose foreground and background are similar, and because segmentation proceeds bidirectionally, it also noticeably improves the segmentation of partially occluded objects.

16.
Background modeling is widely used in visual surveillance systems to facilitate analysis of real-world video scenes. The goal is to discriminate between pixels from foreground objects and those from the background. However, real-world scenarios tend to have temporally and spatially non-stationary variations, making it difficult to reveal the foreground and background entities from video data. Here, we propose a novel adaptive background modeling method, termed Object-based Selective Updating with Correntropy (OSUC), to support video-based surveillance systems. Our approach, developed within an adaptive learning framework, unveils existing spatio-temporal pixel relationships, using a single Gaussian for the model representation stage. Moreover, we introduce a background updating scheme composed of an updating rule based on the stochastic gradient algorithm and a correntropy cost function. As a result, this scheme can extract the temporal statistical pixel distribution while dealing with the non-stationary pixel-value fluctuations that affect the background model. An automatic tuning strategy for the cost-function bandwidth parameter handles both Gaussian and non-Gaussian noise environments. Besides, to include pixel spatial relationships in the background modeling process, we introduce an object-based selective learning-rate strategy that enhances background modeling accuracy. In particular, an object motion analysis stage detects and tracks foreground entities based on pixel intensities and motion direction obtained via optical-flow computation. Testing is provided on well-known datasets for discriminating between foreground and background that include stationary and non-stationary behaviors. The achieved results show that OSUC outperforms, in most of the considered cases, state-of-the-art approaches at an affordable computational cost, making it suitable for supporting real-world video-based surveillance systems.

17.
Motion detection with nonstationary background   (Cited by 4, 0 self-citations)
This paper proposes a new background subtraction method for detecting moving foreground objects from a nonstationary background. While background subtraction has traditionally worked well for a stationary background, the same cannot be said of a nonstationary viewing sensor. To a limited extent, motion compensation for the nonstationary background can be applied. In practice, however, it is difficult to realize the motion compensation to sufficient pixel accuracy, and the traditional background subtraction algorithm fails for a moving scene. The problem is further complicated when the moving target to be detected or tracked is small, since the pixel error in the motion compensating the background will subsume the small target. A spatial distribution of Gaussians (SDG) model is proposed to handle moving-object detection when motion compensation is only approximately extracted. The distribution of each background pixel is modeled temporally and spatially, and based on this statistical model, each pixel in the current frame is classified as belonging to the foreground or background. For this system to perform under lighting and environmental changes over an extended period of time, the background distribution must be updated with each incoming frame, and a new background restoration and adaptation algorithm is developed for the nonstationary background. Test cases involving the detection of small moving objects within a highly textured background and with a pan-tilt tracking system are demonstrated successfully. Received: 30 July 2001 / Accepted: 20 April 2002. Correspondence to: Chin-Seng Chau

18.
A fast moving-object extraction algorithm operating in the MPEG compressed domain is proposed. Taking as input the motion vectors and the DC DCT coefficients of the luminance component obtained by partial decoding, the algorithm extracts moving objects from P-frames. First, robust regression analysis estimates the global motion, and macroblocks inconsistent with the global motion are marked, giving the distribution of motion blocks. Then the interpolated motion-vector field serves as the temporal feature, and the reconstructed DC image converted to LUV color space serves as the spatial feature; fast mean shift clustering finds regions whose temporal and spatial features are similar, refining the region boundaries. Finally, combining the motion-block distribution with the clustering result, a statistical labeling method based on Markov random fields separates the background and produces the moving-object mask. Experimental results show that the algorithm effectively suppresses motion-vector noise at high processing speed, handling about 50 frames per second for CIF-format video streams.
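The global-motion step can be sketched with a simple trimmed-mean robust fit on a translation-only model. The paper's robust regression very likely fits a richer parametric motion model; this is an illustrative stand-in showing how outlier macroblocks (the moving object) are excluded and then flagged.

```python
def estimate_global_translation(motion_vectors, iters=5, c=2.0):
    """Robustly estimate a translational global motion from macroblock motion
    vectors by alternately fitting the mean and discarding outlier blocks.
    Returns the estimate and the indices of blocks inconsistent with it."""
    inliers = list(motion_vectors)
    gx = gy = 0.0
    for _ in range(iters):
        gx = sum(dx for dx, dy in inliers) / len(inliers)
        gy = sum(dy for dx, dy in inliers) / len(inliers)
        kept = [(dx, dy) for dx, dy in inliers
                if abs(dx - gx) <= c and abs(dy - gy) <= c]
        if len(kept) < 3 or len(kept) == len(inliers):
            break
        inliers = kept
    moving = [i for i, (dx, dy) in enumerate(motion_vectors)
              if abs(dx - gx) > c or abs(dy - gy) > c]
    return (gx, gy), moving
```

The flagged blocks form the motion-block distribution that the later MRF labeling stage combines with the clustering result.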

19.
Kalman-filter video object tracking based on motion estimation   (Cited by 1, 0 self-citations)
An algorithm is proposed that tracks video objects by using a Kalman filter to predict the centroid of a moving target. Video object segmentation is first performed to obtain the target centroid. Then, using the centroids and motion-vector information of two consecutive frames of the video sequence, the Kalman filter predicts the position of the target centroid in the next frame, enabling fast and effective automatic tracking of multiple target objects. Experimental results show that the algorithm is robust to the appearance and disappearance of moving targets and to scale changes and deformation of non-rigid objects.
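The centroid prediction can be sketched with a 1-D constant-velocity Kalman filter, run once per coordinate; the noise parameters here are arbitrary assumptions, not values from the paper.

```python
class CentroidKalman:
    """Constant-velocity Kalman filter for one coordinate of a target centroid
    (instantiate twice, for x and y). State: [position, velocity]."""

    def __init__(self, q=1e-3, r=1.0):
        self.x = [0.0, 0.0]                 # state estimate
        self.p = [[1.0, 0.0], [0.0, 1.0]]   # estimate covariance
        self.q, self.r = q, r               # process / measurement noise

    def predict(self):
        """Advance one frame: position moves by velocity; return predicted position."""
        x, p = self.x, self.p
        self.x = [x[0] + x[1], x[1]]
        self.p = [[p[0][0] + p[1][0] + p[0][1] + p[1][1] + self.q,
                   p[0][1] + p[1][1]],
                  [p[1][0] + p[1][1], p[1][1] + self.q]]
        return self.x[0]

    def update(self, z):
        """Correct with the measured centroid coordinate z."""
        k0 = self.p[0][0] / (self.p[0][0] + self.r)   # Kalman gain, position
        k1 = self.p[1][0] / (self.p[0][0] + self.r)   # Kalman gain, velocity
        innov = z - self.x[0]
        self.x = [self.x[0] + k0 * innov, self.x[1] + k1 * innov]
        p = self.p
        self.p = [[(1 - k0) * p[0][0], (1 - k0) * p[0][1]],
                  [p[1][0] - k1 * p[0][0], p[1][1] - k1 * p[0][1]]]
```

Feeding the measured centroid each frame and reading the next prediction gives the search position for the following frame, which is what makes the multi-object tracking fast.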

20.
This paper explores a robust region-based general framework for discriminating between background and foreground objects within a complex video sequence. The proposed framework works under difficult conditions such as a dynamic background and a nominally moving camera. The originality of this work lies essentially in the use of the semantic information provided by regions while simultaneously identifying novel objects (foreground) and non-novel ones (background). Information about background regions is exploited to make moving-object detection more efficient, and vice versa. An initial panoramic background is modeled using region-based mosaicing so as to be sufficiently robust to noise from lighting effects and shadowing by foreground objects. After eliminating camera movement through motion compensation, the resulting panoramic image essentially contains the background and ghost-like traces of the moving objects. Comparing this panoramic background image with the individual frames, a simple median-based background subtraction permits a rough identification of foreground objects. Joint background-foreground validation, based on region segmentation, is then used for a further examination of individual foreground pixels, eliminating false positives and localizing shadow effects. Thus, a foreground mask is first obtained from a slow-adapting algorithm, and foreground pixels (moving visual objects plus shadows) are then validated by a simple moving-object model built from both background and foreground regions. Tests on various well-known challenging real videos (across a variety of domains) clearly show the robustness of the suggested solution, which is relatively computationally inexpensive and usable under difficult conditions such as dynamic backgrounds, nominally moving cameras, and shadows. In addition to visual evaluation, spatial evaluation statistics, given hand-labeled ground truth, have been used as a performance measure of moving-object detection.
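The rough median-based foreground step can be sketched as below. This is a minimal grayscale illustration with an arbitrary threshold; the paper builds its background from a region-based mosaic rather than a plain per-pixel temporal median.

```python
from statistics import median

def median_background(frames):
    """Per-pixel temporal median over a list of grayscale frames (2-D lists).
    A moving object occupies any given pixel in only a few frames, so the
    median recovers the background value there."""
    h, w = len(frames[0]), len(frames[0][0])
    return [[median(f[y][x] for f in frames) for x in range(w)] for y in range(h)]

def foreground_mask(frame, background, threshold=25):
    """Rough foreground: pixels deviating from the median background."""
    return [[1 if abs(p - b) > threshold else 0
             for p, b in zip(frow, brow)]
            for frow, brow in zip(frame, background)]
```

The resulting mask still contains shadows and stray false positives, which is why the framework follows it with region-based joint background-foreground validation.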

