Similar Documents
Found 19 similar documents (search time: 156 ms)
1.
For object detection in complex environments, a fusion detection method based on background models is proposed. First, building on the multi-mode mean model, a multi-mode mean spatio-temporal model is constructed; by incorporating the distribution of pixels over the spatio-temporal domain, it mitigates the original model's sensitivity to non-stationary scenes, and model-update and foreground-detection procedures are given. The model is then applied separately to visible-light and infrared image sequences for background modeling and foreground detection, and a confidence-based fused detection method is presented that uses the two sensors' information to improve detection accuracy and reliability. Experimental results verify the effectiveness of the method.

2.
An improved Gaussian mixture background modeling method based on spatio-temporal distribution
To address the traditional Gaussian mixture model's sensitivity to dynamic backgrounds and its inaccurate detection of slowly moving targets, an improved spatio-temporal Gaussian mixture modeling method is proposed. The basic idea is to model the background with a temporal Gaussian mixture while also sampling each pixel's neighborhood via random number generation to model the spatial distribution of the background. A foreground detection method combining per-pixel historical statistics with a decision-fusion mechanism then identifies stationary targets and extracts moving foreground targets more precisely. Comparative experiments against other foreground detection methods show that the algorithm is robust to dynamic backgrounds and detects slowly moving targets accurately.
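For context, the classic Stauffer-Grimson style per-pixel mixture update that methods like this build on can be sketched as follows. This is a toy single-pixel, grayscale version with one fixed learning rate; all names, thresholds, and the simplified per-component learning rate are illustrative assumptions, not the paper's exact procedure:

```python
import numpy as np

def update_gmm_pixel(value, means, variances, weights, lr=0.05, match_thresh=2.5):
    """One simplified Gaussian-mixture update step for a single grayscale pixel.

    Returns (is_foreground, means, variances, weights). A sample matches a
    component if it lies within `match_thresh` standard deviations of its mean.
    """
    d = np.abs(value - means) / np.sqrt(variances)
    matched = int(np.argmin(d)) if d.min() < match_thresh else None
    weights *= (1.0 - lr)  # decay all weights
    if matched is None:
        # no component explains the sample: replace the weakest component
        # with a new, wide Gaussian centered on the sample
        k = int(np.argmin(weights))
        means[k], variances[k], weights[k] = value, 30.0 ** 2, lr
    else:
        k = matched
        weights[k] += lr
        rho = lr  # simplification: fixed per-component learning rate
        means[k] += rho * (value - means[k])
        variances[k] += rho * ((value - means[k]) ** 2 - variances[k])
    weights /= weights.sum()
    is_foreground = matched is None
    return is_foreground, means, variances, weights
```

A pixel whose value repeatedly matches a high-weight component is treated as background; an unmatched value is flagged as foreground and seeds a new component.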

3.
陈真, 王钊. 《计算机系统应用》, 2013, 22(9): 180-184, 159
The traditional Gaussian mixture model (GMM) cannot adapt quickly when the background in a dynamic scene changes abruptly. This paper proposes an intelligent Gaussian mixture background modeling method based on a metacognitive model: for each input pixel, the metacognitive monitoring component stimulates the metacognitive experience component to extract awareness of success (or failure); based on that awareness, new cognitive knowledge is transferred to the metacognitive knowledge component, or existing metacognitive knowledge is retrieved directly, and a decision is made. The method builds up cognition of the background model, so that when the background abruptly changes to one it has already learned, it adapts quickly and describes the true background of a complex scene more accurately.

4.
To recognize and process traffic accidents from images, the vehicles and other objects involved must be separated from the accident-scene image, so building an accurate road model is essential. Traditional road modeling methods are mostly based on gray-level space, ignore the color information in the image, and are ill-suited to complex traffic environments. Given the non-parametric distribution of road pixels and the need for color information in subsequent processing, a non-parametric road model based on color-space clustering is proposed, which abstracts road modeling as a color-space clustering process along the time axis. Following color-distortion theory, the clustering region is set as a cylinder in RGB color space whose axis is the line through the cluster center and the origin, and the clustering radius is adjusted adaptively according to the cluster center's eigenvalue. For each pixel location, the number of cluster centers is chosen adaptively according to scene complexity and rate of change, yielding a more accurate background model that improves detection accuracy while preserving detection efficiency.

5.
Foreground detection based on an improved Gaussian mixture model
To address the slow execution of the adaptive Gaussian mixture background model and the "ghost" artifacts it tends to produce during foreground detection, an improved Gaussian mixture background modeling method is proposed. By constraining the weights and lifetimes of the Gaussian components, a component-retirement mechanism is established so that the model adaptively selects the number of Gaussians for each pixel according to the scene, removing redundant components and speeding up the algorithm. During model updating, frame differencing is incorporated to classify each frame's pixels into moving pixels, background pixels, and spuriously moving pixels; assigning a larger learning rate to the spuriously moving pixels accelerates recovery of uncovered background, avoiding ghosting and trailing artifacts. Experimental results show that, compared with traditional detection methods, this method achieves better object detection.
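The frame-differencing fusion step described above, which separates genuine motion from "ghost" pixels, can be sketched roughly as follows. The label values and function name are illustrative assumptions, not the paper's exact rule:

```python
import numpy as np

def classify_pixels(fg_mask, frame_diff_mask):
    """Combine a background-model foreground mask with a frame-difference mask.

    Illustrative labels: 0 = background, 1 = genuinely moving pixel,
    2 = spuriously moving ("ghost"/uncovered-background) pixel, which would
    then be updated with a larger learning rate to recover quickly.
    """
    labels = np.zeros(fg_mask.shape, dtype=np.uint8)
    labels[fg_mask & frame_diff_mask] = 1   # model says FG and it changed: real motion
    labels[fg_mask & ~frame_diff_mask] = 2  # model says FG but static: likely ghost
    return labels
```

In a full pipeline, `fg_mask` would come from the mixture model and `frame_diff_mask` from thresholding the absolute difference of consecutive frames.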

6.
Recognizing and processing traffic accidents from images requires separating the vehicles and other objects involved from the accident-scene image, so building an accurate road model is essential. Traditional road modeling methods are mostly based on gray-level space, ignore the color information in the image, and are ill-suited to complex traffic environments. Given the non-parametric distribution of road pixels and the need for color information in subsequent processing, this paper proposes a method for building a non-parametric real-time road model based on spatial clustering, abstracting road modeling as a spatial clustering process along the time axis. Following color-distortion theory, the clustering region is set as a cylinder in RGB color space whose axis is the line through the cluster center and the origin, and the clustering radius is adjusted adaptively according to the cluster center's eigenvalue. For each pixel location, the number of cluster centers is chosen adaptively according to scene complexity and rate of change, yielding a more accurate background model that improves detection accuracy while preserving detection efficiency.

7.
A Codebook background modeling algorithm based on principal component analysis
The mixture of Gaussians (MOG) and Codebook background modeling algorithms are widely used for moving object detection in surveillance video. However, MOG's spherical model typically assumes the three RGB components are independent, while Codebook's cylindrical model assumes that background pixel values are uniformly distributed within the cylinder and that background brightness varies along the direction toward the coordinate origin; these assumptions weaken the models' ability to describe the background. This paper proposes an ellipsoidal background model that overcomes the limitations of both the spherical MOG model and the cylindrical Codebook model, characterizes the ellipsoid using principal component analysis (PCA), and on this basis presents a PCA-based Codebook background modeling algorithm. Experiments show that the algorithm not only describes the distribution of background pixel values in RGB space more accurately, but is also robust.

8.
Existing background modeling methods usually use only a pixel's temporal or spatial information, which reduces the accuracy of moving object detection. To address this, a background modeling method that fuses a pixel's spatio-temporal information is proposed. Pixel gray values are sampled along the temporal and spatial dimensions of the video sequence to build temporal and spatial background models for each pixel. During moving object detection, the temporal background model is updated with a first-in-first-out (FIFO) strategy and the spatial background model with a random update strategy. Experimental results show that this spatio-temporal background model detects moving objects effectively, reduces the influence of illumination changes and camera jitter on detection results, and suppresses interference from dynamic backgrounds well.
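A minimal sample-based sketch of the idea, a FIFO temporal buffer per pixel plus an occasional random spatial refresh, might look like this. It is a ViBe-style simplification under stated assumptions; the class, parameters, and the `np.roll` stand-in for a random neighbor are all illustrative, not the paper's method:

```python
import numpy as np

class SpatioTemporalBG:
    """Toy per-pixel sample model: a FIFO buffer of temporal samples plus an
    occasional randomly refreshed spatial sample (illustrative sketch)."""

    def __init__(self, n_samples=10, radius=20.0, min_matches=2, seed=0):
        self.n = n_samples          # samples kept per pixel
        self.r = radius             # intensity match tolerance
        self.k = min_matches        # matches required to be background
        self.rng = np.random.default_rng(seed)
        self.samples = None         # array of shape (H, W, n_samples)

    def init(self, frame):
        """Fill every pixel's sample buffer with the first frame's value."""
        self.samples = np.repeat(frame[..., None], self.n, axis=-1).astype(float)

    def detect_and_update(self, frame):
        """Return a boolean foreground mask and update background pixels."""
        diff = np.abs(self.samples - frame[..., None])
        matches = (diff < self.r).sum(axis=-1)
        fg = matches < self.k
        bg = ~fg
        # FIFO temporal update for background pixels: drop oldest, append newest
        self.samples[bg] = np.concatenate(
            [self.samples[bg][:, 1:], frame[bg][:, None]], axis=1)
        # simplified spatial refresh: occasionally overwrite one sample slot
        # with a neighboring pixel's value (np.roll stands in for a random neighbor)
        neighbor = np.roll(frame, 1, axis=0)
        refresh = bg & (self.rng.random(frame.shape) < 1.0 / 16)
        slot = int(self.rng.integers(self.n))
        self.samples[refresh, slot] = neighbor[refresh]
        return fg
```

A pixel is background when enough stored samples lie within the tolerance; the random spatial refresh lets background information diffuse between neighboring pixels.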

9.
Memory-based Gaussian mixture background modeling
齐玉娟, 王延江, 李永平. 《自动化学报》, 2010, 36(11): 1520-1526
The Gaussian mixture model (GMM) can model scenes containing gradual and repetitive motion and is considered one of the best background models. However, it cannot handle sudden scene changes, such as a door opening or closing. To solve this problem, and inspired by the way humans perceive their environment, this paper introduces a human memory mechanism into background modeling and proposes a memory-based GMM (MGMM), in which each pixel is transferred through and processed in three memory spaces: instantaneous, short-term, and long-term. The proposed memory-based background model remembers backgrounds that have appeared before and can therefore adapt to scene changes more quickly.

10.
李伟, 陈临强, 殷伟良. 《计算机工程》, 2011, 37(15): 187-189
For learning the means and variances in the Gaussian mixture model, a background modeling method based on adaptive learning rates is proposed. The number of times each pixel's model is matched is counted, and the learning rate is updated online. During background initialization, a single global learning rate is assigned and the traditional GMM learning scheme is used; during background updating, each pixel is assigned its own learning rate and an adaptive learning scheme is used. Experimental results show that, compared with the traditional Gaussian mixture background model, this method has better learning ability and stability and improves the accuracy of moving object detection.

11.
In this paper, we develop a maximum-likelihood (ML) spatio-temporal blind source separation (BSS) algorithm, where the temporal dependencies are explained by assuming that each source is an autoregressive (AR) process and the distribution of the associated independent identically distributed (i.i.d.) innovations process is described using a mixture of Gaussians. Unlike most ML methods, the proposed algorithm takes into account both spatial and temporal information, optimization is performed using the expectation-maximization (EM) method, the source model is adapted to maximize the likelihood, and the update equations have a simple, analytical form. The proposed method, which we refer to as autoregressive mixture of Gaussians (AR-MOG), outperforms nine other methods for artificial mixtures of real audio. We also show results for using AR-MOG to extract the fetal cardiac signal from real magnetocardiographic (MCG) data.
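As a toy illustration of the AR source assumption above (not the paper's EM algorithm), one can fit AR coefficients by least squares and recover the approximately i.i.d. innovations sequence that the mixture of Gaussians would then model; the function and parameter names are illustrative:

```python
import numpy as np

def ar_innovations(x, order=1):
    """Fit AR(order) coefficients to a 1-D signal by least squares and
    return (coefficients, innovations), where the innovations are the
    residuals x[t] - sum_i a_i * x[t-i-1], approximately i.i.d. if the
    AR model fits."""
    # lagged design matrix: column i holds x[t - i - 1] for t = order..N-1
    X = np.column_stack(
        [x[order - i - 1: len(x) - i - 1] for i in range(order)])
    y = x[order:]
    coefs, *_ = np.linalg.lstsq(X, y, rcond=None)
    innovations = y - X @ coefs
    return coefs, innovations
```

Applied to a synthetic AR(1) signal, the fitted coefficient approaches the true one and the innovations' variance approaches the driving noise variance.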

13.
A stepwise background modeling method fusing spatio-temporal information
Sudden illumination changes and the irregular motion of tree branches, water surfaces, and the like are the main difficulties in background modeling for natural scenes. To address these problems, a stepwise background modeling method fusing temporal and spatial information is proposed. In the temporal domain, an illumination-invariant color space is used to represent the temporal information, and a codeword clustering criterion and an adaptive background update strategy that tolerate noise and sudden illumination changes are proposed, yielding a temporal background model robust to both. In the spatial domain, each test frame is split by sampling into two sub-images; one sub-image is detected with the temporal model, and the result serves as prior information for the other sub-image, whose state is finally determined under a Markov random field (MRF) constraint. Experiments on multiple test video sequences show that the background model adapts well to sudden illumination changes and irregular motion in natural scenes.

14.
Salient Region Detection by Modeling Distributions of Color and Orientation
We present a robust salient region detection framework based on the color and orientation distribution in images. The proposed framework consists of a color saliency framework and an orientation saliency framework. The color saliency framework detects salient regions based on the spatial distribution of the component colors in the image space and their remoteness in the color space. The dominant hues in the image are used to initialize an expectation-maximization (EM) algorithm to fit a Gaussian mixture model in the hue-saturation (H-S) space. The mixture of Gaussians framework in H-S space is used to compute the inter-cluster distance in the H-S domain as well as the relative spread among the corresponding colors in the spatial domain. The orientation saliency framework detects salient regions in images based on the global and local behavior of different orientations in the image. The oriented spectral information from the Fourier transform of the local patches in the image is used to obtain the local orientation histogram of the image. Salient regions are further detected by identifying spatially confined orientations and the local patches that possess high orientation entropy contrast. The final saliency map is selected as either the color saliency map or the orientation saliency map by automatically identifying which of the maps leads to the correct identification of the salient region. The experiments are carried out on a large image database annotated with "ground-truth" salient regions, provided by Microsoft Research Asia, which enables us to conduct robust objective-level comparisons with other salient region detection algorithms.
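The H-S mixture fit described above can be approximated with a minimal EM loop. The following is a toy two-component isotropic sketch on synthetic hue-saturation data, not the paper's exact procedure; the initialization heuristic and all names are illustrative assumptions:

```python
import numpy as np

def fit_gmm2_em(X, iters=60):
    """Minimal EM for a 2-component isotropic Gaussian mixture on 2-D data
    (e.g. hue-saturation pairs). Returns (means, variances, priors)."""
    # farthest-point initialization of the two means
    i1 = int(((X - X[0]) ** 2).sum(axis=1).argmax())
    mu = X[[0, i1]].astype(float)
    var = np.full(2, X.var())
    pi = np.array([0.5, 0.5])
    d = X.shape[1]
    for _ in range(iters):
        # E-step: responsibilities under isotropic Gaussians (log-domain)
        d2 = ((X[:, None, :] - mu[None]) ** 2).sum(axis=-1)
        logp = -0.5 * d2 / var - 0.5 * d * np.log(var) + np.log(pi)
        logp -= logp.max(axis=1, keepdims=True)
        r = np.exp(logp)
        r /= r.sum(axis=1, keepdims=True)
        # M-step: responsibility-weighted means, variances, priors
        nk = r.sum(axis=0)
        mu = (r.T @ X) / nk[:, None]
        var = (r * d2).sum(axis=0) / (nk * d)
        pi = nk / len(X)
    return mu, var, pi
```

After fitting, the Euclidean distance between the two recovered means plays the role of the inter-cluster distance used by the color saliency cue.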

15.
Research on noise robust speech recognition has mainly focused on dealing with relatively stationary noise that may differ from the noise conditions in most living environments. In this paper, we introduce a recognition system that can recognize speech in the presence of multiple rapidly time-varying noise sources as found in a typical family living room. To deal with such severe noise conditions, our recognition system exploits all available information about speech and noise; that is spatial (directional), spectral and temporal information. This is realized with a model-based speech enhancement pre-processor, which consists of two complementary elements, a multi-channel speech–noise separation method that exploits spatial and spectral information, followed by a single channel enhancement algorithm that uses the long-term temporal characteristics of speech obtained from clean speech examples. Moreover, to compensate for any mismatch that may remain between the enhanced speech and the acoustic model, our system employs an adaptation technique that combines conventional maximum likelihood linear regression with the dynamic adaptive compensation of the variance of the Gaussians of the acoustic model. Our proposed system approaches human performance levels by greatly improving the audible quality of speech and substantially improving the keyword recognition accuracy.

16.
Adaptive foreground extraction based on the Gaussian mixture model
Extracting moving foreground in complex scenes is a key topic in computer vision research. To solve foreground extraction in complex, changing scenes, this paper proposes an adaptive foreground extraction method based on the Gaussian mixture model. The method dynamically controls the number of Gaussian components for each pixel in a video frame, learns the parameters of each Gaussian via an online EM algorithm, and adjusts each pixel's weight update rate according to a policy. Experimental results show that the method adapts well to complex, changing scenes and extracts foreground objects effectively and quickly, with good precision and recall.

17.
An adaptive background model fusing a Gaussian mixture model with frame differencing
A method for dynamic background modeling in moving object detection is presented. Building on the adaptive Gaussian mixture background model of Stauffer et al., a Gaussian mixture background model is constructed for each pixel, and frame differencing is incorporated to partition each frame into background regions, uncovered-background regions, and moving-object regions. Relative to background regions, pixels in uncovered-background regions update the background model with a larger learning rate, so that when a long-stationary object turns from background into moving foreground, the previously occluded uncovered background is restored quickly. Unlike Stauffer et al.'s method, no new Gaussian distribution is created for moving-object regions and added to the mixture, which weakens the influence of slowly moving objects on the background. Experimental results show that the background constructed from video sequences with many sources of uncertainty adapts well and responds quickly to changes in the actual scene.

18.
A robust adaptive appearance model is proposed, in which the variation of each pixel value is described by a Gaussian mixture distribution. To adapt to changes in the target's appearance, the Gaussian parameters are updated adaptively during tracking with an online EM algorithm. A particle filter is used to estimate the target state, and an observation model based on the adaptive appearance model is designed; occlusion is handled with a robust estimation technique. Multiple experiments show that the algorithm is robust when tracking under illumination changes, pose changes, and partial or full occlusion.

19.
We propose an on-line algorithm to segment foreground from background in videos captured by a moving camera. In our algorithm, temporal model propagation and spatial model composition are combined to generate foreground and background models, and likelihood maps are computed based on the models. After that, an energy minimization technique is applied to the likelihood maps for segmentation. In the temporal step, block-wise models are transferred from the previous frame using motion information, and pixel-wise foreground/background likelihoods and labels in the current frame are estimated using the models. In the spatial step, another block-wise foreground/background models are constructed based on the models and labels given by the temporal step, and the corresponding per-pixel likelihoods are also generated. A graph-cut algorithm performs segmentation based on the foreground/background likelihood maps, and the segmentation result is employed to update the motion of each segment in a block; the temporal model propagation and the spatial model composition step are re-evaluated based on the updated motions, by which the iterative procedure is implemented. We tested our framework with various challenging videos involving large camera and object motions, significant background changes and clutters.
