Similar Documents
Found 20 similar documents (search time: 93 ms)
1.
王典  程咏梅  杨涛  潘泉  赵春晖 《计算机应用》2006,26(5):1021-1023
Background modeling of complex scenes, moving-object detection, and the detection and suppression of shadows cast by moving objects are widely used in intelligent surveillance, robot vision, and video conferencing. For foreground detection, an improved mixture-of-Gaussians algorithm is presented for background modeling: each pixel's Gaussian parameters are updated by a different scheme according to how erratically its values fluctuate, easing the heavy computational load of the mixture-of-Gaussians algorithm. For shadow detection and suppression, a shadow-suppression algorithm based on Gaussian mixtures is proposed: it first uses the characteristics of shadows in HSV color space to judge whether a pixel detected as moving foreground is a shadow candidate, then clusters all candidate shadow values with a Gaussian mixture shadow model to complete the suppression. Simulation results show that the algorithm suppresses the influence of shadows on moving-object detection more effectively and performs well in real time.
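The per-pixel HSV shadow-candidate test described above can be sketched as follows. This is a minimal illustration, not the paper's implementation; the function name and all thresholds are assumptions chosen for readability.

```python
def is_shadow_candidate(fg_hsv, bg_hsv,
                        v_lo=0.4, v_hi=0.9, s_tol=0.15, h_tol=30.0):
    """Classic HSV shadow cue: a cast shadow keeps roughly the background's
    hue and saturation but is darker (its V drops by a bounded ratio)."""
    h_f, s_f, v_f = fg_hsv
    h_b, s_b, v_b = bg_hsv
    if v_b == 0:
        return False
    v_ratio = v_f / v_b
    hue_diff = min(abs(h_f - h_b), 360.0 - abs(h_f - h_b))  # hue is circular
    return (v_lo <= v_ratio <= v_hi
            and abs(s_f - s_b) <= s_tol
            and hue_diff <= h_tol)

# A darkened pixel with similar hue/saturation is a shadow candidate;
# a pixel with a very different hue is kept as true foreground.
print(is_shadow_candidate((118.0, 0.50, 0.40), (120.0, 0.52, 0.80)))  # True
print(is_shadow_candidate((10.0, 0.90, 0.80), (120.0, 0.52, 0.80)))   # False
```

In the paper the candidates are then clustered by a Gaussian mixture shadow model; only the per-pixel test is shown here.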

2.
To overcome the poor noise resistance of traditional moving-object detection based on the Gaussian mixture model (GMM) and its susceptibility to dynamic backgrounds, a method combining Gaussian mixture modeling with a superpixel Markov random field (MRF) is proposed. The video frames are first modeled with a GMM to obtain a preliminary labeling of foreground regions; the original image is then decomposed into superpixels, and a probabilistic superpixel image is derived from the GMM foreground; finally, MRF modeling of the probabilistic superpixel image yields the final foreground image of the moving objects. Comparative experiments show that in complex scenes with noise interference and dynamic background disturbance, the proposed algorithm outperforms the traditional one.
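The MRF regularization step can be illustrated with a tiny iterated-conditional-modes (ICM) pass over a grid of foreground probabilities. This is a hypothetical sketch: the paper works on superpixels, while this toy version labels pixels directly, and `beta` is an assumed smoothness weight.

```python
import math

def icm_labels(prob_fg, beta=0.8, iters=3):
    """ICM on a 4-connected grid MRF: each site takes the label minimizing a
    data term (negative log-probability) plus a Potts penalty of beta for
    every disagreeing neighbour.  Isolated noisy detections get smoothed out."""
    h, w = len(prob_fg), len(prob_fg[0])
    labels = [[1 if p > 0.5 else 0 for p in row] for row in prob_fg]
    for _ in range(iters):
        for y in range(h):
            for x in range(w):
                p = min(max(prob_fg[y][x], 1e-6), 1.0 - 1e-6)
                best_lab, best_e = labels[y][x], float("inf")
                for lab in (0, 1):
                    data = -math.log(p if lab == 1 else 1.0 - p)
                    smooth = sum(beta
                                 for dy, dx in ((-1, 0), (1, 0), (0, -1), (0, 1))
                                 if 0 <= y + dy < h and 0 <= x + dx < w
                                 and labels[y + dy][x + dx] != lab)
                    if data + smooth < best_e:
                        best_e, best_lab = data + smooth, lab
                labels[y][x] = best_lab
    return labels

# A single noisy "foreground" pixel surrounded by background is flipped off.
noisy = [[0.1, 0.1, 0.1], [0.1, 0.9, 0.1], [0.1, 0.1, 0.1]]
print(icm_labels(noisy))
```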

3.
To address the complexity of background modeling and the heavy computation involved in detecting moving objects under a moving camera, a motion-saliency-based detection method is proposed that achieves accurate moving-object detection while avoiding complex background modeling. Imitating the attention mechanism of the human visual system, the method analyses the motion characteristics of background and foreground during camera translation and computes the saliency of the video scene to detect moving objects in dynamic scenes. First, optical flow extracts the motion features of the target, and 2-D Gaussian convolution suppresses the motion texture of the background; next, histogram statistics measure the global saliency of the motion features, and the color information of foreground and background is extracted from the resulting motion-saliency map; finally, a Bayesian scheme processes the motion-saliency map to obtain the salient moving objects. Experimental results on public benchmark videos show that the method suppresses background motion noise while highlighting and accurately detecting the moving objects in the scene.
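The global-saliency idea above, rare motion is salient while common (background) motion is not, can be reduced to a simple histogram-rarity measure. This is an illustrative sketch; the bin count and the use of raw flow magnitudes are assumptions.

```python
def motion_saliency(mags, bins=8):
    """Histogram-based global rarity: quantise motion magnitudes, then score
    each pixel by how few other pixels share its bin.  The dominant camera or
    background motion fills a large bin and so gets low saliency."""
    lo, hi = min(mags), max(mags)
    span = (hi - lo) or 1.0
    idx = [min(int((m - lo) / span * bins), bins - 1) for m in mags]
    hist = [0] * bins
    for i in idx:
        hist[i] += 1
    n = len(mags)
    return [1.0 - hist[i] / n for i in idx]

# Nine pixels share the background motion; the one fast pixel stands out.
sal = motion_saliency([0.1] * 9 + [5.0])
print(sal[0], sal[-1])  # ~0.1 for background, ~0.9 for the mover
```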

4.
An improved moving-object detection algorithm for complex scenes   Cited by: 3 (self-citations: 2, citations by others: 1)
An improved algorithm for accurately detecting and extracting moving objects from video sequences of complex scenes is presented. The algorithm first uses a Gaussian mixture model (GMM) to model background and foreground and quickly extract the foreground motion regions, then exploits the inter-frame correlation of targets and the inter-frame independence of random noise to post-process the result with a temporal filter and mathematical morphology. Experimental results show that the improved algorithm accurately extracts moving objects while filtering out dynamic noise, improves detection robustness, and gives satisfactory real-time detection in scenes with complex interference.
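The mathematical-morphology post-processing mentioned above typically means an opening (erosion then dilation) of the binary foreground mask to remove speckle noise. A minimal pure-Python sketch with an assumed 3x3 structuring element:

```python
def _on(mask, y, x):
    return 0 <= y < len(mask) and 0 <= x < len(mask[0]) and mask[y][x] == 1

def erode(mask):
    """3x3 erosion: a pixel survives only if its whole 3x3 neighbourhood is set."""
    h, w = len(mask), len(mask[0])
    return [[1 if all(_on(mask, y + dy, x + dx)
                      for dy in (-1, 0, 1) for dx in (-1, 0, 1)) else 0
             for x in range(w)] for y in range(h)]

def dilate(mask):
    """3x3 dilation: a pixel is set if anything in its 3x3 neighbourhood is set."""
    h, w = len(mask), len(mask[0])
    return [[1 if any(_on(mask, y + dy, x + dx)
                      for dy in (-1, 0, 1) for dx in (-1, 0, 1)) else 0
             for x in range(w)] for y in range(h)]

def opening(mask):
    """Opening = erosion then dilation: kills blobs smaller than the
    structuring element while preserving larger foreground regions."""
    return dilate(erode(mask))

# A 3x3 foreground block survives the opening; an isolated noise pixel does not.
m = [[0] * 7 for _ in range(7)]
for y in range(2, 5):
    for x in range(2, 5):
        m[y][x] = 1
m[0][6] = 1  # isolated dynamic-noise pixel
o = opening(m)
```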

5.
For UAV scenarios, where moving-object detection must run in real time and both the moving background and the ambient illumination change easily, a detection algorithm combining a single Gaussian model with optical flow is proposed. First, the images captured by the moving camera are background-modeled with an improved single Gaussian model, and several Gaussian models from the previous frame are fused for motion compensation; the resulting foreground image is then used as a mask for feature-point extraction and optical-flow tracking, and the trajectories of the sparse feature points are grouped by hierarchical clustering. Experimental results show that the algorithm effectively handles the disturbance that camera motion introduces into the background model, builds the background model quickly, is insensitive to illumination changes, and detects objects close to the ground truth.
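The single-Gaussian background model at the heart of this method can be sketched per pixel as a running mean and variance with a gated foreground test. The learning rate `alpha` and gate `k` are assumed values, and the paper's motion-compensation step is omitted.

```python
import math

def update_pixel(mean, var, pixel, alpha=0.05, k=2.5):
    """Single-Gaussian background update for one pixel: values within k
    standard deviations of the mean are background and blended into the
    model; anything farther out is flagged foreground and the model is
    left untouched, so the object does not corrupt the background."""
    is_fg = abs(pixel - mean) > k * math.sqrt(var)
    if not is_fg:
        mean = (1.0 - alpha) * mean + alpha * pixel
        var = (1.0 - alpha) * var + alpha * (pixel - mean) ** 2
    return mean, var, is_fg

mean, var, fg = update_pixel(100.0, 25.0, 102.0)
print(fg, round(mean, 2))   # background sample: model drifts toward 102
mean, var, fg = update_pixel(100.0, 25.0, 200.0)
print(fg, mean)             # foreground sample: model unchanged
```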

6.
Moving-object detection fusing a Gaussian mixture model and the wavelet transform   Cited by: 2 (self-citations: 1, citations by others: 1)
When the foreground object is close to the background in color, detection with a Gaussian mixture model alone easily produces misclassifications. To make the segmentation more robust, a moving-object detection algorithm fusing a Gaussian mixture model with the wavelet transform is proposed. The wavelet transform extracts the texture information of the image, while the Gaussian mixture model fits the background information; fusing the two lets texture compensate for color, which keeps the model stable and convergent during online background updates and remedies the misclassifications that arise when foreground and background colors are similar. Experimental results show that the method segments with higher accuracy than the classical Gaussian mixture model.
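The texture cue extracted by the wavelet transform can be illustrated with one level of the 1-D Haar transform; the detail coefficients carry the edge and texture information that supplements color. A sketch assuming an even-length signal:

```python
def haar_step(signal):
    """One level of the 1-D Haar transform: pairwise averages form the
    approximation band, pairwise half-differences the detail band.  Large
    detail coefficients mark texture/edges even where colors are similar."""
    approx = [(signal[i] + signal[i + 1]) / 2.0 for i in range(0, len(signal), 2)]
    detail = [(signal[i] - signal[i + 1]) / 2.0 for i in range(0, len(signal), 2)]
    return approx, detail

approx, detail = haar_step([4.0, 2.0, 6.0, 6.0])
print(approx, detail)  # [3.0, 6.0] [1.0, 0.0]
```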

7.
Object tracking is a key topic in computer vision and image processing, with broad application prospects in video surveillance, robot visual navigation, and intelligent traffic control. Using particle filtering, this work studies how to integrate color features, foreground information, and integral-image computation into a particle-filter algorithm for video object tracking. Mixture-of-Gaussians background modeling is used to segment the target; integral histograms compute piecewise color-feature statistics and judge mutual occlusion, optimizing the particle-filter tracker and addressing hard problems such as occlusion, illumination change, background clutter, and scale change. Experimental results show that the proposed method achieves its goals.
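The integral-image (summed-area table) computation behind the piecewise color statistics can be sketched directly; once the table is built, any rectangular region sum costs four lookups.

```python
def integral_image(img):
    """Summed-area table with a zero border: ii[y][x] holds the sum of img
    over rows [0, y) and columns [0, x)."""
    h, w = len(img), len(img[0])
    ii = [[0] * (w + 1) for _ in range(h + 1)]
    for y in range(h):
        for x in range(w):
            ii[y + 1][x + 1] = (img[y][x] + ii[y][x + 1]
                                + ii[y + 1][x] - ii[y][x])
    return ii

def region_sum(ii, y0, x0, y1, x1):
    """Sum of img over rows y0..y1-1, cols x0..x1-1 in O(1)."""
    return ii[y1][x1] - ii[y0][x1] - ii[y1][x0] + ii[y0][x0]

ii = integral_image([[1, 2], [3, 4]])
print(region_sum(ii, 0, 0, 2, 2), region_sum(ii, 1, 0, 2, 2))  # 10 7
```

In an integral histogram, one such table is kept per histogram bin, so a color histogram of any candidate region is assembled in constant time per bin.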

8.
The Gaussian mixture model has become the most widely used, effectively standard, background model for background-subtraction-based moving-object detection in video. This paper first analyses the theoretical framework and performance of the Gaussian mixture model and the problems it still leaves open, then proposes a moving-object detection algorithm combining a Gaussian mixture model with multiple features. Experiments show that the algorithm detects objects well and adapts to its environment.

9.
In moving-object detection, background modeling is crucial to target extraction, and the Gaussian mixture model is currently one of the most popular background-modeling methods. Two improvements address its shortcomings: (1) the Gaussian mixture model treats each pixel in isolation, which is computationally expensive for high-resolution images, so block-wise modeling is introduced, markedly raising detection speed while also taking the spatial relationship between pixels into account; (2) when a moving object lingers too long at one position in the scene, the mixture model absorbs the foreground into the background and the target vanishes from the scene; depending on whether the target is moving or stationary, the algorithm decides between updating the whole frame and updating only the background region. Experiments show that the algorithm significantly speeds up moving-object detection without affecting recognition, reduces some noise, and effectively prevents targets from being absorbed into the background, preserving the continuity of the detected object.
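Improvement (1), modeling blocks instead of pixels, can be sketched by reducing each bs-by-bs block to its mean before feeding it to the mixture model; with bs = 2 this cuts the number of models by a factor of four. The function name and the choice of the mean as the block feature are assumptions.

```python
def block_means(frame, bs=2):
    """Collapse each bs-by-bs block of the frame to its mean intensity.
    One Gaussian mixture per block (instead of per pixel) shrinks the model
    count by bs*bs and folds local spatial context into each feature."""
    h, w = len(frame), len(frame[0])
    return [[sum(frame[y + dy][x + dx] for dy in range(bs) for dx in range(bs))
             / float(bs * bs)
             for x in range(0, w, bs)] for y in range(0, h, bs)]

frame = [[1, 1, 2, 2],
         [1, 1, 2, 2],
         [3, 3, 4, 4],
         [3, 3, 4, 4]]
print(block_means(frame))  # [[1.0, 2.0], [3.0, 4.0]]
```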

10.
Building on the Gaussian mixture model, each pixel together with its neighborhood is taken as a feature vector describing the image, and separate mixture models are built for the different color components of YUV color images to decide whether a change has occurred. To make full use of spatial information, color image segmentation is combined with background modeling to obtain moving objects with precise edges. Experimental results show that even when the foreground is fairly uniform in texture and color and contrasts little with the background, the method still detects the moving foreground completely.

11.
To track objects in video sequences, many studies have been done to characterize the target with respect to its color distribution. Most often, the Gaussian mixture model (GMM) is used to represent the object color density. In this paper, we propose to extend the normality assumption to more general families of distributions derived from Pearson's system. Specifically, we propose a method called the Pearson mixture model (PMM), used in conjunction with a Gaussian copula, which is dynamically updated to adapt to appearance changes of the object during the sequence. This model is combined with Kalman filtering to predict the position of the object in the next frame. Experimental results on gray-level and color video sequences show tracking improvements compared to the classical GMM. In particular, the PMM appears robust to illumination variations, pose and scale changes, and partial occlusions, but its computing time is higher than that of the GMM.
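The Kalman prediction mentioned above follows the standard predict/update cycle; a minimal scalar (constant-position) version, with assumed process and measurement noise values, looks like this:

```python
def kalman_filter(measurements, q=1e-3, r=0.25):
    """Scalar Kalman filter: the predict step inflates the state variance by
    the process noise q; the update step blends in each measurement weighted
    by the Kalman gain.  A tracker runs the same cycle on object position to
    forecast where the target will be in the next frame."""
    x, p = measurements[0], 1.0
    estimates = [x]
    for z in measurements[1:]:
        p += q                 # predict: uncertainty grows between frames
        k = p / (p + r)        # gain: how much to trust the new measurement
        x += k * (z - x)       # update with the innovation z - x
        p *= (1.0 - k)
        estimates.append(x)
    return estimates

est = kalman_filter([10.0, 10.4, 9.6, 10.2, 9.8])
print(est[-1])  # settles near the true position, 10
```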

12.
In object detection, a GMM is easily disturbed by lighting, similarity between the target color and the background color, target shadows, and camera height. To address these problems, this paper proposes a GMM algorithm combined with an improved HED network and OTSU double-threshold segmentation. First, the improved model applies double-threshold segmentation to the background, noise, and foreground targets of each video frame and chooses a reasonable number of Gaussian components. Then the HED network performs edge detection on the input image, and the HED edge map is combined with the double-threshold GMM detection result by a logical AND to produce the final detection result. Experiments verify that the improved algorithm achieves a higher detection rate, produces more complete contours for small targets, and detects better overall.
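The OTSU component can be sketched on a grey-level histogram; Otsu's method picks the threshold maximizing the between-class variance. The paper's double-threshold variant would run a further split, which is not shown in this single-threshold sketch.

```python
def otsu_threshold(hist):
    """Otsu's method: scan all thresholds t, treating bins <= t as one class,
    and return the t that maximises the between-class variance
    w_b * w_f * (mean_b - mean_f)^2."""
    total = sum(hist)
    grand_sum = sum(i * h for i, h in enumerate(hist))
    best_t, best_var = 0, -1.0
    w_b, sum_b = 0, 0.0
    for t, h in enumerate(hist):
        w_b += h
        sum_b += t * h
        w_f = total - w_b
        if w_b == 0 or w_f == 0:
            continue
        mean_b = sum_b / w_b
        mean_f = (grand_sum - sum_b) / w_f
        between = w_b * w_f * (mean_b - mean_f) ** 2
        if between > best_var:
            best_var, best_t = between, t
    return best_t

# Bimodal histogram: dark class in bins 0-1, bright class in bins 6-7.
print(otsu_threshold([10, 10, 0, 0, 0, 0, 10, 10]))  # 1
```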

13.
Traffic surveillance is an important issue in intelligent transportation systems, and efficient, accurate vehicle detection is a challenging problem in complex urban scenes. This paper proposes a new vehicle detection method using a spatial-relationship GMM for daytime and nighttime, based on a high-resolution camera. First, the vehicle is treated as an object composed of multiple components, including the license plate, rear lamps, and headlights, which are localized by their distinctive color, texture, and region features: a plate-color conversion model, plate-hypothesis scoring, and cascaded plate refinement localize the license plate; multi-threshold segmentation and connected-component analysis localize the rear lamps; frame differencing and geometric-feature similarity analysis localize the headlights. The detected components are then used to construct the spatial relationship with a GMM. Finally, similarity probability measures of the models, including the GMM of plate and rear lamp, the GMM of both rear lamps, and the GMM of both headlights, are adopted to localize the vehicle. Experiments in practical urban scenarios under daytime and nighttime show that the method adapts well to partial occlusion and various lighting conditions while maintaining a fast detection speed.
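The frame-differencing step used for headlight localization is the simplest of the component detectors; a sketch with an assumed intensity threshold:

```python
def frame_diff_mask(prev, curr, thresh=30):
    """Frame differencing: mark pixels whose absolute intensity change
    between consecutive frames exceeds the threshold.  Headlights at night
    produce exactly this kind of abrupt, bright local change."""
    return [[1 if abs(c - p) > thresh else 0 for p, c in zip(row_p, row_c)]
            for row_p, row_c in zip(prev, curr)]

prev = [[10, 10], [10, 10]]
curr = [[10, 200], [10, 35]]
print(frame_diff_mask(prev, curr))  # [[0, 1], [0, 0]]
```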

14.
Inappropriate lighting is often responsible for poor quality video. In most offices and homes, lighting is not designed for video conferencing. This can result in unevenly lit faces, distracting shadows, and unnatural colors. We present a method for relighting faces that reduces the effects of uneven lighting and color. Our setup consists of a compact lighting rig and a camera that is both inexpensive and inconspicuous to the user. We use unperceivable infrared (IR) lights to obtain an illumination basis of the scene. Our algorithm computes an optimally weighted combination of IR bases to minimize lighting inconsistencies in foreground areas and reduce the effects of colored monitor light. However, IR relighting alone results in images with an unnatural ghostly appearance, so a retargeting technique is presented that removes the unnatural IR effects and produces videos with substantially more balanced intensity and color than the original video.

15.
Fast occluded object tracking by a robust appearance filter   Cited by: 10 (self-citations: 0, citations by others: 10)
We propose a new method for object tracking in image sequences using template matching. To update the template, appearance features are smoothed temporally by robust Kalman filters, one per pixel. The resistance of the resulting template to partial occlusions enables the accurate detection and handling of more severe occlusions. Abrupt changes of lighting conditions can also be handled, especially when photometric invariant color features are used. The method has only a few parameters and is computationally fast enough to track objects in real time.
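The robustness of the per-pixel filters comes from down-weighting large innovations so an occluding pixel cannot drag the template toward the occluder. A simplified sketch, with a clipped innovation standing in for the full robust Kalman filter; the gain and clip values are assumptions:

```python
def robust_template_update(estimate, observation, gain=0.3, clip=20.0):
    """Robust per-pixel template smoothing: small innovations update the
    appearance estimate normally, but a large innovation (likely an
    occluding object) is clipped so the template barely moves and the
    occlusion can be detected against the preserved appearance."""
    innovation = observation - estimate
    innovation = max(-clip, min(clip, innovation))
    return estimate + gain * innovation

print(robust_template_update(100.0, 104.0))  # normal update toward 104
print(robust_template_update(100.0, 250.0))  # occlusion: clipped, barely moves
```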

16.
The perceived colors of an image seen on a self‐luminous display are affected by ambient illumination. The ambient light reflected from the display faceplate is mixed with the image‐forming light emitted by the display. In addition to this direct physical effect of viewing flare, ambient illumination causes perceptual changes by affecting the adaptation state of the viewer's visual system. This paper first discusses these effects and how they can be compensated, outlining a display system able to adjust its output based on prevailing lighting conditions. The emphasis is on compensating for the perceptual effects of viewing conditions by means of color‐appearance modeling. The effects of varying the degree of chromatic adaptation parameter D and the surround compensation parameters c and Nc of the CIECAM97s color‐appearance model were studied in psychophysical experiments. In these memory‐based paired comparison experiments, the observers judged the appearance of images shown on an LCD under three different ambient‐illumination conditions. The dependence of the optimal parameter values on the level of ambient illumination was evident. The results of the final experiment, using a category scaling technique, showed the benefit of using the color‐appearance model with the optimized parameters in compensating for the perceptual changes caused by varying ambient illumination.

17.
This paper presents an automated approach to recovering the true color of objects on the seafloor in images collected from multiple perspectives by an autonomous underwater vehicle (AUV) during the construction of three‐dimensional (3D) seafloor models and image mosaics. When capturing images underwater, the water column induces several effects on light that are typically negligible in air, such as color‐dependent attenuation and backscatter. AUVs must typically carry artificial lighting when operating at depths below 20‐30 m; the lighting pattern generated is usually not spatially consistent. These effects cause problems for human interpretation of images, limit the ability of using color to identify benthic biota or quantify changes over multiple dives, and confound computer‐based techniques for clustering and classification. Our approach exploits the 3D structure of the scene generated using structure‐from‐motion and photogrammetry techniques to provide basic spatial data to an underwater image formation model. Parameters that are dependent on the properties of the water column are estimated from the image data itself, rather than using fixed in situ infrastructure, such as reflectance panels or detailed data on water constituents. The model accounts for distance‐based attenuation and backscatter, camera vignetting and the artificial lighting pattern, recovering measurements of the true color (reflectance) and thus allows us to approximate the appearance of the scene as if imaged in air and illuminated from above. Our method is validated against known color targets using imagery collected in different underwater environments by two AUVs that are routinely used as part of a benthic habitat monitoring program.

18.
The appearance of an object greatly changes under different lighting conditions. Even so, previous studies have demonstrated that the appearance of an object under varying illumination conditions can be represented by a linear subspace. A set of basis images spanning such a linear subspace can be obtained by applying the principal component analysis (PCA) for a large number of images taken under different lighting conditions. Since little is known about how to sample the appearance of an object in order to correctly obtain its basis images, it was a common practice to use as many input images as possible. In this study, we present a novel method for analytically obtaining a set of basis images of an object for varying illumination from input images of the object taken properly under a set of light sources, such as point light sources or extended light sources. Our proposed method incorporates the sampling theorem of spherical harmonics for determining a set of lighting directions to efficiently sample the appearance of an object. We further consider the issue of aliasing caused by insufficient sampling of the object's appearance. In particular, we investigate the effectiveness of using extended light sources for modeling the appearance of an object under varying illumination without suffering the aliasing caused by insufficient sampling of its appearance.

19.
Image Appearance Exploration by Model-Based Navigation   Cited by: 1 (self-citations: 0, citations by others: 1)
Changing the appearance of an image can be a complex and non-intuitive task. Often the target colors and look of the image are known only vaguely, and many trials are needed to reach the desired result. Moreover, the effect of a specific change on an image is difficult to envision, since one must take spatial image considerations into account along with the color constraints. The tools provided by today's image-processing applications can be highly technical and non-intuitive, with their various gauges and knobs.
In this paper we introduce a method for changing image appearance by navigation, focusing on recoloring images. The user visually navigates a high-dimensional space of possible color manipulations of an image, either exploring it for inspiration or refining choices by navigating into subregions of this space toward a specific goal. This navigation is enabled by modeling the chroma channels of an image's colors with a Gaussian mixture model (GMM). The Gaussians model both color and spatial image coordinates, providing a high-dimensional parameterization of a rich variety of color manipulations. The user's actions are translated into transformations of the model's parameters, which recolor the image. This approach provides both inspiration and intuitive navigation in the complex space of image color manipulations.

20.
A caption-extraction method is proposed that combines text edge features, color information, and the spatio-temporal characteristics of video. Edge detection locates the captions and thereby the text color; a global Gaussian mixture model is built over that color, and once built, the model directly extracts the text color layer from the frames in which the caption changes. To decide whether a caption has changed, an "AND" mask-image test is proposed. Experimental results show that for video with complex backgrounds and captions in one or two colors, the method extracts captions well.
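The "AND" mask-image idea can be sketched directly: intersect the binary caption masks of two frames and watch how many pixels survive; a sharp drop in overlap suggests the caption changed. The function names and the drop ratio are assumptions, not the paper's exact criterion.

```python
def and_mask(a, b):
    """Pixel-wise AND of two binary masks of equal size."""
    return [[x & y for x, y in zip(ra, rb)] for ra, rb in zip(a, b)]

def caption_changed(mask_prev, mask_curr, keep_ratio=0.5):
    """If the AND of the two caption masks keeps less than keep_ratio of the
    previous caption's pixels, declare that the caption has changed."""
    kept = sum(sum(row) for row in and_mask(mask_prev, mask_curr))
    prev = sum(sum(row) for row in mask_prev)
    return prev > 0 and kept / prev < keep_ratio

same = [[1, 1, 0], [0, 1, 1]]
moved = [[0, 0, 1], [1, 0, 0]]
print(caption_changed(same, same))   # False: caption persists
print(caption_changed(same, moved))  # True: overlap collapsed
```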
