Retrieved 20 similar articles (search time: 93 ms)
1.
Background modeling of complex scenes, moving-object detection, and the detection and suppression of shadows cast by moving objects are widely used in intelligent surveillance, robot vision, and video conferencing. For the moving-foreground detection stage, an improved Gaussian mixture algorithm is presented for background modeling: different Gaussian parameter-update mechanisms are applied to each pixel according to how erratic its value history is, easing the heavy computational load of the standard mixture algorithm. For shadow detection and suppression, a Gaussian-mixture-based shadow suppression algorithm is proposed: it first uses the properties of shadows in the HSV color space to decide whether a pixel detected as moving foreground is a candidate shadow, and then clusters all candidate shadow values with a Gaussian mixture shadow model to complete the suppression. Simulation results show that the algorithm suppresses the influence of shadows on moving-object detection more effectively and runs in real time.
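As a rough illustration of the per-pixel mixture update underlying such background models (a textbook Stauffer-Grimson-style step for one grayscale pixel, not the paper's improved variant with per-pixel adaptive updates), a minimal NumPy sketch:

```python
import numpy as np

def update_mog(means, vars_, weights, x, alpha=0.05, match_sigma=2.5):
    """One online update of a per-pixel Gaussian mixture (grayscale sketch).

    means, vars_, weights: length-K arrays describing the K components.
    x: the new pixel value. Returns updated parameters and a match flag.
    """
    d = np.abs(x - means)
    matches = d < match_sigma * np.sqrt(vars_)
    if matches.any():
        k = int(np.argmax(matches))           # first matching component
        means[k] += alpha * (x - means[k])
        vars_[k] += alpha * ((x - means[k]) ** 2 - vars_[k])
        weights[:] = (1 - alpha) * weights    # decay all weights...
        weights[k] += alpha                   # ...and reinforce the match
    else:
        k = int(np.argmin(weights))           # replace the weakest component
        means[k], vars_[k] = x, 30.0 ** 2     # wide initial variance
        weights[k] = alpha
    weights[:] /= weights.sum()
    return means, vars_, weights, bool(matches.any())
```

Pixels whose values match a high-weight, low-variance component are treated as background; the paper's contribution is to vary this update per pixel according to how unstable the pixel's history is.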
2.
3.
To address the complex background modeling and heavy computation involved in detecting moving objects under a moving camera, a motion-saliency-based detection method is proposed that achieves accurate moving-object detection while avoiding complex background modeling. Simulating the attention mechanism of the human visual system, the method analyzes the motion characteristics of background and foreground during camera translation and computes the saliency of the video scene to detect moving objects in dynamic scenes. First, optical flow extracts the motion features and a 2-D Gaussian convolution suppresses the background's motion texture; then histogram statistics measure the global saliency of the motion features, and foreground and background color information is extracted from the resulting motion saliency map; finally, the saliency map is refined with a Bayesian method to obtain the salient moving objects. Experiments on public benchmark videos show that the method suppresses background motion noise while highlighting and accurately detecting the moving objects in the scene.
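The histogram-based global saliency step can be sketched with a generic rarity score (not the paper's exact measure): motion magnitudes that fall into sparsely populated histogram bins are marked salient.

```python
import numpy as np

def motion_saliency(mag, n_bins=16):
    """Rarity-based global saliency of a motion-magnitude map: pixels whose
    motion falls into sparsely populated histogram bins score high."""
    hist, edges = np.histogram(mag, bins=n_bins,
                               range=(mag.min(), mag.max() + 1e-6))
    p = hist / hist.sum()                             # bin probabilities
    idx = np.clip(np.digitize(mag, edges) - 1, 0, n_bins - 1)
    sal = -np.log(p[idx] + 1e-12)                     # rare motion -> salient
    return (sal - sal.min()) / (np.ptp(sal) + 1e-12)  # normalize to [0, 1]
```

On a mostly static scene the few moving pixels dominate the normalized map, which is the behavior such saliency measures rely on.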
4.
5.
6.
When the foreground object is close to the background in color, detection with a Gaussian mixture model alone is prone to misclassification. To improve the robustness of the segmentation, a moving-object detection algorithm is proposed that fuses a Gaussian mixture model with the wavelet transform. The wavelet transform extracts texture information from the image, while the Gaussian mixture model fits the background. Fusing the two, the texture information compensates for the color information, ensuring the stability and convergence of the model during online background updates and making up for the misclassification that occurs when foreground and background colors are similar. Experimental results show that the method achieves higher segmentation accuracy than the classical Gaussian mixture model.
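The texture cue can be illustrated with a single-level 2-D Haar transform (a generic choice; the abstract does not commit to a particular wavelet): the detail-coefficient energy is near zero in flat regions and large on texture, so it discriminates where color alone fails.

```python
import numpy as np

def haar_texture_energy(img):
    """Single-level 2-D Haar transform; returns the detail-coefficient
    energy per 2x2 block as a simple texture feature."""
    h, w = img.shape[0] // 2 * 2, img.shape[1] // 2 * 2
    a = img[:h, :w].astype(float)
    tl, tr = a[0::2, 0::2], a[0::2, 1::2]
    bl, br = a[1::2, 0::2], a[1::2, 1::2]
    lh = (tl + tr - bl - br) / 2.0   # horizontal detail
    hl = (tl - tr + bl - br) / 2.0   # vertical detail
    hh = (tl - tr - bl + br) / 2.0   # diagonal detail
    return lh ** 2 + hl ** 2 + hh ** 2
```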
7.
8.
The Gaussian mixture model has become the most widely used, and effectively the standard, background model for moving-object detection by background subtraction in video. This paper first analyzes the theoretical framework and performance of the Gaussian mixture model and the problems it still needs to solve, and then proposes a moving-object detection algorithm that combines the Gaussian mixture model with multiple features. Experiments show that the algorithm achieves good detection results and adapts well to the environment.
9.
In moving-object detection, background modeling is crucial to object extraction, and the Gaussian mixture model is currently one of the most popular background-modeling methods. Two improvements are made to address its shortcomings: (1) the mixture model treats each pixel in isolation, which is computationally expensive for high-resolution images; block-based modeling is introduced, which markedly speeds up detection and also takes the spatial relations between pixels into account; (2) when a moving object stays at one position in the scene for too long, the model absorbs the foreground into the background and the object disappears; depending on whether the object is moving or stationary, the algorithm decides whether to update the whole frame or only the background regions. Experiments show that, without affecting recognition, the algorithm significantly increases detection speed, reduces some noise, and effectively prevents the object from being absorbed into the background, preserving the continuity of the moving object.
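Improvement (1) can be sketched by pooling the frame into block means, so one mixture is maintained per block rather than per pixel (a minimal illustration of the idea; the block size is an assumed parameter):

```python
import numpy as np

def block_means(frame, bs=4):
    """Reduce a frame to per-block means so a Gaussian mixture is kept per
    block rather than per pixel: fewer models to update, and each block
    value already pools spatial neighbourhood information."""
    h, w = frame.shape[0] // bs * bs, frame.shape[1] // bs * bs
    a = frame[:h, :w].astype(float)
    return a.reshape(h // bs, bs, w // bs, bs).mean(axis=(1, 3))
```

For a bs x bs block size this cuts the number of models by a factor of bs squared, which is where the reported speed-up comes from.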
10.
11.
W. Ketchantang, S. Derrode, L. Martin, S. Bourennane 《Machine Vision and Applications》2008,19(5-6):457-466
To track objects in video sequences, many studies have characterized the target by its color distribution. Most often, the Gaussian mixture model (GMM) is used to represent the object's color density. In this paper, we propose to extend the normality assumption to more general families of distributions drawn from Pearson's system. Precisely, we propose a method called the Pearson mixture model (PMM), used in conjunction with a Gaussian copula, which is dynamically updated to adapt to appearance changes of the object during the sequence. This model is combined with Kalman filtering to predict the position of the object in the next frame. Experimental results on gray-level and color video sequences show tracking improvements over the classical GMM. In particular, the PMM appears robust to illumination variations, pose and scale changes, and partial occlusions, but its computing time is higher than that of the GMM.
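The position-prediction stage is standard Kalman filtering; a minimal constant-velocity filter over image coordinates (a generic sketch with arbitrarily chosen noise levels, not the authors' exact state model):

```python
import numpy as np

class ConstVelKalman:
    """Constant-velocity Kalman filter over (x, y, vx, vy): predicts the
    object's position in the next frame from past measurements."""
    def __init__(self, x0, y0, q=1e-2, r=1.0):
        self.s = np.array([x0, y0, 0.0, 0.0])
        self.P = np.eye(4) * 10.0
        self.F = np.array([[1, 0, 1, 0], [0, 1, 0, 1],
                           [0, 0, 1, 0], [0, 0, 0, 1]], float)
        self.H = np.array([[1, 0, 0, 0], [0, 1, 0, 0]], float)
        self.Q, self.R = np.eye(4) * q, np.eye(2) * r

    def predict(self):
        self.s = self.F @ self.s
        self.P = self.F @ self.P @ self.F.T + self.Q
        return self.s[:2]                       # predicted (x, y)

    def update(self, z):
        y = np.asarray(z, float) - self.H @ self.s   # innovation
        S = self.H @ self.P @ self.H.T + self.R
        K = self.P @ self.H.T @ np.linalg.inv(S)     # Kalman gain
        self.s = self.s + K @ y
        self.P = (np.eye(4) - K @ self.H) @ self.P
```

After a few predict/update cycles on a steadily moving target, the predicted position runs ahead of the last measurement, which is what lets the color model be evaluated in the right region of the next frame.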
12.
During object detection, the GMM is easily disturbed by lighting, similarity between object and background colors, object shadows, and camera height. To address these problems, this paper proposes a GMM algorithm combining an improved HED network with OTSU double-threshold segmentation. First, the improved model applies double-threshold segmentation to the background, noise, and foreground objects of each video frame, choosing the number of Gaussian components appropriately. Second, the HED network performs edge detection on the input image, and the edge map is combined with the double-threshold GMM detection result by a logical AND to obtain the final detection. Experiments verify that the improved algorithm has a higher detection rate, produces more complete contours for small objects, and gives better detection results overall.
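The classic single-threshold OTSU criterion, which the double-threshold variant extends, can be sketched as follows (the paper's two-threshold selection and the HED edge network are not reproduced here):

```python
import numpy as np

def otsu_threshold(gray, n_bins=256):
    """Otsu's method: pick the threshold maximizing the between-class
    variance of the gray-level histogram."""
    hist, _ = np.histogram(gray, bins=n_bins, range=(0, 256))
    p = hist / hist.sum()
    omega = np.cumsum(p)                       # class-0 probability
    mu = np.cumsum(p * np.arange(n_bins))      # class-0 cumulative mean
    mu_t = mu[-1]                              # global mean
    denom = omega * (1 - omega)
    denom[denom == 0] = np.nan                 # ignore degenerate splits
    sigma_b2 = (mu_t * omega - mu) ** 2 / denom
    return int(np.nanargmax(sigma_b2))
```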
13.
Jun-fang Song 《International Journal of Parallel Programming》2018,46(5):859-872
Traffic surveillance is an important issue in intelligent transportation systems. Efficient and accurate vehicle detection is a challenging problem in complex urban traffic surveillance. This paper therefore proposes a new vehicle detection method using a spatial-relationship GMM for daytime and nighttime, based on a high-resolution camera. First, the vehicle is treated as an object composed of multiple components: the license plate, rear lamps, and headlights. These components are localized using their distinctive color, texture, and region features. License-plate localization uses a plate-color conversion model, plate-hypothesis scoring, and cascaded plate refinement. Rear-lamp localization uses multi-threshold segmentation and connected-component analysis. Headlight localization uses frame differencing and geometric-feature similarity analysis. The detected components are then used to construct spatial relationships with a GMM. Finally, similarity probability measures between the model and the GMMs, including the GMM of plate and rear lamp, the GMM of both rear lamps, and the GMM of both headlights, are used to localize the vehicle. Experiments in practical urban scenarios were carried out in daytime and nighttime. The method adapts well to partial occlusion and various lighting conditions while maintaining a fast detection speed.
14.
Oliver Wang, James Davis, Erika Chuang, Ian Rickard, Krystle de Mesa, Chirag Dave 《Computer Graphics Forum》2008,27(2):271-279
Inappropriate lighting is often responsible for poor-quality video. In most offices and homes, lighting is not designed for video conferencing. This can result in unevenly lit faces, distracting shadows, and unnatural colors. We present a method for relighting faces that reduces the effects of uneven lighting and color. Our setup consists of a compact lighting rig and a camera that is both inexpensive and inconspicuous to the user. We use imperceptible infrared (IR) lights to obtain an illumination basis of the scene. Our algorithm computes an optimally weighted combination of IR bases to minimize lighting inconsistencies in foreground areas and reduce the effects of colored monitor light. However, IR relighting alone results in images with an unnatural, ghostly appearance, so a retargeting technique is presented that removes the unnatural IR effects and produces videos with substantially more balanced intensity and color than the original.
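The optimally weighted combination of IR bases can be illustrated as a least-squares fit of the basis images to a target appearance (a simplified sketch; the paper restricts the objective to foreground areas and also compensates for monitor color):

```python
import numpy as np

def relight_weights(bases, target):
    """Least-squares weights w minimizing ||B w - target||, where B stacks
    the flattened basis images column-wise. Returns the weights and the
    relit image B w reshaped to the target's shape."""
    B = np.stack([b.ravel() for b in bases], axis=1)
    w, *_ = np.linalg.lstsq(B, target.ravel(), rcond=None)
    relit = (B @ w).reshape(target.shape)
    return w, relit
```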
15.
Fast occluded object tracking by a robust appearance filter
Nguyen HT Smeulders AW 《IEEE transactions on pattern analysis and machine intelligence》2004,26(8):1099-1104
We propose a new method for object tracking in image sequences using template matching. To update the template, appearance features are smoothed temporally by robust Kalman filters, one for each pixel. The resistance of the resulting template to partial occlusions enables the accurate detection and handling of more severe occlusions. Abrupt changes in lighting conditions can also be handled, especially when photometric-invariant color features are used. The method has only a few parameters and is computationally fast enough to track objects in real time.
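The per-pixel temporal smoothing can be sketched as a scalar Kalman filter per pixel with an innovation gate that rejects outlying observations, e.g. pixels covered by an occluder (a simplified illustration of robust filtering, not the authors' exact formulation):

```python
import numpy as np

def robust_pixel_kalman(template, var, obs, q=1.0, r=4.0, gate=3.0):
    """Temporally smooth each template pixel with a scalar Kalman filter;
    innovations beyond `gate` standard deviations are rejected so an
    occluder does not corrupt the template."""
    var = var + q                                   # predict step
    innov = obs - template
    s = var + r                                     # innovation variance
    ok = np.abs(innov) <= gate * np.sqrt(s)         # gating: drop outliers
    k = var / s                                     # Kalman gain
    template = np.where(ok, template + k * innov, template)
    var = np.where(ok, (1 - k) * var, var)
    return template, var
```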
16.
Janne S. Laine 《Journal of the Society for Information Display》2003,11(2):359-369
Abstract— The perceived colors of an image seen on a self‐luminous display are affected by ambient illumination. The ambient light reflected from the display faceplate is mixed with the image‐forming light emitted by the display. In addition to this direct physical effect of viewing flare, ambient illumination causes perceptual changes by affecting the adaptation state of the viewer's visual system. This paper first discusses these effects and how they can be compensated, outlining a display system able to adjust its output based on prevailing lighting conditions. The emphasis is on compensating for the perceptual effects of viewing conditions by means of color‐appearance modeling. The effects of varying the degree of chromatic adaptation parameter D and the surround compensation parameters c and Nc of the CIECAM97s color‐appearance model were studied in psychophysical experiments. In these memory‐based paired comparison experiments, the observers judged the appearance of images shown on an LCD under three different ambient‐illumination conditions. The dependence of the optimal parameter values on the level of ambient illumination was evident. The results of the final experiment, using a category scaling technique, showed the benefit of using the color‐appearance model with the optimized parameters in compensating for the perceptual changes caused by varying ambient illumination.
17.
Mitch Bryson, Matthew Johnson‐Roberson, Oscar Pizarro, Stefan B. Williams 《Journal of Field Robotics》2016,33(6):853-874
This paper presents an automated approach to recovering the true color of objects on the seafloor in images collected from multiple perspectives by an autonomous underwater vehicle (AUV) during the construction of three‐dimensional (3D) seafloor models and image mosaics. When capturing images underwater, the water column induces several effects on light that are typically negligible in air, such as color‐dependent attenuation and backscatter. AUVs must typically carry artificial lighting when operating at depths below 20‐30 m; the lighting pattern generated is usually not spatially consistent. These effects cause problems for human interpretation of images, limit the ability to use color to identify benthic biota or quantify changes over multiple dives, and confound computer‐based techniques for clustering and classification. Our approach exploits the 3D structure of the scene, generated using structure‐from‐motion and photogrammetry techniques, to provide basic spatial data to an underwater image-formation model. Parameters that depend on the properties of the water column are estimated from the image data itself, rather than from fixed in situ infrastructure such as reflectance panels or detailed data on water constituents. The model accounts for distance‐based attenuation and backscatter, camera vignetting, and the artificial lighting pattern, recovering measurements of the true color (reflectance) and thus allowing us to approximate the appearance of the scene as if imaged in air and illuminated from above. Our method is validated against known color targets using imagery collected in different underwater environments by two AUVs that are routinely used as part of a benthic habitat monitoring program.
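A much-simplified version of such a distance-based image-formation model can be inverted per color channel as below; the paper's full model additionally estimates its parameters from the imagery itself and accounts for vignetting and the artificial lighting pattern, which this sketch omits. Here `beta` (attenuation) and `backscatter` are assumed known per-channel coefficients:

```python
import numpy as np

def correct_attenuation(raw, distance, beta, backscatter):
    """Invert a simple underwater image-formation model per channel:
    raw = true * exp(-beta * d) + backscatter * (1 - exp(-beta * d)).
    raw: (H, W, 3) image; distance: (H, W) per-pixel range from the 3D
    model; beta, backscatter: (3,) per-channel coefficients."""
    t = np.exp(-beta[None, None, :] * distance[..., None])  # transmission
    return (raw - backscatter[None, None, :] * (1 - t)) / t
```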
18.
The appearance of an object greatly changes under different lighting conditions. Even so, previous studies have demonstrated that the appearance of an object under varying illumination can be represented by a linear subspace. A set of basis images spanning such a linear subspace can be obtained by applying principal component analysis (PCA) to a large number of images taken under different lighting conditions. Since little is known about how to sample the appearance of an object in order to correctly obtain its basis images, it was common practice to use as many input images as possible. In this study, we present a novel method for analytically obtaining a set of basis images of an object under varying illumination from input images of the object taken properly under a set of light sources, such as point light sources or extended light sources. Our proposed method incorporates the sampling theorem of spherical harmonics to determine a set of lighting directions that efficiently sample the appearance of an object. We further consider the issue of aliasing caused by insufficient sampling of the object's appearance. In particular, we investigate the effectiveness of using extended light sources for modeling the appearance of an object under varying illumination without suffering from this aliasing.
19.
Image Appearance Exploration by Model-Based Navigation
Changing the appearance of an image can be a complex and non-intuitive task. Often the target colors and look are known only vaguely, and many trials are needed to reach the desired result. Moreover, the effect of a specific change on an image is difficult to envision, since one must take spatial image considerations into account along with the color constraints. The tools provided by today's image-processing applications can be highly technical and non-intuitive, involving various gauges and knobs.
In this paper we introduce a method for changing image appearance by navigation, focusing on recoloring images. The user visually navigates a high-dimensional space of possible color manipulations of an image, either exploring it for inspiration or refining choices by navigating into subregions of this space toward a specific goal. This navigation is enabled by modeling the chroma channels of the image's colors with a Gaussian Mixture Model (GMM). The Gaussians model both color and spatial image coordinates, providing a high-dimensional parameterization of a rich variety of color manipulations. The user's actions are translated into transformations of the model's parameters, which recolor the image. This approach provides both inspiration and intuitive navigation in the complex space of image color manipulations.
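The recolor-by-parameter-transformation idea can be sketched in chroma space: shift one Gaussian's mean and move each pixel in proportion to its responsibility for that component. Isotropic components and precomputed GMM parameters are assumed here; the paper's Gaussians also include spatial coordinates:

```python
import numpy as np

def recolor(chroma, means, sigmas, weights, comp, delta):
    """Shift the mean of one chroma Gaussian by `delta` and move each pixel
    proportionally to its responsibility for that component.
    chroma: (N, 2) array of e.g. (a, b) values; isotropic components."""
    d2 = ((chroma[:, None, :] - means[None]) ** 2).sum(-1)   # (N, K)
    logp = (np.log(weights)[None] - d2 / (2 * sigmas[None] ** 2)
            - np.log(sigmas[None] ** 2))
    p = np.exp(logp - logp.max(1, keepdims=True))
    resp = p / p.sum(1, keepdims=True)                       # responsibilities
    return chroma + resp[:, [comp]] * np.asarray(delta)
```

Because pixels are moved softly, by responsibility rather than by a hard assignment, nearby colors transition smoothly instead of tearing at cluster boundaries.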