Similar Documents
20 similar documents found (search time: 0 ms)
1.
Fast and accurate moving object segmentation in dynamic scenes is the first step in many computer vision applications. In this paper, we propose a new background modeling method for moving object segmentation based on a dynamic matrix and spatio-temporal analysis of scenes. Our method copes with several challenges in this field, and a new algorithm is proposed to detect and remove cast shadows. A comparative study with quantitative evaluations shows that the proposed approach detects foreground robustly and accurately in videos recorded by a static camera under several challenging conditions. A highway control and management system called RoadGuard is presented to demonstrate the robustness of our method. Our system can monitor highways by detecting unusual events such as vehicles stopping suddenly on the road, vehicles parked in emergency zones, or illegal behavior such as leaving the road. Moreover, RoadGuard can help manage highways by logging the date and time of overloaded roads.
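The abstract above does not spell out the dynamic-matrix model, so as a rough illustration of the background-subtraction family it belongs to, here is a minimal running-average background model; the function names, blending factor `alpha`, and threshold are illustrative assumptions, not the paper's values.

```python
# Illustrative sketch only: a generic running-average background model with a
# thresholded foreground mask.  Frames are plain lists of pixel intensities.

def update_background(background, frame, alpha=0.05):
    """Blend the new frame into the background model (exponential average)."""
    return [(1 - alpha) * b + alpha * f for b, f in zip(background, frame)]

def foreground_mask(background, frame, threshold=30):
    """Mark pixels whose deviation from the background exceeds the threshold."""
    return [abs(f - b) > threshold for b, f in zip(background, frame)]

bg = [100.0] * 8                                  # static scene at intensity 100
frame = [100, 101, 99, 200, 210, 100, 100, 98]    # bright object over pixels 3-4
mask = foreground_mask(bg, frame)
print(mask)   # → [False, False, False, True, True, False, False, False]
bg = update_background(bg, frame)                 # slowly absorb slow changes
```

A real system would add per-pixel variance and a shadow test on top of this loop; the abstract's shadow-removal algorithm is not reproduced here.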

2.
3.
Bayesian modeling of dynamic scenes for object detection   (Total citations: 11; self-citations: 0; citations by others: 11)
Accurate detection of moving objects is an important precursor to stable tracking or recognition. In this paper, we present an object detection scheme that has three innovations over existing approaches. First, the model of the intensities of image pixels as independent random variables is challenged, and it is asserted that useful correlation exists in the intensities of spatially proximal pixels. This correlation is exploited to sustain high levels of detection accuracy in the presence of dynamic backgrounds. By using a nonparametric density estimation method over a joint domain-range representation of image pixels, multimodal spatial uncertainties and complex dependencies between the domain (location) and range (color) are directly modeled, and the background is represented as a single probability density. Second, temporal persistence is proposed as a detection criterion. Unlike previous approaches, which detect objects by building adaptive models of the background alone, the foreground is also modeled to augment the detection of objects (without explicit tracking), since objects detected in the preceding frame contain substantial evidence for detection in the current frame. Finally, the background and foreground models are used competitively in a MAP-MRF decision framework that stresses spatial context as a condition for detecting interesting objects; the posterior function is maximized efficiently by finding the minimum cut of a capacitated graph. Experimental validation of the proposed method is performed and presented on a diverse set of dynamic scenes.
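A minimal sketch of the joint domain-range kernel density idea described above, assuming Gaussian kernels and illustrative bandwidths (the abstract does not give the exact kernel or bandwidth selection): each background sample lives in (x, y, intensity) space, and a pixel's background likelihood averages kernels over all samples, so both spatial proximity and color similarity count.

```python
import math

# Joint domain-range KDE sketch.  Samples and query pixels are (x, y, value)
# triples; h_space and h_range are assumed bandwidths, not the paper's values.

def kde_likelihood(pixel, samples, h_space=2.0, h_range=10.0):
    """Average of Gaussian kernels over the background samples."""
    x, y, v = pixel
    total = 0.0
    for sx, sy, sv in samples:
        d_space = ((x - sx) ** 2 + (y - sy) ** 2) / (2 * h_space ** 2)
        d_range = ((v - sv) ** 2) / (2 * h_range ** 2)
        total += math.exp(-(d_space + d_range))
    return total / len(samples)

# Background samples: a flat gray 3x3 patch around intensity 100.
samples = [(x, y, 100.0) for x in range(3) for y in range(3)]
bg_pixel = (1, 1, 102.0)   # close to the model in both location and color
fg_pixel = (1, 1, 220.0)   # same location, very different color
print(kde_likelihood(bg_pixel, samples) > kde_likelihood(fg_pixel, samples))
```

Thresholding this likelihood (and competing it against a foreground density, as the paper does) yields the per-pixel background/foreground decision.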

4.
Background modeling and subtraction are core components in video processing. One aims to recover and continuously update a representation of the scene that is compared with the current input to perform subtraction. Most existing methods treat each pixel independently and attempt to model background perturbation through statistical models such as a mixture of Gaussians. While such methods perform satisfactorily in many scenarios, they do not model the relationships and correlation among nearby pixels. Such correlation exists both in space and across time, especially when the scene contains image structures moving across space; waving trees, beaches, escalators, and natural scenes with rain or snow are examples. In this paper, we propose a method for differentiating between image structures and motion that are persistent and repeated and those that are “new”. To capture the appearance characteristics of such scenes, we propose the use of an appropriate subspace created from image structures; their dynamical characteristics are captured by a prediction mechanism in that subspace. Since the model must adapt to long-term changes in the background, an incremental method for fast online adaptation of the model parameters is proposed. Given such adaptive models, robust and meaningful detection measures that consider both structural and motion changes are derived. Experimental results, including qualitative and quantitative comparisons with existing background modeling/subtraction techniques, demonstrate the promising performance of the proposed framework on complex backgrounds.

5.
The maintenance of a relevant background under various scene changes is crucial to detecting foregrounds robustly. We propose a background maintenance method for dynamic scenes that include global intensity level changes caused by changes in illumination conditions and camera settings. If the global intensity level changes abruptly, conventional background models cannot discriminate true foreground pixels from the background. The proposed method adaptively modifies the background model by estimating the level changes. Because there are changes caused by moving objects as well as global intensity level changes, we estimate the dominant level change over the whole image with mean shift. The problem caused by saturated pixels is then handled by an additional scheme. In experiments on dynamic scenes, our method outperforms previous methods through adaptive background maintenance and the handling of saturated pixels.
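The dominant-level-change estimate can be sketched as a 1-D mean shift over per-pixel intensity differences, in the spirit of the abstract; the flat kernel and bandwidth below are illustrative assumptions. Outlier differences caused by moving objects bias the plain mean, while the mode found by mean shift tracks the global change.

```python
# 1-D mean shift with a flat kernel over per-pixel frame differences.
# Bandwidth and iteration count are illustrative choices, not the paper's.

def mean_shift_mode(values, bandwidth, iterations=20):
    """Find the mode of a 1-D sample set by mean shift."""
    mode = sum(values) / len(values)          # start from the (biased) mean
    for _ in range(iterations):
        window = [v for v in values if abs(v - mode) <= bandwidth]
        if not window:
            break
        mode = sum(window) / len(window)      # shift toward the local mean
    return mode

prev = [100.0] * 20
curr = [130.0] * 16 + [250.0] * 4   # +30 global change; 4 pixels hide a mover
diffs = [c - p for c, p in zip(curr, prev)]
print(mean_shift_mode(diffs, bandwidth=30.0))   # → 30.0, not the mean of 54.0
```

Subtracting the recovered mode from the background model compensates for the global change without absorbing the moving object.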

6.
7.
This paper explores the manipulation of time in video editing, enabling control over the chronological time of events. These time manipulations include slowing down (or postponing) some dynamic events while speeding up (or advancing) others. When a video camera scans a scene, aligning all the events to a single time interval results in a panoramic movie. Time manipulations are obtained by first constructing an aligned space-time volume from the input video, and then sweeping a continuous 2D slice (time front) through that volume, generating a new sequence of images. For dynamic scenes, aligning the input video frames poses an important challenge. We propose to align dynamic scenes using a new notion of "dynamics constancy", which is more appropriate for this task than the traditional assumption of "brightness constancy". Another challenge is to avoid visual seams inside moving objects and other visual artifacts that result from sweeping the space-time volume with time fronts of arbitrary geometry. To avoid such artifacts, we formulate the problem of finding the optimal time-front geometry as one of finding a minimal cut in a 4D graph, and solve it using max-flow methods.

8.
A motion segmentation algorithm is introduced. The algorithm is based on the assumption of one coherent moving area (without holes) on a static background. It performs coarse-to-fine, pyramid-based boundary refinement that classifies blocks into three classes: inside, border, and outside.

9.
10.
Yan  Qingsen  Zhu  Yu  Zhang  Yanning 《Multimedia Tools and Applications》2019,78(9):11487-11505

The irradiance range of a real-world scene is often beyond the capability of digital cameras. Therefore, High Dynamic Range (HDR) images can be generated by fusing differently exposed images of the same scene. However, moving objects pose the most severe problem in HDR imaging, leading to annoying ghost artifacts in the fused image. In this paper, we present a novel HDR technique to address the moving-object problem. Since the input low dynamic range (LDR) images captured by a camera act as static, linearly related backgrounds with moving objects during each individual exposure, we formulate the detection of foreground moving objects as a rank minimization problem. Meanwhile, to eliminate image blurring caused by slight background changes across the LDR images, we further rectify the background by employing irradiance alignment. Experiments on image sequences show that the proposed algorithm achieves significant gains in synthesized HDR image quality compared to state-of-the-art methods.
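As a toy two-exposure version of the observation that static backgrounds are linearly related across exposures, the sketch below flags ghost pixels as large residuals from a robust (median-ratio) linear fit. The real method solves a rank minimization over many exposures; the threshold here is an illustrative assumption.

```python
# Toy ghost detection for two exposures of a mostly static scene.
# Background pixels obey long ≈ scale * short; movers violate the fit.

def detect_ghost(short_exp, long_exp, threshold=0.2):
    """Flag pixels whose relative residual from the linear fit is large."""
    ratios = sorted(l / s for s, l in zip(short_exp, long_exp))
    scale = ratios[len(ratios) // 2]          # median ratio: robust to movers
    return [abs(l - scale * s) / (scale * s) > threshold
            for s, l in zip(short_exp, long_exp)]

short = [50, 60, 40, 55, 45, 50]
long_ = [100, 120, 80, 30, 90, 100]   # pixel 3: an object moved between shots
print(detect_ghost(short, long_))     # → only pixel 3 is flagged
```

With many exposures, stacking the linearized frames as matrix rows makes the static background low-rank, which is what the paper's rank-minimization formulation exploits.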


11.
Real-time modeling and rendering of raining scenes   (Total citations: 1; self-citations: 0; citations by others: 1)
Real-time modeling and rendering of a realistic raining scene is a challenging task, because the visual effects of rain involve complex physical mechanisms reflecting the physical, optical, and statistical characteristics of raindrops. In this paper, we propose a set of new methods to model raining scenes according to these physical mechanisms. First, adhering to the physical characteristics of raindrops, we model the shapes, movements, and intensity of raindrops in different situations. Then, based on the principle of human vision persistence, we develop a new model to calculate the shapes and appearances of rain streaks. To render the foggy effect in a raining scene, we present a statistically based multi-particle scattering model that exploits the coherence of the particle distribution along each viewing ray. By decomposing the conventional equations of single scattering of non-isotropic light into two parts, with the physics-parameter-independent part precomputed, we are able to render the scattering effect in real time. We also render the diffraction of lamps, wet ground, ripples on puddles, and rainbows. With GPU acceleration, our approach permits real-time walkthrough of various raining scenes at an average rendering speed of 20 fps, with quite satisfactory results.

12.
In this paper we propose a system that involves a Background Subtraction (BS) model implemented in a neural Self-Organized Map with a fuzzy automatic threshold update that is robust to illumination changes and slight shadow problems. The system incorporates a scene analysis scheme to automatically update the learning rates of the BS model under three possible scene situations. To improve the identification of dynamic objects, an optical flow algorithm analyzes the dynamic regions detected by the BS model, whose identification may be incomplete because of camouflage, and defines the complete object based on similar velocities and direction probabilities. These regions are then used as the input to a matting algorithm that improves the definition of the dynamic object by minimizing a cost function. Among the original contributions of this work are: an adaptive fuzzy-neural segmentation model whose thresholds and learning rates are adjusted automatically according to changes in the video sequence, and the automatic improvement of the segmentation results based on the matting algorithm and optical flow analysis. Findings demonstrate that the proposed system performs competitively with state-of-the-art models on the BMC and Li databases.

13.
In this paper we present a scalable 3D video framework for capturing and rendering dynamic scenes. The acquisition system is based on multiple sparsely placed 3D video bricks, each comprising a projector, two grayscale cameras, and a color camera. Relying on structured light with complementary patterns, texture images and pattern-augmented views of the scene are acquired simultaneously by time-multiplexed projections and synchronized camera exposures. Using space–time stereo on the acquired pattern images, high-quality depth maps are extracted, whose corresponding surface samples are merged into a view-independent, point-based 3D data structure. This representation allows for effective photo-consistency enforcement and outlier removal, leading to a significant decrease of visual artifacts and a high resulting rendering quality using EWA volume splatting. Our framework and its view-independent representation allow for simple and straightforward editing of 3D video. In order to demonstrate its flexibility, we show compositing techniques and spatiotemporal effects.

14.
This paper first summarizes several background modeling methods and classifies and compares them; it then summarizes and compares methods for background model maintenance; finally, it introduces some principles of background model maintenance.

15.

We describe an artificial high-level vision system for the symbolic interpretation of data coming from a video camera that acquires image sequences of moving scenes. The system is based on ARSOM neural networks that learn to generate perception-grounded predicates from image sequences. The ARSOM networks also provide a three-dimensional estimation of the movements of the relevant objects in the scene. The vision system has been employed in two scenarios: the monitoring of a robotic arm suitable for space operations, and the surveillance of an electronic data processing (EDP) center.

16.
Digital landscape realism often comes from the multitude of details that are hard to model such as fallen leaves, rock piles or entangled fallen branches. In this article, we present a method for augmenting natural scenes with a huge amount of details such as grass tufts, stones, leaves or twigs. Our approach takes advantage of the observation that those details can be approximated by replications of a few similar objects and therefore relies on mass‐instancing. We propose an original structure, the Ghost Tile, that stores a huge number of overlapping candidate objects in a tile, along with a pre‐computed collision graph. Details are created by traversing the scene with the Ghost Tile and generating instances according to user‐defined density fields that allow to sculpt layers and piles of entangled objects while providing control over their density and distribution.

17.
The particle filter is an effective tool for tracking non-rigid moving targets. Using a particle filter algorithm based on Bayesian estimation, we track moving targets against complex backgrounds. We review Bayesian estimation theory and derive the particle filtering process; the color histogram of the region determined by each state particle is used as the measurement and compared with the target reference histogram to obtain the best posterior estimate. The target coordinates are determined by a windowed particle-averaging method to achieve tracking. The algorithm is tested on single-target and multi-target image sequences and compared with the mean-shift tracking algorithm, demonstrating that the proposed tracker is more effective.
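A hedged 1-D sketch of the particle filter loop described above: propagate, weight by a measurement likelihood, estimate the posterior mean, resample. A scalar intensity reading and a Gaussian likelihood stand in for the color-histogram comparison used by the actual tracker, and all parameter values are illustrative.

```python
import math
import random

def particle_filter_step(particles, observe, reference, noise=1.0):
    """One predict-weight-estimate-resample cycle for 1-D particles."""
    # 1. Propagate particles with a random-walk motion model.
    moved = [p + random.gauss(0.0, noise) for p in particles]
    # 2. Weight each particle by how well its local measurement matches the
    #    target reference (Gaussian likelihood in place of a histogram match).
    weights = [math.exp(-(observe(p) - reference) ** 2 / 200.0) for p in moved]
    total = sum(weights)
    weights = [w / total for w in weights]
    # 3. Posterior-mean estimate, then resample for the next step.
    estimate = sum(w * p for w, p in zip(weights, moved))
    particles = random.choices(moved, weights=weights, k=len(moved))
    return particles, estimate

random.seed(0)
scene = lambda x: 100.0 if abs(x - 7.0) < 1.5 else 0.0  # bright target at x = 7
particles = [random.uniform(0.0, 20.0) for _ in range(500)]
for _ in range(10):
    particles, est = particle_filter_step(particles, scene, reference=100.0)
print(round(est, 2))   # posterior mean settles inside the bright band near 7
```

A 2-D image tracker would use the same loop with (x, y) particles and, as in the paper, a Bhattacharyya-style distance between the candidate region's color histogram and the reference histogram as the likelihood.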

18.
19.
High Dynamic Range (HDR) imaging requires compositing multiple, differently exposed images of a scene in the irradiance domain and tone mapping the generated HDR image for display on Low Dynamic Range (LDR) devices. For dynamic scenes, standard techniques may introduce artifacts called ghosts if scene changes are not accounted for. In this paper, we consider the blind HDR problem for dynamic scenes. We develop a novel bottom-up segmentation algorithm based on superpixel grouping which enables us to detect scene changes. We then employ a piecewise patch-based compositing methodology in the gradient domain to directly generate the ghost-free LDR image of the dynamic scene. Being a blind method, the primary advantage of our approach is that we assume no knowledge of the camera response function or exposure settings while preserving contrast even in the non-stationary regions of the scene. We compare the results of our approach on both static and dynamic scenes with those of state-of-the-art techniques.

20.