Similar Documents
20 similar documents found (search time: 31 ms)
1.
Retinal image motion and optical flow as its approximation are fundamental concepts in the field of vision, perceptual and computational. However, the computation of optical flow remains a challenging problem as image motion includes discontinuities and multiple values, mostly due to scene geometry, surface translucency and various photometric effects such as reflectance. In this contribution, we analyze image motion in the frequency space with respect to motion discontinuities and translucency. We derive the frequency structure of motion discontinuities due to occlusion and demonstrate its various geometrical properties. The aperture problem is investigated, and we show that the information content of an occlusion almost always disambiguates the velocity of an occluding signal suffering from the aperture problem. In addition, the theoretical framework can describe the exact frequency structure of non-Fourier motion and bridges the gap between non-Fourier visual phenomena and their understanding in the frequency domain.
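The frequency-domain analysis above builds on a standard fact: a pattern translating at velocity v has its spatio-temporal spectrum concentrated on the plane (a line, in the 1-D case) where the temporal frequency equals -v times the spatial frequency. A minimal numpy check of this motion-plane structure, with illustrative values:

```python
import numpy as np

# A 1-D pattern translating at velocity v has a spatio-temporal spectrum
# concentrated on the line  omega_t = -v * omega_x  (the motion plane).
N = 64            # samples in x and t
v = 3.0           # translation velocity (pixels per frame) -- illustrative
k = 5             # spatial frequency index of the pattern -- illustrative

x = np.arange(N)
t = np.arange(N)
X, T = np.meshgrid(x, t, indexing="ij")          # f[x, t]
f = np.sin(2 * np.pi * k * (X - v * T) / N)      # translating sinusoid

F = np.abs(np.fft.fft2(f))                        # magnitude spectrum
kx, kt = np.unravel_index(np.argmax(F), F.shape)  # dominant frequency pair
# Convert to signed frequency indices
kx_s = kx if kx <= N // 2 else kx - N
kt_s = kt if kt <= N // 2 else kt - N
print(kx_s, kt_s)   # temporal index equals -v * spatial index
```

Occlusions and translucency add further structure off this plane, which is what the paper characterizes.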

2.
The classic approach to structure from motion entails a clear separation between motion estimation and structure estimation and between two-dimensional (2D) and three-dimensional (3D) information. For the recovery of the rigid transformation between different views only 2D image measurements are used. To have available enough information, most existing techniques are based on the intermediate computation of optical flow which, however, poses a problem at the locations of depth discontinuities. If we knew where depth discontinuities were, we could (using a multitude of approaches based on smoothness constraints) accurately estimate flow values for image patches corresponding to smooth scene patches; but to know the discontinuities requires solving the structure from motion problem first. This paper introduces a novel approach to structure from motion which addresses the processes of smoothing, 3D motion and structure estimation in a synergistic manner. It provides an algorithm for estimating the transformation between two views obtained by either a calibrated or uncalibrated camera. The results of the estimation are then utilized to perform a reconstruction of the scene from a short sequence of images. The technique is based on constraints on image derivatives which involve the 3D motion and shape of the scene, leading to a geometric and statistical estimation problem. The interaction between 3D motion and shape allows us to estimate the 3D motion while at the same time segmenting the scene. If we use a wrong 3D motion estimate to compute depth, we obtain a distorted version of the depth function. The distortion, however, is such that the worse the motion estimate, the more likely we are to obtain depth estimates that vary locally more than the correct ones.
Since local variability of depth is due either to the existence of a discontinuity or to a wrong 3D motion estimate, being able to differentiate between these two cases provides the correct motion, which yields the least varying estimated depth as well as the image locations of scene discontinuities. We analyze the new constraints, show their relationship to the minimization of the epipolar constraint, and present experimental results using real image sequences that indicate the robustness of the method.
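The epipolar constraint referred to above states that corresponding normalized image points under a rigid motion (R, t) satisfy x2ᵀ E x1 = 0 with E = [t]ₓ R. A small synthetic check (all values illustrative):

```python
import numpy as np

# Epipolar constraint  x2^T E x1 = 0  for a rigid motion (R, t);
# the motion, points, and magnitudes are synthetic, for illustration only.
rng = np.random.default_rng(0)

def skew(t):
    return np.array([[0, -t[2], t[1]],
                     [t[2], 0, -t[0]],
                     [-t[1], t[0], 0.0]])

theta = 0.1                                    # small rotation about Z
R = np.array([[np.cos(theta), -np.sin(theta), 0],
              [np.sin(theta),  np.cos(theta), 0],
              [0, 0, 1.0]])
t = np.array([0.2, 0.0, 0.05])
E = skew(t) @ R                                # essential matrix

P = rng.uniform(-1, 1, (10, 3)) + [0, 0, 5]    # points in front of camera 1
x1 = P / P[:, 2:3]                             # normalized coords, view 1
P2 = (R @ P.T).T + t
x2 = P2 / P2[:, 2:3]                           # normalized coords, view 2

residuals = np.abs(np.einsum("ni,ij,nj->n", x2, E, x1))
print(residuals.max())                         # ~0 for the correct motion
```

A wrong (R, t) hypothesis yields nonzero residuals, which is the kind of signal the paper's motion/discontinuity disambiguation exploits.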

3.
Two novel systems computing dense three-dimensional (3-D) scene flow and structure from multiview image sequences are described in this paper. We do not assume rigidity of the scene motion, thus allowing for nonrigid motion in the scene. The first system, integrated model-based system (IMS), assumes that each small local image region is undergoing 3-D affine motion. Non-linear motion model fitting based on both optical flow constraints and stereo constraints is then carried out on each local region in order to simultaneously estimate 3-D motion correspondences and structure. The second system is based on extended gradient-based system (EGS), a natural extension of two-dimensional (2-D) optical flow computation. In this method, a new hierarchical rule-based stereo matching algorithm is first developed to estimate the initial disparity map. Different available constraints under a multiview camera setup are further investigated and utilized in the proposed motion estimation. We use image segmentation information to adapt and maintain the motion and depth discontinuities. Within the framework of EGS, we present two different formulations for 3-D scene flow and structure computation. One formulation assumes that the initial disparity map is accurate, while the other does not. Experimental results on both synthetic and real imagery demonstrate the effectiveness of our 3-D motion and structure recovery schemes. An empirical comparison between IMS and EGS is also reported.

4.
We present an algorithm for identifying and tracking independently moving rigid objects from optical flow. Some previous attempts at segmentation via optical flow have focused on finding discontinuities in the flow field. While discontinuities do indicate a change in scene depth, they do not in general signal a boundary between two separate objects. The proposed method uses the fact that each independently moving object has a unique epipolar constraint associated with its motion. Thus motion discontinuities based on self-occlusion can be distinguished from those due to separate objects. The use of epipolar geometry allows for the determination of individual motion parameters for each object as well as the recovery of relative depth for each point on the object. The algorithm assumes an affine camera where perspective effects are limited to changes in overall scale. No camera calibration parameters are required. A Kalman filter based approach is used for tracking motion parameters with time.
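The Kalman-filter tracking step can be sketched generically. Below is a standard constant-velocity Kalman filter tracking a single scalar motion parameter; the state layout, noise covariances, and measurement model are illustrative assumptions, not the paper's actual values:

```python
import numpy as np

# A constant-velocity Kalman filter, a common way to track a motion
# parameter over time (all numeric values here are illustrative).
dt = 1.0
F = np.array([[1, dt], [0, 1.0]])    # state transition: [param, param_rate]
H = np.array([[1.0, 0.0]])           # we observe the parameter only
Q = 1e-4 * np.eye(2)                 # process noise covariance (assumed)
R = np.array([[0.05]])               # measurement noise covariance (assumed)

x = np.zeros(2)                      # initial state estimate
P = np.eye(2)                        # initial state covariance

true_rate = 0.1                      # synthetic ground-truth drift rate
rng = np.random.default_rng(1)
for step in range(1, 101):
    z = true_rate * step + rng.normal(0, 0.05)   # noisy observation
    # Predict
    x = F @ x
    P = F @ P @ F.T + Q
    # Update
    y = z - H @ x                    # innovation
    S = H @ P @ H.T + R              # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)   # Kalman gain
    x = x + (K @ y).ravel()
    P = (np.eye(2) - K @ H) @ P

print(x[1])                          # estimated rate, close to true_rate
```

The filter's smoothed parameter estimates are what allow tracking through noisy per-frame motion measurements.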

5.
In the general structure-from-motion (SFM) problem involving several moving objects in a scene, the essential first step is to segment moving objects independently. We attempt to deal with the problem of optical flow estimation and motion segmentation over a pair of images. We apply a mean field technique to determine optical flow and motion boundaries and present a deterministic algorithm. Since motion discontinuities, represented by a line process, are embedded in the estimation of the optical flow, our algorithm provides accurate estimates of optical flow, especially along motion boundaries, and handles occlusion and multiple motions. We show that the proposed algorithm outperforms other well-known algorithms in terms of estimation accuracy and timing.

6.
Scene segmentation from visual motion using global optimization (cited 13 times: 0 self-citations, 13 by others)
This paper presents results from computer experiments with an algorithm to perform scene disposition and motion segmentation from visual motion or optic flow. The maximum a posteriori (MAP) criterion is used to formulate what the best segmentation or interpretation of the scene should be, where the scene is assumed to be made up of some fixed number of moving planar surface patches. The Bayesian approach requires, first, specification of prior expectations for the optic flow field, which here is modeled as spatial and temporal Markov random fields; and, secondly, a way of measuring how well the segmentation predicts the measured flow field. The Markov random fields incorporate the physical constraints that objects and their images are probably spatially continuous, and that their images are likely to move quite smoothly across the image plane. To compute the flow predicted by the segmentation, a recent method for reconstructing the motion and orientation of planar surface facets is used. The search for the globally optimal segmentation is performed using simulated annealing.
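A toy version of the MAP-plus-simulated-annealing idea: recover a piecewise-constant label field from noisy data under an MRF smoothness prior, using Metropolis sampling with a cooling schedule. This 1-D binary sketch only illustrates the optimization scheme, not the paper's planar-patch flow model; all parameters are illustrative:

```python
import numpy as np

# Toy MAP labeling by simulated annealing: recover a piecewise-constant
# binary label field from noisy data under an Ising-type smoothness prior.
rng = np.random.default_rng(2)
true = np.array([0] * 20 + [1] * 20)          # ground-truth segmentation
obs = true + rng.normal(0, 0.3, 40)           # noisy observations

lam = 1.0                                     # smoothness weight (assumed)

def energy(lab):
    data = np.sum((obs - lab) ** 2)           # data fidelity term
    smooth = lam * np.sum(lab[1:] != lab[:-1])  # label-change penalty
    return data + smooth

lab = rng.integers(0, 2, 40)                  # random initial labeling
T = 2.0                                       # initial temperature
for sweep in range(200):
    for i in range(40):
        cand = lab.copy()
        cand[i] = 1 - cand[i]                 # propose a single-site flip
        dE = energy(cand) - energy(lab)
        if dE < 0 or rng.random() < np.exp(-dE / T):
            lab = cand                        # Metropolis acceptance
    T *= 0.97                                 # geometric cooling schedule

print(np.mean(lab == true))                   # fraction correctly labeled
```

Annealing lets the sampler escape poor local minima early on and settle into a near-global MAP labeling as T decreases.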

7.
A new image segmentation algorithm is presented, based on recursive Bayes smoothing of images modeled by Markov random fields and corrupted by independent additive noise. The Bayes smoothing algorithm yields the a posteriori distribution of the scene value at each pixel, given the total noisy image, in a recursive way. The a posteriori distribution together with a criterion of optimality then determine a Bayes estimate of the scene. The algorithm presented is an extension of a 1-D Bayes smoothing algorithm to 2-D and it gives the optimum Bayes estimate for the scene value at each pixel. Computational concerns in 2-D, however, necessitate certain simplifying assumptions on the model and approximations on the implementation of the algorithm. In particular, the scene (noiseless image) is modeled as a Markov mesh random field, a special class of Markov random fields, and the Bayes smoothing algorithm is applied on overlapping strips (horizontal/vertical) of the image consisting of several rows (columns). It is assumed that the signal (scene values) vector sequence along the strip is a vector Markov chain. Since signal correlation in one of the dimensions is not fully used along the edges of the strip, estimates are generated only along the middle sections of the strips. The overlapping strips are chosen such that the union of the middle sections of the strips gives the whole image. The Bayes smoothing algorithm presented here is valid for scene random fields consisting of multilevel (discrete) or continuous random variables.

8.
Motion field and optical flow: qualitative properties (cited 7 times: 0 self-citations, 7 by others)
It is shown that the motion field, the 2-D vector field which is the perspective projection on the image plane of the 3-D velocity field of a moving scene, and the optical flow, defined as the estimate of the motion field which can be derived from the first-order variation of the image brightness pattern, are in general different unless special conditions are satisfied. Therefore, dense optical flow is often ill-suited for computing structure from motion and for reconstructing the 3-D velocity field by algorithms which require a locally accurate estimate of the motion field. A different use of the optical flow is suggested. It is shown that the (smoothed) optical flow and the motion field can be interpreted as vector fields tangent to flows of planar dynamical systems. Stable qualitative properties of the motion field, which give useful information about the 3-D velocity field and the 3-D structure of the scene, can usually be obtained from the optical flow. The idea is supported by results from the theory of structural stability of dynamical systems.
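The gap between motion field and optical flow is rooted in the brightness-constancy equation Ix·u + Iy·v + It = 0, which determines only the flow component along the brightness gradient (the normal flow). A synthetic sanity check with a translating sinusoid; the velocities and frequencies are illustrative:

```python
import numpy as np

# Only the "normal flow" -- the velocity component along the local
# brightness gradient -- is directly recoverable from Ix*u + Iy*v + It = 0.
N = 128
vx, vy = 1.0, 0.5                    # true image velocity (pixels/frame)
kx, ky = 2 * np.pi * 4 / N, 2 * np.pi * 6 / N

y, x = np.mgrid[0:N, 0:N]
I0 = np.sin(kx * x + ky * y)                     # frame at time t
I1 = np.sin(kx * (x - vx) + ky * (y - vy))       # frame at time t+1

Ix = kx * np.cos(kx * x + ky * y)                # analytic spatial gradients
Iy = ky * np.cos(kx * x + ky * y)
It = I1 - I0                                     # temporal difference

# Normal flow at a pixel with a strong gradient
i, j = 10, 17
g = np.sqrt(Ix[i, j] ** 2 + Iy[i, j] ** 2)
un = -It[i, j] / g                               # estimated normal flow
proj = (vx * Ix[i, j] + vy * Iy[i, j]) / g       # true velocity projected
print(un, proj)                                  # approximately equal
```

The full velocity (vx, vy) is not recoverable at this pixel alone; only its projection onto the gradient direction is, which is the aperture problem in miniature.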

9.
Multiple constraints to compute optical flow (cited 1 time: 0 self-citations, 1 by others)
The computation of the optical flow field from an image sequence requires the definition of constraints on the temporal change of image features. In this paper, we consider the implications of using multiple constraints in the computational schema. First, it is shown that differential constraints correspond to an implicit feature tracking. Therefore, the best results (in terms of both measurement accuracy and computational speed) are obtained by selecting and applying the constraints which are best "tuned" to the particular image feature under consideration. Considering multiple image points not only allows us to obtain a (locally) better estimate of the velocity field, but also to detect erroneous measurements due to discontinuities in the velocity field. Moreover, by hypothesizing a constant-acceleration motion model, the derivatives of the optical flow are also computed. Several experiments on real image sequences are presented.

10.
The main aim of this paper is to propose a new neural algorithm to perform a segmentation of an observed scene in regions corresponding to different moving objects, by analysing a time-varying image sequence. The method consists of a classification step, where the motion of small patches is recovered through an optimisation approach, and a segmentation step merging neighbouring patches characterised by the same motion. Classification of motion is performed without optical flow computation. Three-dimensional motion parameter estimates are obtained directly from the spatial and temporal image gradients by minimising an appropriate energy function with a Hopfield-like neural network. Network convergence is accelerated by integrating the quantitative estimation of the motion parameters with a qualitative estimate of dominant motion using the geometric theory of differential equations.

11.
This paper proposes a new neural algorithm to perform the segmentation of an observed scene into regions corresponding to different moving objects by analyzing a time-varying image sequence. The method consists of a classification step, where the motion of small patches is characterized through an optimization approach, and a segmentation step merging neighboring patches characterized by the same motion. Classification of motion is performed without optical flow computation; instead, only the spatial and temporal image gradients are considered in an appropriate energy function, minimized with a Hopfield-like neural network whose output directly gives the 3D motion parameter estimates. Network convergence is accelerated by integrating the quantitative estimation of motion parameters with a qualitative estimate of dominant motion using the geometric theory of differential equations.

12.
The blur in target images caused by camera vibration, due to robot motion or hand shaking, and by objects moving in the background scene is difficult to deal with in computer vision systems. In this paper, the authors study the relation model between motion and blur in the case of object motion existing in a video image sequence, and work out a practical computation algorithm for both motion analysis and blurred image restoration. Combining general optical flow with stochastic processes, the paper presents an approach by which the motion velocity can be calculated from blurred images. On the other hand, the blurred image can also be restored using the obtained motion information. To overcome the small-motion limitation of general optical flow computation, a multiresolution optical flow algorithm based on MAP estimation is proposed. For restoring the blurred image, an iteration algorithm and the obtained motion velocity are used. The experiments show that the proposed approach works well for both motion velocity computation and blurred image restoration.

13.
This paper deals with the estimation of the time-to-contact in dynamic vision. It is well known that differential invariants of the image velocity field can be used to characterize the shape changes of objects in the scene due to relative motion between the observer and the scene. Under the hypothesis of constant velocity along the optical axis, the time-to-contact turns out to be a function of the area enclosed by the object contour and its time derivative. In the paper, a novel approach based on set membership estimation theory is proposed to estimate the variables involved in the computation of the time-to-contact. Both errors in the motion model and image measurement noise are described as unknown-but-bounded disturbances, without requiring any statistical assumption. The proposed technique allows for the computation of guaranteed bounds on the time-to-contact estimates in finite time, a crucial issue in all problems where a robust evaluation of the time-to-contact is in order.
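Under constant velocity along the optical axis, depth Z scales image area A as A ∝ 1/Z², so the time-to-contact is τ = 2A/(dA/dt), the area-and-derivative relation mentioned above. A quick numeric check with illustrative values (this is the underlying kinematics only, not the paper's set-membership estimator):

```python
# Under constant approach speed, image area A scales as 1/Z^2, so the
# time-to-contact is  tau = 2*A / (dA/dt).  Illustrative values below.
Z0, v = 10.0, 0.5        # initial distance and approach speed (assumed)
dt = 0.01                # finite-difference step

def area(t):             # image area of a fronto-parallel patch, A ∝ 1/Z^2
    Z = Z0 - v * t
    return 1.0 / Z**2

t = 2.0
A = area(t)
dA = (area(t + dt) - area(t - dt)) / (2 * dt)   # centered difference
tau_est = 2 * A / dA                            # time-to-contact estimate
tau_true = (Z0 - v * t) / v                     # remaining Z over speed = 18.0
print(tau_est, tau_true)
```

The paper's contribution is bounding τ when A and its derivative are only known up to bounded errors; the identity itself is exact for constant approach speed.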

14.
Three-dimensional scene flow (cited 2 times: 0 self-citations, 2 by others)
Just as optical flow is the two-dimensional motion of points in an image, scene flow is the three-dimensional motion of points in the world. The fundamental difficulty with optical flow is that only the normal flow can be computed directly from the image measurements, without some form of smoothing or regularization. In this paper, we begin by showing that the same fundamental limitation applies to scene flow, however many cameras are used to image the scene. There are then two choices when computing scene flow: 1) perform the regularization in the images or 2) perform the regularization on the surface of the object in the scene. In this paper, we choose to compute scene flow using regularization in the images. We describe three algorithms, the first two for computing scene flow from optical flows and the third for constraining scene structure from the inconsistencies in multiple optical flows.
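Computing scene flow from multiple optical flows reduces, at a single point, to a least-squares problem: each camera's 2-D flow equals its projection Jacobian times the 3-D point velocity. A synthetic sketch with translation-only camera poses; the point, velocity, and camera offsets are all illustrative:

```python
import numpy as np

# Scene flow from multiple optical flows, in miniature: each camera's 2-D
# flow is the projection Jacobian times the 3-D point velocity; with two
# or more cameras, the 3-D velocity follows from least squares.

def jacobian(X, f=1.0):
    # Jacobian of perspective projection (x, y) = f*(X/Z, Y/Z) w.r.t. (X,Y,Z)
    X1, X2, Z = X
    return np.array([[f / Z, 0, -f * X1 / Z**2],
                     [0, f / Z, -f * X2 / Z**2]])

P = np.array([0.5, -0.3, 4.0])       # one world point (illustrative)
V = np.array([0.1, 0.05, -0.2])      # its true 3-D scene flow (illustrative)

# Three cameras: origin plus two translated copies, same orientation
offsets = [np.zeros(3), np.array([1.0, 0, 0]), np.array([0, 1.0, 0])]
J_rows, flow_rows = [], []
for off in offsets:
    Pc = P - off                     # point in that camera's frame
    J = jacobian(Pc)
    J_rows.append(J)
    flow_rows.append(J @ V)          # that camera's observed optical flow

A = np.vstack(J_rows)                # 6x3 stacked Jacobians
b = np.concatenate(flow_rows)        # stacked 2-D flows
V_est, *_ = np.linalg.lstsq(A, b, rcond=None)
print(V_est)                         # recovers the true scene flow V
```

Each single camera leaves the velocity component along its own viewing ray unconstrained; stacking views from distinct positions makes the system full rank, mirroring the paper's point that regularization, not camera count, resolves the per-camera normal-flow ambiguity.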

15.
In this paper, we present a novel video stabilization method with a pixel-wise motion model. In order to avoid the distortion introduced by traditional feature-point-based motion models, we focus on constructing a more accurate model to capture the motion in videos. By taking advantage of dense optical flow, we can obtain the dense motion field between adjacent frames and set up a pixel-wise motion model that is accurate enough. Our method first estimates the dense motion field between adjacent frames. A PatchMatch-based dense motion field estimation algorithm is proposed. This algorithm is specially designed for similar video frames rather than arbitrary images to reach higher speed and better performance. Then, a simple and fast smoothing algorithm is performed to stabilize the jittered motion. After that, we warp the input frames using a weighted average algorithm to construct the output frames. Some pixels in the output frames may still be empty after the warping step, so in the last step these empty pixels are filled using a patch-based image completion algorithm. We test our method on many challenging videos and demonstrate the accuracy of our model and the effectiveness of our method.
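The smoothing step of stabilization can be sketched in one dimension: low-pass the estimated camera path and warp each frame by the difference between the smoothed and raw paths. The moving-average filter and jitter model below are illustrative stand-ins, not the paper's actual algorithm:

```python
import numpy as np

# Stabilization in miniature: smooth a jittery 1-D camera trajectory with
# a moving average, then warp each frame by the correction offset.
rng = np.random.default_rng(3)
frames = 100
path = np.cumsum(0.5 + rng.normal(0, 2.0, frames))   # jittery camera path

radius = 5                                           # half-window (assumed)
kernel = np.ones(2 * radius + 1) / (2 * radius + 1)
smooth = np.convolve(path, kernel, mode="same")      # low-passed path

correction = smooth - path    # per-frame warp offset applied to each frame

# Compare frame-to-frame jitter away from the window's boundary effects
interior = slice(radius, frames - radius)
j_raw = np.std(np.diff(path[interior]))
j_smooth = np.std(np.diff(smooth[interior]))
print(j_raw, j_smooth)        # jitter is reduced on the smoothed path
```

A pixel-wise model, as in the paper, applies this idea per pixel trajectory rather than to a single global path.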

16.
Objective: Image segmentation is a key problem that application fields such as computer vision and digital image processing must solve first. To address the over-segmentation and over-merging that are widespread in existing single-image object segmentation algorithms, an image object segmentation algorithm based on T-junction cues is proposed. Method: First, the target image is smoothed with L0 gradient minimization to remove the interference of fine textures; second, the smoothed image is moderately partitioned with the graph-based segmentation algorithm to obtain a coarse result; finally, the T-junction cues that are widespread in images are used to merge the initial segments into the final optimized segmentation. Results: The proposed algorithm was compared experimentally with the GrabCut and graph-based algorithms on different scene types. GrabCut requires manually located boundaries and can only segment a single object at a time. The graph-based algorithm combines intra-class similarity and inter-class difference and can effectively preserve image boundaries, but it cannot effectively control the number of segments, and its results depend excessively on a threshold parameter, easily leading to over-segmentation and over-merging. The proposed method achieves clear improvements in reducing over-segmentation and over-merging, in boundary localization precision, and in segmentation accuracy; the average segmentation accuracy over several groups of images of different types reaches 91.16%, clearly superior to the other algorithms. The average processing time for an 800×600-pixel image is 3.5 s, slightly higher than that of the other algorithms. Conclusion: Comparisons with various algorithms show that the proposed algorithm can effectively solve the over-segmentation and over-merging problems; the experiments verify its effectiveness, and it can produce image object segmentations with a degree of semantic meaning.

17.
A fast scalable algorithm for discontinuous optical flow estimation (cited 4 times: 0 self-citations, 4 by others)
Multiple moving objects, partially occluded objects, or even a single object moving against the background give rise to discontinuities in the optical flow field in corresponding image sequences. While uniform global regularization based moderately fast techniques cannot provide accurate estimates of the discontinuous flow field, statistical optimization based accurate techniques suffer from excessive solution time. A "weighted anisotropic" smoothness based numerically robust algorithm is proposed that can generate a discontinuous optical flow field with high speed and linear computational complexity. A weighted sum of the first-order spatial derivatives of the flow field is used for regularization. Less regularization is performed where strong gradient information is available. The flow field at any point is interpolated more from those at neighboring points along the weaker intensity gradient component. Such intensity gradient weighted regularization leads to Euler-Lagrange equations with strong anisotropies coupled with discontinuities in their coefficients. A robust multilevel iterative technique, which recursively generates coarse-level problems based on intensity gradient weighted smoothing weights, is employed to estimate the discontinuous optical flow field. Experimental results are presented to demonstrate the efficacy of the proposed technique.
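The core idea, less regularization where the intensity gradient is strong, can be shown with a 1-D toy: gradient-weighted diffusion preserves a step edge that uniform diffusion smears. The weight function and all parameters are illustrative assumptions, not the paper's scheme:

```python
import numpy as np

# Edge-preserving weighted smoothing vs. uniform smoothing of a noisy step:
# "less regularization where strong gradient information is available".
rng = np.random.default_rng(4)
N = 100
signal = np.where(np.arange(N) < N // 2, 0.0, 1.0) + rng.normal(0, 0.05, N)

def diffuse(u, weights, iters=200, tau=0.2):
    # Explicit weighted diffusion with no-flux boundaries (tau*w <= 0.5)
    u = u.copy()
    for _ in range(iters):
        flux = weights * np.diff(u)              # weighted gradient flux
        u[1:-1] += tau * (flux[1:] - flux[:-1])  # interior divergence
        u[0] += tau * flux[0]                    # left boundary, no inflow
        u[-1] -= tau * flux[-1]                  # right boundary, no outflow
    return u

grad = np.abs(np.diff(signal))
w_aniso = 1.0 / (1.0 + (grad / 0.1) ** 2)    # tiny weight across the edge
uniform = diffuse(signal, np.ones(N - 1))    # uniform regularization
aniso = diffuse(signal, w_aniso)             # gradient-weighted version

mid = N // 2
print(abs(uniform[mid] - uniform[mid - 1]),  # edge smeared away
      abs(aniso[mid] - aniso[mid - 1]))      # edge preserved
```

In the 2-D flow-field setting, the same weighting is applied directionally, so smoothing is carried along the weaker gradient component rather than across motion boundaries.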

18.
Motion segmentation in moving-camera videos is a very challenging task because of the motion dependence between the camera and the moving objects. Camera motion compensation is recognized as an effective approach. However, existing work depends on prior knowledge of the camera motion and scene structure for model selection, which is not always available in practice. Moreover, the image-plane motion suffers from depth variations, which leads to depth-dependent motion segmentation in 3D scenes. To solve these problems, this paper develops a prior-free dependent motion segmentation algorithm by introducing a modified Helmholtz-Hodge decomposition (HHD) based object-motion-oriented map (OOM). By decomposing the image motion (optical flow) into a curl-free and a divergence-free component, all kinds of camera-induced image motion can be represented by these two components in an invariant way. HHD identifies the camera-induced image motion as one segment irrespective of depth variations with the help of OOM. To segment object motions from the scene, we deploy a novel spatio-temporally constrained quadtree labeling. Extensive experimental results on benchmarks demonstrate that our method improves the performance of the state-of-the-art by 10% to 20%, even on challenging scenes with complex backgrounds.
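A discrete Helmholtz-Hodge decomposition of a periodic flow field can be sketched as a Fourier projection: the curl-free part is the projection of each Fourier coefficient onto its wave vector, and the divergence-free part is the remainder (the DC/harmonic component is left in the remainder). This is a generic HHD sketch on a synthetic field, not the paper's modified version:

```python
import numpy as np

# Discrete Helmholtz-Hodge decomposition of a periodic 2-D vector field
# via Fourier projection onto the wave-vector direction.
N = 64
ky, kx = np.meshgrid(np.fft.fftfreq(N), np.fft.fftfreq(N), indexing="ij")
k2 = kx**2 + ky**2
k2[0, 0] = 1.0                       # avoid division by zero at DC

def hhd(u, v):
    U, V = np.fft.fft2(u), np.fft.fft2(v)
    div_hat = kx * U + ky * V        # divergence, up to a constant factor
    Uc = kx * div_hat / k2           # curl-free (gradient-like) component
    Vc = ky * div_hat / k2
    uc = np.real(np.fft.ifft2(Uc))
    vc = np.real(np.fft.ifft2(Vc))
    return uc, vc, u - uc, v - vc    # (curl-free, divergence-free)

# Synthetic flow field mixing a gradient part and a rotational part
y, x = np.mgrid[0:N, 0:N] * (2 * np.pi / N)
u = np.cos(x) + np.sin(y)
v = np.sin(x)
uc, vc, ud, vd = hhd(u, v)

# Check: the curl-free part has (numerically) zero curl,
# and the divergence-free part has zero divergence.
curl = np.gradient(vc, axis=1) - np.gradient(uc, axis=0)
div = np.gradient(ud, axis=1) + np.gradient(vd, axis=0)
print(np.max(np.abs(curl)), np.max(np.abs(div)))
```

The paper's use of HHD rests on exactly this invariance: camera-induced flow lands in predictable components of the decomposition regardless of scene depth.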

19.
Changes in successive images from a time-varying image sequence of a scene can be characterized by velocity vector fields. The estimate of the velocity vector field is determined as a compromise between optical flow and directional smoothness constraints. The optical flow constraints relate the values of the time-varying image function at the corresponding points of the successive images of the sequence. The directional smoothness constraints relate the values of neighboring velocity vectors. To achieve the compromise, we introduce a system of nonlinear equations of the unknown estimate of the velocity vector field using a novel variational principle applied to the weighted average of the optical flow and the directional smoothness constraints. A stable iterative method for solving this system is developed. The optical flow and the directional smoothness constraints are selectively suppressed in the neighborhoods of the occluding boundaries by implicitly adjusting their weights. These adjustments are based on the spatial variations of the estimates of the velocity vectors and the spatial variations of the time-varying image function. The system of nonlinear equations is defined in terms of the time-varying image function and its derivatives. The initial image functions are in general discontinuous and cannot be directly differentiated. These difficulties are overcome by treating the initial image functions as generalized functions and their derivatives as generalized derivatives. These generalized functions are evaluated (observed) on the parametric family of testing (smoothing) functions to obtain parametric families of secondary images, which are used in the system of nonlinear equations. The parameter specifies the degree of smoothness of each secondary image. The secondary images with progressively higher degrees of smoothness are sampled with progressively lower resolutions. Then coarse-to-fine control strategies are used to obtain the estimate.

20.
The optical flow method is an approach to motion analysis in the field of computer vision. This method is implemented here to deal with rotational and deformational ice motion, for which the area correlation method shows deficiencies. The results show that the optical flow method has the capacity to cope with the rotation and deformation of an ice cover, while requiring less computing time than the area correlation method. For better representation of the discontinuities of a motion field, a modified version of the optical flow method is presented with the aid of image segmentation. The paper also includes a technique for detecting mean rotation and translation using the FFT. In most cases, this technique can simplify and speed up the process of motion retrieval in the Arctic central pack ice area. The algorithm has been applied to fourteen pairs of images acquired in the Arctic Ocean and the Baltic Sea. Three of them are illustrated here to demonstrate the accuracy and other capabilities of the algorithm.
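FFT-based mean-translation detection can be illustrated with phase correlation: the normalized cross-power spectrum of two shifted images is a pure phase ramp whose inverse transform peaks at the shift. A synthetic example with a known circular shift (this sketches the general technique, not the paper's exact procedure):

```python
import numpy as np

# Mean translation between two images via FFT phase correlation.
rng = np.random.default_rng(5)
N = 128
img = rng.random((N, N))
dy, dx = 7, 12                                            # known shift
shifted = np.roll(np.roll(img, dy, axis=0), dx, axis=1)   # circular shift

F1 = np.fft.fft2(img)
F2 = np.fft.fft2(shifted)
cross = np.conj(F1) * F2
cross /= np.abs(cross) + 1e-12            # normalized cross-power spectrum
corr = np.real(np.fft.ifft2(cross))       # sharp peak at the translation
peak = np.unravel_index(np.argmax(corr), corr.shape)
print(peak)                               # (7, 12): the shift is recovered
```

Mean rotation can be detected similarly by resampling the spectrum magnitude on a polar grid before correlating, which reduces rotation to a translation along the angle axis.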


Copyright © Beijing Qinyun Technology Development Co., Ltd. (北京勤云科技发展有限公司)  京ICP备09084417号