Similar Literature
20 similar records found.
1.
Traditional optical flow algorithms assume local image translational motion and apply simple image filtering techniques. Recent studies have taken two separate approaches toward improving the accuracy of computed flow: the application of spatio-temporal filtering schemes and the use of advanced motion models such as the affine model. Each has achieved some improvement over traditional algorithms in specialized situations, but the computation of accurate optical flow for general motion has been elusive. In this paper, we exploit the interdependency between these two approaches and propose a unified approach. The general motion model we adopt characterizes arbitrary 3-D steady motion. Under perspective projection, we derive an image motion equation that describes the spatio-temporal relation of gray-scale intensity in an image sequence, thus making the utilization of 3-D filtering possible. However, to accommodate this motion model, we need to extend the filter design to derive additional motion constraint equations. Using Hermite polynomials, we design differentiation filters whose orthogonality and Gaussian derivative properties ensure numerical stability; a recursive relation facilitates application of the general nonlinear motion model, while separability promotes efficiency. The resulting algorithm produces accurate optical flow and other useful motion parameters. It is evaluated quantitatively using the scheme established by Barron et al. (1994) and qualitatively with real images.
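The differentiation filters described above pair a Hermite polynomial with a Gaussian window. As a minimal sketch (the sampling, support, and normalization below are illustrative choices, not the paper's actual filter design), a first-order Gaussian derivative kernel can be built and checked against a linear ramp, whose slope it should recover:

```python
import numpy as np

def gaussian_derivative_filter(sigma=1.0, radius=4):
    # First derivative of a Gaussian: the first Hermite polynomial (~x)
    # times a Gaussian window. Discretization details are a sketch.
    x = np.arange(-radius, radius + 1, dtype=float)
    g = np.exp(-x ** 2 / (2 * sigma ** 2))
    dg = -x / sigma ** 2 * g          # d/dx of the Gaussian
    dg -= dg.mean()                   # zero response to constant signals
    dg /= np.sum(x * dg)              # unit response to a unit slope
    return x, dg

x, dg = gaussian_derivative_filter()
ramp = 3.0 * np.arange(30.0) + 7.0    # slope 3, offset 7
# Correlating with dg estimates the derivative at each interior sample;
# on an exactly linear signal the estimate is the slope, up to rounding.
slope_est = np.correlate(ramp, dg, mode="valid")
```

The two normalization steps mirror the properties the abstract emphasizes: the zero-DC constraint makes the filter insensitive to brightness offsets, and the unit-slope constraint calibrates it as a derivative operator.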

2.
In an infrared surveillance system (which must detect remote sources and thus has a very low resolution) in an aerospace environment, estimating the velocity of the cloudy sky should lower the false alarm rate by discriminating the motion of various moving shapes against a background velocity map. The optical flow constraint equation, based on a Taylor expansion of the intensity function, is often used to estimate the motion at each pixel. One of the main problems in motion estimation is that, for a single pixel, the true velocity cannot be found because of the aperture problem. Another kinematic estimation method is based on a matched filter, the generalized Hough transform (GHT): it gives a global velocity estimate for a set of pixels. On the one hand, we obtain a local velocity estimate for each pixel with little credibility, because the optical flow is so sensitive to noise; on the other hand, we obtain a robust global kinematic estimate, the same for all selected pixels. This paper aims to adapt and improve the GHT for our application, in which one must discern the global movement of objects (clouds) whatever their form may be (clouds with hazy edges or distorted shapes, or even clouds with very little structure). We propose an improvement of the GHT algorithm by segmenting images with polar constraints on spatial gradients. A pixel at time t is matched with one at time t + T only if the direction and modulus of the gradient are similar. This technique, which is very efficient, sharpens the peak and improves the motion resolution. Each of these estimates is computed within windows of the image, the windows being selected by means of an entropy criterion. The kinematic vector is computed accurately by means of the optical flow constraint equation applied on the displaced window. We show that, for small displacements, the optical flow constraint equation sharpens the results of the GHT. Thus a semi-dense velocity field is obtained for cloud edges. A velocity map computed on real sequences with these methods is shown. In this way, a kinematic parameter discriminates between a target and the cloudy background.
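The optical flow constraint equation and the aperture problem can be made concrete with a small sketch (a generic windowed least-squares solve, not this paper's GHT pipeline): a single pixel's brightness-constancy equation Ix*u + Iy*v + It = 0 is one equation in two unknowns, but stacking the equations over a window with varying gradient directions makes (u, v) recoverable. The synthetic intensity pattern below is invented for illustration:

```python
import numpy as np

def window_flow(Ix, Iy, It):
    # Stack the per-pixel constraints Ix*u + Iy*v + It = 0 over a window
    # and solve jointly by least squares. One pixel alone gives a single
    # equation in two unknowns (the aperture problem); a window with
    # varying gradient directions makes (u, v) recoverable.
    A = np.stack([Ix.ravel(), Iy.ravel()], axis=1)
    b = -It.ravel()
    uv, *_ = np.linalg.lstsq(A, b, rcond=None)
    return uv

# Synthetic window translating with true velocity (u, v) = (1.5, -0.5):
# I(x, y, t) = f(x - u*t, y - v*t) with f(x, y) = x**2 + 3*y**2 + x*y,
# so at t = 0: Ix = 2x + y, Iy = 6y + x, It = -(Ix*u + Iy*v).
u_true, v_true = 1.5, -0.5
yy, xx = np.mgrid[0:5, 0:5].astype(float)
Ix = 2 * xx + yy
Iy = 6 * yy + xx
It = -(Ix * u_true + Iy * v_true)
u, v = window_flow(Ix, Iy, It)
```

With noise-free constraints the least-squares solve returns the true velocity; in the noisy low-resolution setting the abstract describes, this local estimate is exactly the low-credibility one that the global GHT estimate complements.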

3.
We discuss the computation of the instantaneous 3D displacement vector fields of deformable surfaces from sequences of range data. We give a novel version of the basic motion constraint equation that can be evaluated directly on the sensor grid. The various forms of the aperture problem encountered are investigated, and the derived constraint equations are solved in a total least squares (TLS) framework. We propose a regularization scheme to compute dense full flow fields from the sparse TLS solutions. The performance of the algorithm is analyzed quantitatively for both synthetic and real data. Finally we apply the method to compute the 3D motion field of living plant leaves.

4.
A Continuous Processing Method Combining Image Segmentation and Optical Flow Computation   (cited: 3; self: 0; others: 3)
This paper proposes a multi-channel method that combines image segmentation with optical flow computation, making the continuous processing approach applicable to dynamic scenes containing multiple, mutually occluding moving objects.

5.
We present a robust strategy for docking a mobile robot in close proximity with an upright surface using optical flow field divergence and proportional feedback control. Unlike previous approaches, we achieve this without the need for explicit segmentation of features in the image, and using complete gradient-based optical flow estimation (i.e., no affine models) in the optical flow computation. A key contribution is the development of an algorithm to compute the flow field divergence, or time-to-contact, in a manner that is robust to small rotations of the robot during ego-motion. This is done by tracking the focus of expansion of the flow field and using this to compensate for ego rotation of the image. The control law used is a simple proportional feedback, using the unfiltered flow field divergence as an input, for a dynamic vehicle model. Closed-loop stability analysis of docking under the proposed feedback is provided. Performance of the flow field divergence algorithm is demonstrated using offboard natural image sequences, and the performance of the closed-loop system is experimentally demonstrated by control of a mobile robot approaching a wall.
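The time-to-contact cue used here has a simple closed form for a pure expansion field. As an illustrative sketch (a synthetic noise-free flow, not the paper's estimator), the flow (u, v) = (x/tau, y/tau) about the focus of expansion has divergence 2/tau, so tau can be read off numerically:

```python
import numpy as np

# A camera approaching a frontal surface sees an expanding flow about
# the focus of expansion: (u, v) = (x/tau, y/tau), where tau is the
# time to contact. The divergence du/dx + dv/dy equals 2/tau.
tau = 4.0
y, x = np.mgrid[-10:11, -10:11].astype(float)
u, v = x / tau, y / tau

div = np.gradient(u, axis=1) + np.gradient(v, axis=0)  # grid spacing 1
tau_est = 2.0 / div.mean()   # recovers tau = 4.0 on this exact field
```

On real flow fields the divergence estimate is noisy and corrupted by rotation, which is why the paper tracks the focus of expansion to compensate for ego rotation before using the divergence in the control law.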

6.
This research addresses the problem of noise sensitivity inherent in motion and structure algorithms. The motion and structure paradigm is a two-step process. First, image velocities and, perhaps, their spatial and temporal derivatives are measured from time-varying image intensity data; second, these data are used to compute the motion of a moving monocular observer in a stationary environment under perspective projection, relative to a single 3-D planar surface. The first contribution of this article is an algorithm that uses time-varying image velocity information to compute the observer's translation and rotation and the normalized surface gradient of the 3-D planar surface. The use of time-varying image velocity information is an important tool in obtaining a more robust motion and structure calculation. The second contribution of this article is an extensive error analysis of the motion and structure problem. Any motion and structure algorithm that uses image velocity information as its input should exhibit error sensitivity behavior compatible with the results reported here. We perform an average and worst case error analysis for four types of image velocity information: full and normal image velocities and full and normal sets of image velocity and its derivatives. (These derivatives are simply the coefficients of a truncated Taylor series expansion about some point in space and time.) The main issues we address here are: just how sensitive is a motion and structure computation in the presence of noisy input, or alternatively, how accurate must our image velocity information be; how much and what type of input data are needed; and under what circumstances is motion and structure feasible? That is, when can we be sure that a motion and structure computation will produce usable results? We base our answers on a numerical error analysis conducted for a large number of motions.

7.
This paper describes an approach for tracking rigid and articulated objects using a view-based representation. The approach builds on and extends work on eigenspace representations, robust estimation techniques, and parameterized optical flow estimation. First, we note that the least-squares image reconstruction of standard eigenspace techniques has a number of problems, and we reformulate the reconstruction problem as one of robust estimation. Second, we define a subspace constancy assumption that allows us to exploit techniques for parameterized optical flow estimation to simultaneously solve for the view of an object and the affine transformation between the eigenspace and the image. To account for large affine transformations between the eigenspace and the image, we define a multi-scale eigenspace representation and a coarse-to-fine matching strategy. Finally, we use these techniques to track objects over long image sequences in which the objects simultaneously undergo both affine image motions and changes of view. In particular, we use this EigenTracking technique to track and recognize the gestures of a moving hand.

8.
Accurate optical flow computation under non-uniform brightness variations   (cited: 1; self: 0; others: 1)
In this paper, we present a very accurate algorithm for computing optical flow with non-uniform brightness variations. The proposed algorithm is based on a generalized dynamic image model (GDIM) in conjunction with a regularization framework to cope with the problem of non-uniform brightness variations. To alleviate flow constraint errors due to image aliasing and noise, we employ a reweighted least-squares method to suppress unreliable flow constraints, thus leading to robust estimation of optical flow. In addition, a dynamic smoothness adjustment scheme is proposed to efficiently suppress the smoothness constraint in the vicinity of motion and brightness variation discontinuities, thereby preserving motion boundaries. We also employ a constraint refinement scheme, which aims at reducing the approximation errors in the first-order differential flow equation, to refine the optical flow estimation, especially for large image motions. To efficiently minimize the resulting energy function for optical flow computation, we utilize an incomplete Cholesky preconditioned conjugate gradient algorithm to solve the large linear system. Experimental results on synthetic and real image sequences show that the proposed algorithm compares favorably to most existing techniques reported in the literature in terms of accuracy in optical flow computation with 100% density.
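The incomplete Cholesky preconditioned conjugate gradient solver named above can be sketched in a few lines. The IC(0) factorization below (Cholesky updates restricted to the matrix's own sparsity pattern) and the small tridiagonal test system are illustrative stand-ins, not the paper's implementation:

```python
import numpy as np

def ic0(A):
    # IC(0): Cholesky updates applied only where A's lower triangle is
    # already nonzero, so the factor stays as sparse as the matrix.
    n = A.shape[0]
    L = np.tril(A).astype(float)
    for k in range(n):
        L[k, k] = np.sqrt(L[k, k])
        for i in range(k + 1, n):
            if L[i, k] != 0.0:
                L[i, k] /= L[k, k]
        for j in range(k + 1, n):
            for i in range(j, n):
                if L[i, j] != 0.0:
                    L[i, j] -= L[i, k] * L[j, k]
    return L

def pcg(A, b, L, tol=1e-10, maxit=100):
    # Conjugate gradients preconditioned with M = L @ L.T.
    x = np.zeros_like(b)
    r = b - A @ x
    z = np.linalg.solve(L.T, np.linalg.solve(L, r))   # M^{-1} r
    p = z.copy()
    for _ in range(maxit):
        Ap = A @ p
        alpha = (r @ z) / (p @ Ap)
        x += alpha * p
        r_new = r - alpha * Ap
        if np.linalg.norm(r_new) < tol:
            break
        z_new = np.linalg.solve(L.T, np.linalg.solve(L, r_new))
        beta = (r_new @ z_new) / (r @ z)
        p = z_new + beta * p
        r, z = r_new, z_new
    return x

# Tridiagonal SPD system, standing in for the large sparse linear
# system of the regularized flow problem.
n = 30
A = 2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1) + 0.1 * np.eye(n)
b = np.ones(n)
x = pcg(A, b, ic0(A))
```

For a tridiagonal matrix IC(0) incurs no fill-in and coincides with the exact Cholesky factor, so PCG converges almost immediately; on the large flow systems the payoff is that the incomplete factor stays cheap to apply while still clustering the eigenvalues.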

9.
Design and Use of Linear Models for Image Motion Analysis   (cited: 7; self: 1; others: 6)
Linear parameterized models of optical flow, particularly affine models, have become widespread in image motion analysis. The linear model coefficients are straightforward to estimate, and they provide reliable estimates of the optical flow of smooth surfaces. Here we explore the use of parameterized motion models that represent much more varied and complex motions. Our goals are threefold: to construct linear bases for complex motion phenomena; to estimate the coefficients of these linear models; and to recognize or classify image motions from the estimated coefficients. We consider two broad classes of motions: i) generic motion features such as motion discontinuities and moving bars; and ii) non-rigid, object-specific motions such as the motion of human mouths. For motion features we construct a basis of steerable flow fields that approximate the motion features. For object-specific motions we construct basis flow fields from example motions using principal component analysis. In both cases, the model coefficients can be estimated directly from spatiotemporal image derivatives with a robust, multi-resolution scheme. Finally, we show how these model coefficients can be used to detect and recognize specific motions such as occlusion boundaries and facial expressions.
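The principal-component construction of basis flow fields can be sketched as follows; the toy generating fields, grid size, and coefficients below are invented for illustration, not taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)
# Toy training flows on an 8x8 grid: every example mixes two generating
# fields (a horizontal translation and an expansion), so the collection
# has intrinsic dimension 2.
y, x = np.mgrid[-4:4, -4:4].astype(float)
translation = np.stack([np.ones_like(x), np.zeros_like(y)]).ravel()
expansion = np.stack([x, y]).ravel()
examples = np.array([a * translation + b * expansion
                     for a, b in rng.normal(size=(50, 2))])

# PCA via SVD of the mean-centered examples: the leading right singular
# vectors are the basis flow fields.
mean = examples.mean(axis=0)
U, s, Vt = np.linalg.svd(examples - mean, full_matrices=False)
basis = Vt[:2]

# Any flow in the same family is captured by just two coefficients.
new_flow = 0.7 * translation - 1.3 * expansion
coeffs = basis @ (new_flow - mean)
recon = mean + coeffs @ basis
```

The singular values beyond the intrinsic dimension are numerically zero here, which is how one would choose the number of basis flow fields to keep on real example motions.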

10.
Reliable and Efficient Computation of Optical Flow   (cited: 3; self: 3; others: 3)
In this paper, we present two very efficient and accurate algorithms for computing optical flow. The first is a modified gradient-based regularization method, and the other is an SSD-based regularization method. For the gradient-based method, to amend the errors in the discrete image flow equation caused by numerical differentiation as well as temporal and spatial aliasing in the brightness function, we propose to selectively combine the image flow constraint and a contour-based flow constraint into the data constraint by using a reliability measure. Each data constraint is appropriately normalized to obtain an approximate minimum distance (of the data point to the linear flow equation) constraint instead of the conventional linear flow constraint. These modifications lead to robust and accurate optical flow estimation. We propose an incomplete Cholesky preconditioned conjugate gradient algorithm to solve the resulting large and sparse linear system efficiently. Our SSD-based regularization method uses a normalized SSD measure (based on a similar reasoning as in the gradient-based scheme) as the data constraint in a regularization framework. The nonlinear conjugate gradient algorithm in conjunction with incomplete Cholesky preconditioning is developed to solve the resulting nonlinear minimization problem. Experimental results on synthetic and real image sequences for these two algorithms are given to demonstrate their performance in comparison with competing methods reported in the literature.

11.
We present a new shape from shading algorithm, extending a recently introduced approach to the photometric motion process to the single-input case. As proposed by Pentland, photometric motion is based on the intensity variation, due to motion, at a given point on a rotating surface. Recently, an alternative formulation has appeared, based on the intensity change at a fixed image location. Expressing this as a function of reflectance-map and motion-field parameters, a constraint on the shape of the imaged surface can be obtained. Coupled with an affine matching constraint, this has been shown to yield a closed-form expression for the surface function. Here, we extend this formulation to the single-input case by using the Green's function of an affine matching equation to generate an artificial pair to the input image, corresponding to an approximate rendition of the imaged surface under a rotated view. Using this, we are able to obtain high-quality shape-from-shading estimates, even under conditions of unknown reflectance map and light source direction, as demonstrated here by an extensive experimental study.

12.
Yi Meng (易盟), Chu Yan (楚岩). Computer Science (《计算机科学》), 2016, 43(8): 313-317
Considering the severe jitter of airborne aerial imaging platforms, the inconsistent accuracy of the matching stage in video stabilization, and the demand for fast, accurate stabilization of aerial imagery, this paper proposes an image stabilization algorithm combining affine-invariant constraints with a fast extended Kalman filter (EKF). The algorithm first takes corner points in the video reference frame as feature points and selects stable corners with a Harris detector. It then builds a Delaunay triangulation over the points to be registered for initial matching, and applies the proposed affine-invariant constraint to screen out accurate matches. Finally, a fast EKF motion filter estimates and corrects the noise statistics in real time, removing the jitter present in the camera's scanning motion. In simulations on a large set of aerial images at a resolution of 640×480 pixels, the affine-invariant constraint yields accurate model estimation, and the fast motion-compensation method takes 5.054 ms per compensation, 69.5% less time than conventional motion compensation. Experimental results show that the algorithm stabilizes inter-frame jitter in aerial video in real time while faithfully following the true scanning motion of the scene.
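The motion-filtering step (separating intentional scanning from platform jitter) can be illustrated with a plain linear constant-velocity Kalman filter. This is a simplified stand-in for the paper's fast EKF with adaptive noise statistics, and all parameter values below are invented:

```python
import numpy as np

def kalman_smooth(measurements, q=1e-3, r=0.25):
    # Constant-velocity Kalman filter over a 1-D inter-frame motion
    # parameter (e.g. horizontal translation). The filtered state follows
    # the intentional scan; the measurement-minus-filtered residual is
    # the jitter to compensate. q, r are illustrative noise levels.
    F = np.array([[1.0, 1.0], [0.0, 1.0]])   # state: [position, velocity]
    H = np.array([[1.0, 0.0]])
    Q = q * np.eye(2)
    R = np.array([[r]])
    x = np.zeros(2)
    P = np.eye(2)
    out = []
    for z in measurements:
        x = F @ x                             # predict
        P = F @ P @ F.T + Q
        S = H @ P @ H.T + R                   # innovation covariance
        K = (P @ H.T) / S                     # Kalman gain
        x = x + (K * (z - H @ x)).ravel()     # update
        P = (np.eye(2) - K @ H) @ P
        out.append(x[0])
    return np.array(out)

rng = np.random.default_rng(1)
scan = 0.5 * np.arange(100)                   # smooth intentional pan
jitter = rng.normal(scale=0.5, size=100)      # platform shake
smoothed = kalman_smooth(scan + jitter)
compensation = (scan + jitter) - smoothed     # jitter estimate per frame
```

Stabilization then warps each frame by the negated compensation, so the output follows the smoothed scan path rather than the shaky measurements.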

13.
Observability of 3D Motion   (cited: 2; self: 2; others: 0)
This paper examines the inherent difficulties in observing 3D rigid motion from image sequences. It does so without considering a particular estimator. Instead, it presents a statistical analysis of all the possible computational models which can be used for estimating 3D motion from an image sequence. These computational models are classified according to the mathematical constraints that they employ and the characteristics of the imaging sensor (restricted field of view and full field of view). Regarding the mathematical constraints, there exist two principles relating a sequence of images taken by a moving camera. One is the epipolar constraint, applied to motion fields, and the other is the positive depth constraint, applied to normal flow fields. 3D motion estimation amounts to optimizing these constraints over the image. A statistical modeling of these constraints leads to functions which are studied with regard to their topographic structure, specifically as regards the errors in the 3D motion parameters at the places representing the minima of the functions. For conventional video cameras possessing a restricted field of view, the analysis shows that for algorithms in both classes which estimate all motion parameters simultaneously, the obtained solution has an error such that the projections of the translational and rotational errors on the image plane are perpendicular to each other. Furthermore, the estimated projection of the translation on the image lies on a line through the origin and the projection of the real translation. The situation is different for a camera with a full (360 degree) field of view (achieved by a panoramic sensor or by a system of conventional cameras). In this case, at the locations of the minima of the above two functions, either the translational or the rotational error becomes zero, while in the case of a restricted field of view both errors are non-zero. Although some ambiguities still remain in the full field of view case, the implication is that visual navigation tasks, such as visual servoing, involving 3D motion estimation are easier to solve by employing panoramic vision. Also, the analysis makes it possible to compare properties of algorithms that first estimate the translation and on the basis of the translational result estimate the rotation, algorithms that do the opposite, and algorithms that estimate all motion parameters simultaneously, thus providing a sound framework for the observability of 3D motion. Finally, the introduced framework points to new avenues for studying the stability of image-based servoing schemes.

14.
Recursive 3-D Visual Motion Estimation Using Subspace Constraints   (cited: 9; self: 4; others: 9)
A structure from motion algorithm is described which recovers structure and camera position, modulo a projective ambiguity. Camera calibration is not required, and camera parameters such as focal length can be altered freely during motion. The structure is updated sequentially over an image sequence, in contrast to schemes which employ a batch process. A specialisation of the algorithm to recover structure and camera position modulo an affine transformation is described, together with a method to periodically update the affine coordinate frame to prevent drift over time. We describe the constraint used to obtain this specialisation. Structure is recovered from image corners detected and matched automatically and reliably in real image sequences. Results are shown for reference objects and indoor environments, and the accuracy of recovered structure is fully evaluated and compared for a number of reconstruction schemes. A specific application of the work is demonstrated: affine structure is used to compute free space maps enabling navigation through unstructured environments and avoidance of obstacles. The path planning involves only affine constructions.

15.
We consider the stratified self-calibration (affine and metric reconstruction) problem from images acquired with a camera with unchanging internal parameters undergoing circular motion. The general stratified method (modulus constraints) is known to fail with this motion. In this paper we give a novel constraint on the plane at infinity in projective reconstruction for circular motion: the constant inter-frame motion constraint on the plane at infinity between every two adjacent views and a fixed view of the motion sequence, which exploits the fact that in many commercial systems the rotation angles are constant. An initial solution can be obtained by using the first three views of the sequence, and Stratified Iterative Particle Swarm Optimization (SIPSO) is proposed to obtain an accurate and robust solution when more views are at hand. Instead of using a traditional optimization algorithm as the last step, the information of the whole motion sequence is exploited before computing the camera calibration matrix, which results in a more accurate and robust solution. Once the plane at infinity is identified, the calibration matrices of the camera and a metric reconstruction can be readily obtained. Experiments on both synthetic and real image sequences are given, showing the accuracy and robustness of the new algorithm.

16.
Shape from texture is best analyzed in two stages, analogous to stereopsis and structure from motion: (a) Computing the texture distortion from the image, and (b) Interpreting the texture distortion to infer the orientation and shape of the surface in the scene. We model the texture distortion for a given point and direction on the image plane as an affine transformation and derive the relationship between the parameters of this transformation and the shape parameters. We have developed a technique for estimating affine transforms between nearby image patches which is based on solving a system of linear constraints derived from a differential analysis. One need not explicitly identify texels or make restrictive assumptions about the nature of the texture such as isotropy. We use non-linear minimization of a least squares error criterion to recover the surface orientation (slant and tilt) and shape (principal curvatures and directions) based on the estimated affine transforms in a number of different directions. A simple linear algorithm based on singular value decomposition of the linear parts of the affine transforms provides the initial guess for the minimization procedure. Experimental results on both planar and curved surfaces under perspective projection demonstrate good estimates for both orientation and shape. A sensitivity analysis yields predictions for both computer vision algorithms and human perception of shape from texture.
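Estimating an affine transform by least squares can be sketched with point correspondences. Note the paper estimates the transforms from differential constraints on image patches rather than from matched points; this generic version, with invented data, only shows the linear-system structure in the six affine parameters:

```python
import numpy as np

def fit_affine(src, dst):
    # Least-squares 2-D affine fit dst ≈ A @ src + t, written as one
    # linear system in the six parameters (a11, a12, a21, a22, t1, t2).
    n = src.shape[0]
    M = np.zeros((2 * n, 6))
    M[0::2, 0:2] = src      # x' = a11*x + a12*y + t1
    M[0::2, 4] = 1.0
    M[1::2, 2:4] = src      # y' = a21*x + a22*y + t2
    M[1::2, 5] = 1.0
    p, *_ = np.linalg.lstsq(M, dst.ravel(), rcond=None)
    return p[:4].reshape(2, 2), p[4:]

rng = np.random.default_rng(2)
A_true = np.array([[1.1, 0.2], [-0.1, 0.9]])
t_true = np.array([3.0, -1.0])
src = rng.normal(size=(12, 2))
dst = src @ A_true.T + t_true
A_est, t_est = fit_affine(src, dst)
```

In the shape-from-texture setting, the linear part of each recovered transform is what carries the distortion information; its singular value decomposition feeds the slant/tilt initial guess, as the abstract describes.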

17.
We present the focal flow sensor. It is an unactuated, monocular camera that simultaneously exploits defocus and differential motion to measure a depth map and a 3D scene velocity field. It does this using an optical-flow-like, per-pixel linear constraint that relates image derivatives to depth and velocity. We derive this constraint, prove its invariance to scene texture, and prove that it is exactly satisfied only when the sensor’s blur kernels are Gaussian. We analyze the inherent sensitivity of the focal flow cue, and we build and test a prototype. Experiments produce useful depth and velocity information for a broader set of aperture configurations, including a simple lens with a pillbox aperture.

18.
A fast scalable algorithm for discontinuous optical flow estimation   (cited: 4; self: 0; others: 4)
Multiple moving objects, partially occluded objects, or even a single object moving against the background give rise to discontinuities in the optical flow field of the corresponding image sequences. While moderately fast techniques based on uniform global regularization cannot provide accurate estimates of the discontinuous flow field, accurate techniques based on statistical optimization suffer from excessive solution time. A "weighted anisotropic" smoothness-based, numerically robust algorithm is proposed that can generate a discontinuous optical flow field with high speed and linear computational complexity. A weighted sum of the first-order spatial derivatives of the flow field is used for regularization. Less regularization is performed where strong gradient information is available. The flow field at any point is interpolated more from those at neighboring points along the weaker intensity gradient component. Such intensity-gradient-weighted regularization leads to Euler-Lagrange equations with strong anisotropies coupled with discontinuities in their coefficients. A robust multilevel iterative technique, which recursively generates coarse-level problems based on intensity-gradient-weighted smoothing weights, is employed to estimate the discontinuous optical flow field. Experimental results are presented to demonstrate the efficacy of the proposed technique.

19.
Robust reweighted MAP motion estimation   (cited: 2; self: 0; others: 2)
This paper proposes a motion estimation algorithm that is robust to motion discontinuity and noise. The proposed algorithm is constructed by embedding the least median of squares (LMedS) of robust statistics into the maximum a posteriori (MAP) estimator. Difficulties in accurate estimation of the motion field arise from the smoothness constraint and the sensitivity to noise. To cope robustly with these problems, a median operator and the concept of reweighted least squares (RLS) are applied to the MAP motion estimator, resulting in the reweighted robust MAP (RRMAP). The proposed RRMAP motion estimation algorithm is also generalized for multiple image frame cases. Computer simulation with various synthetic image sequences shows that the proposed algorithm reduces errors, compared to three existing robust motion estimation algorithms that are based on M-estimation, total least squares (TLS), and the Hough transform. It is also observed that the proposed algorithm is statistically efficient and robust to additive Gaussian noise and impulse noise. Furthermore, the proposed algorithm yields reasonable performance for real image sequences.
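The reweighted least squares idea at the core of the estimator can be shown on a 1-D fit. The Huber-style weights and line-fitting setting below are a generic illustration of RLS robustness, not the paper's RRMAP formulation:

```python
import numpy as np

def irls_fit(x, y, iters=25, c=1.0):
    # Iteratively reweighted least squares: after each fit, residuals
    # larger than c get weight c/|r| (Huber weights), so gross outliers
    # lose their quadratic pull on the next fit.
    X = np.stack([x, np.ones_like(x)], axis=1)
    w = np.ones_like(x)
    for _ in range(iters):
        sw = np.sqrt(w)[:, None]
        beta, *_ = np.linalg.lstsq(sw * X, sw.ravel() * y, rcond=None)
        r = y - X @ beta
        w = np.where(np.abs(r) <= c, 1.0,
                     c / np.maximum(np.abs(r), 1e-12))
    return beta

rng = np.random.default_rng(3)
x = np.linspace(0.0, 10.0, 60)
y = 2.0 * x + 1.0 + rng.normal(scale=0.05, size=x.size)
y[::10] += 25.0                       # gross outliers every 10th sample
slope, intercept = irls_fit(x, y)
```

An ordinary least-squares fit would be pulled far off by the contaminated samples; the reweighted fit stays close to the true slope 2 and intercept 1, which is the behavior the RRMAP estimator seeks for motion fields corrupted by impulse noise and discontinuities.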

20.
It is argued that accurate optical flow can only be determined if problems such as local motion ambiguity, motion segmentation, and occlusion detection are simultaneously addressed. To meet this requirement, a new multiresolution region-growing algorithm is proposed. This algorithm consists of a region-growing process which is able to segment the flow field in an image into homogeneous regions which are consistent with a linear affine flow model. To ensure stability and robustness in the presence of noise, this region-growing process is implemented within the hierarchical framework of a spatial lowpass pyramid. The results of applying this algorithm to both natural and synthetic image sequences are presented.


Copyright©北京勤云科技发展有限公司  京ICP备09084417号