Similar documents
 20 similar documents found (search time: 31 ms)
1.
A kinematic model-based approach for the estimation of 3-D motion parameters from a sequence of noisy stereo images is discussed. The approach is based on representing the constant-acceleration translational motion and constant-precession rotational motion in the form of a bilinear state-space model using standard rectilinear states for translation and quaternions for rotation. Closed-form solutions of the state transition equations are obtained to propagate the quaternions. The measurements are noisy perturbations of 3-D feature points represented in an inertial coordinate system. It is assumed that the 3-D feature points are extracted from the stereo images and matched over the frames. Owing to the nonlinearity in the state model, nonlinear filters are designed for the estimation of motion parameters. Simulation results are included. The Cramér-Rao performance bounds for motion parameter estimates are computed. A constructive proof for the uniqueness of motion parameters is given. It is shown that with uniform sampling in time, three noncollinear feature points in five consecutive binocular image pairs contain all the spatial and temporal information. Both nondegenerate and degenerate motions are analyzed. A deterministic algorithm to recover motion parameters from a stereo image sequence is summarized from the constructive proof.
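The closed-form quaternion propagation mentioned in this abstract can be sketched as follows. This is a minimal illustration, not the authors' full bilinear state model: under a constant body angular velocity the attitude quaternion has an exact exponential-map update, so no numerical integration is needed. All function names are mine.

```python
import numpy as np

def quat_mul(a, b):
    # Hamilton product of quaternions stored as [w, x, y, z].
    w1, x1, y1, z1 = a
    w2, x2, y2, z2 = b
    return np.array([
        w1*w2 - x1*x2 - y1*y2 - z1*z2,
        w1*x2 + x1*w2 + y1*z2 - z1*y2,
        w1*y2 - x1*z2 + y1*w2 + z1*x2,
        w1*z2 + x1*y2 - y1*x2 + z1*w2,
    ])

def propagate(q, omega, dt):
    # Closed-form propagation of q under constant angular velocity omega:
    # q(t+dt) = exp((dt/2) * [0, omega]) ⊗ q(t), which is an exact rotation
    # by |omega|*dt about the axis omega/|omega|.
    theta = np.linalg.norm(omega) * dt
    if theta < 1e-12:
        return q.copy()
    axis = omega / np.linalg.norm(omega)
    dq = np.concatenate(([np.cos(theta / 2)], np.sin(theta / 2) * axis))
    return quat_mul(dq, q)
```

Because the update is exact for constant omega, two half-steps compose to the same result as one full step, and the unit norm of the quaternion is preserved.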

2.
Traditional optical flow algorithms assume local image translational motion and apply simple image filtering techniques. Recent studies have taken two separate approaches toward improving the accuracy of computed flow: the application of spatio-temporal filtering schemes and the use of advanced motion models such as the affine model. Each has achieved some improvement over traditional algorithms in specialized situations but the computation of accurate optical flow for general motion has been elusive. In this paper, we exploit the interdependency between these two approaches and propose a unified approach. The general motion model we adopt characterizes arbitrary 3-D steady motion. Under perspective projection, we derive an image motion equation that describes the spatio-temporal relation of gray-scale intensity in an image sequence, thus making the utilization of 3-D filtering possible. However, to accommodate this motion model, we need to extend the filter design to derive additional motion constraint equations. Using Hermite polynomials, we design differentiation filters, whose orthogonality and Gaussian derivative properties insure numerical stability; a recursive relation facilitates application of the general nonlinear motion model while separability promotes efficiency. The resulting algorithm produces accurate optical flow and other useful motion parameters. It is evaluated quantitatively using the scheme established by Barron et al. (1994) and qualitatively with real images.
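The Hermite-polynomial differentiation filters this abstract refers to can be illustrated with the standard identity linking physicists' Hermite polynomials to Gaussian derivatives: the n-th derivative of a Gaussian g(x) with scale sigma is (-1)^n H_n(x/(sigma*sqrt(2))) g(x) / (sigma*sqrt(2))^n. The sketch below builds such a 1-D kernel; it is my own minimal construction, not the authors' recursive filter bank.

```python
import numpy as np
from numpy.polynomial.hermite import hermval

def gaussian_derivative_filter(order, sigma, radius):
    # Sampled n-th derivative of a Gaussian, via physicists' Hermite
    # polynomials: g^(n)(x) = (-1)^n H_n(x/(s*sqrt(2))) g(x) / (s*sqrt(2))^n.
    x = np.arange(-radius, radius + 1, dtype=float)
    u = x / (sigma * np.sqrt(2.0))
    coeffs = np.zeros(order + 1)
    coeffs[order] = 1.0            # select H_order in the Hermite series
    h = hermval(u, coeffs)
    g = np.exp(-x**2 / (2 * sigma**2))
    return (-1)**order * h * g / (sigma * np.sqrt(2.0))**order
```

Convolving an image row with such kernels of increasing order yields the smoothed spatio-temporal derivatives that differential flow methods require; separability (one 1-D pass per axis) is what makes the approach efficient.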

3.
Rigid-body motion reconstruction from line optical flow based on a genetic algorithm
A new model is established for recovering rigid-body motion and structure from a monocular image sequence based on the line optical flow field. The relationship between the line optical flow field and the rigid body's motion parameters is derived and expressed as two second-order linear differential equations, and a genetic algorithm is proposed to solve for the motion parameters. Only two line optical flows in the image plane are needed to solve for the rigid body's rotation parameters. The effectiveness of the algorithm is verified on synthetic images.

4.
This paper addresses the use of spatio-temporal transform methods applied to the analysis of dynamic image sequences and the characterization of image motion. The image motion including a divergent component (resulting from a looming camera component) is analyzed in the spatio-temporal Mellin transform (MT) domain, resulting in the separation of the spectrum into two parts: a structural term corresponding to the spatial MT of the static image, and a kinematic term depending on time-to-collision (a motion support). We examine potential applications of this property for the recovery of image motion from integral image brightness measurements and the computation of time-to-collision using spatio-temporal MT analysis.

5.
A novel procedure is presented to construct image-domain filters (receptive fields) that directly recover local motion and shape parameters. These receptive fields are derived from training on image deformations that best discriminate between different shape and motion parameters. Beginning with the construction of 1-D receptive fields that detect local surface shape and motion parameters within cross sections, we show how the recovered model parameters are sufficient to produce local estimates of optical flow, focus of expansion, and time to collision. The theory is supported by a series of experiments on well-known image sequences for which ground truth is available. Comparisons against published results are quite competitive, which we believe to be significant given the local, feed-forward nature of the resulting algorithms.

6.
The authors present an iterative algorithm for the recovery of 2-D motion, i.e., for the determination of a transformation that maps one image onto another. The local ambiguity in measuring the motion of contour segments (the aperture problem) implies a reliance on measurements along the normal direction. Since the measured normal flow does not agree with the actual normal flow, the full flow recovered from this erroneous flow also possesses substantial error, and any attempt to recover the 3-D motion from such full flow fails. The proposed method is based on the observation that a polynomial approximation of the image flow provides sufficient information for 3-D motion computation. The use of an explicit flow model results in improved normal flow estimates through an iterative process. The authors discuss the adequacy and the convergence of the algorithm. The algorithm was tested on some synthetic and some simple natural time-varying images. The image flow recovered from this scheme is sufficiently accurate to be useful in 3-D structure and motion computation.

7.
8.
We present methods for estimating forces which drive motion observed in density image sequences. Using these forces, we also present methods for predicting velocity and density evolution. To do this, we formulate and apply a Minimum Energy Flow (MEF) method which is capable of estimating both incompressible and compressible flows from time-varying density images. Both the MEF and force-estimation techniques are applied to experimentally obtained density images, spanning spatial scales from micrometers to several kilometers. Using density image sequences describing cell splitting, for example, we show that cell division is driven by gradients in apparent pressure within a cell. Using density image sequences of fish shoals, we also quantify 1) intershoal dynamics such as coalescence of fish groups over tens of kilometers, 2) fish mass flow between different parts of a large shoal, and 3) the stresses acting on large fish shoals.

9.
This paper describes a fast and efficient registration method for generating panoramas from video images or video image sequences. To estimate the correction parameters for image registration, the method computes pseudo motion vectors, which are coarse estimates of the optical flow at each selected pixel. Using this method, we implemented software that creates and displays panoramic images in real time on a low-cost PC.

10.
Three-dimensional scene flow
Just as optical flow is the two-dimensional motion of points in an image, scene flow is the three-dimensional motion of points in the world. The fundamental difficulty with optical flow is that only the normal flow can be computed directly from the image measurements, without some form of smoothing or regularization. In this paper, we begin by showing that the same fundamental limitation applies to scene flow, however many cameras are used to image the scene. There are then two choices when computing scene flow: 1) perform the regularization in the images or 2) perform the regularization on the surface of the object in the scene. In this paper, we choose to compute scene flow using regularization in the images. We describe three algorithms, the first two for computing scene flow from optical flows and the third for constraining scene structure from the inconsistencies in multiple optical flows.
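The core linear step in computing scene flow from optical flows can be sketched as follows: each camera's 2x3 projection Jacobian maps the unknown 3-D displacement dX to that camera's observed image flow, so with two or more cameras dX follows from least squares. This is a minimal illustration under my own naming, not the paper's exact algorithm.

```python
import numpy as np

def scene_flow_from_flows(X, Ps, flows):
    # X: a 3-D point; Ps: list of 3x4 projection matrices; flows: list of
    # 2-vectors, the optical flow of X's image in each camera.
    # Solve the stacked least-squares system  J_i dX = u_i,  where J_i is
    # the Jacobian of camera i's perspective projection at X.
    A, b = [], []
    for P, u in zip(Ps, flows):
        x, y, w = P @ np.append(X, 1.0)
        Jx = (w * P[0, :3] - x * P[2, :3]) / w**2   # d(x/w)/dX
        Jy = (w * P[1, :3] - y * P[2, :3]) / w**2   # d(y/w)/dX
        A += [Jx, Jy]
        b += [u[0], u[1]]
    dX, *_ = np.linalg.lstsq(np.array(A), np.array(b), rcond=None)
    return dX
```

With a single camera the system is rank-deficient (the depth component of dX is unobservable), which mirrors the abstract's point that regularization or multiple views are needed.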

11.
A theory of the motion fields of curves
This article reports a study of the motion field generated by moving 3-D curves that are observed by a camera. We first discuss the relationship between optical flow and motion field and show that the assumptions made in the computation of the optical flow are difficult to defend. We then study the motion field of a general curve. We first study the general case of a curve moving nonrigidly and introduce the notion of isometric motion. In order to do this, we introduce the notion of the spatio-temporal surface and study its differential properties up to the second order. We show that, contrary to what is commonly believed, the full motion field of the curve (i.e., the component tangent to the curve) cannot be recovered from this surface. We also give the equations that characterize the spatio-temporal surface completely up to a rigid transformation. Those equations are the expressions of the first and second fundamental forms and the Gauss and Codazzi-Mainardi equations. We then relate those differential expressions computed on the spatio-temporal surface to quantities that can be computed from the image intensities. The actual values depend upon the choice of the edge detector.
We then show that the hypothesis of a rigid 3-D motion in general allows the structure and the motion of the curve to be recovered, in fact without explicitly computing the tangential motion field, at the cost of introducing the three-dimensional accelerations. We first study the motion field generated by the simplest kind of rigid 3-D curves, namely lines. This study is illuminating in that it paves the way for the study of general rigid curves and because of the useful results which are obtained. We then extend the results obtained in the case of lines to the case of general curves and show that at each point of the image curve two equations can be written relating the kinematic screw of the moving 3-D curve and its time derivative to quantities defined in the study of the general nonrigid motion that can be measured from the spatio-temporal surface and therefore from the image. This shows that the structure and the motion of the curve can be recovered from six image points only, without establishing any point correspondences. Finally, we study the cooperation between motion and stereo in the framework of this theory. The use of two cameras instead of one allows us to get rid of the three-dimensional accelerations, and the relations between the two spatio-temporal surfaces of the same rigidly moving 3-D curve can be used to help disambiguate stereo correspondences.

12.
This paper is concerned with three-dimensional (3-D) analysis, and analysis-guided synthesis, of images showing 3-D motion of an observer relative to a scene. There are two objectives. First, the paper presents an approach to recovering 3-D motion and structure parameters from multiple cues present in a monocular image sequence, such as point features, optical flow, regions, lines, texture gradient, and the vanishing line. Second, it introduces the notion that the cues that contribute the most to 3-D interpretation are also the ones that yield the most realistic synthesis, thus suggesting an approach to analysis-guided 3-D representation. For concreteness, the paper focuses on flight image sequences of a planar, textured surface. The integration of information from these diverse cues is carried out using optimization. For reliable estimation, a sequential batch method is used to compute motion and structure. Synthesis is done by using (i) image attributes extracted from the image sequence, and (ii) simple, artificial image attributes which are not present in the original images. For display, real and/or artificial attributes are shown as a monocular or a binocular sequence. Performance evaluation is done through experiments with one synthetic sequence and two real image sequences digitized from a commercially available video tape and a laserdisc. The attribute-based representation of these sequences compressed their sizes by factors of 502 and 367. The visualization sequence appears very similar to the original sequence in informal, monocular as well as stereo viewing on a workstation monitor.

13.
A traditional approach to extracting geometric information from a large scene is to compute multiple 3-D depth maps from stereo pairs or direct range finders, and then to merge the 3-D data. However, the resulting merged depth maps may be subject to merging errors if the relative poses between depth maps are not known exactly. In addition, the 3-D data may also have to be resampled before merging, which adds complexity and potential sources of error. This paper provides a means of directly extracting 3-D data covering a very wide field of view, thus bypassing the need for numerous depth-map merges. In our work, cylindrical images are first composited from sequences of images taken while the camera is rotated 360° about a vertical axis. By taking such image panoramas at different camera locations, we can recover 3-D data of the scene using a set of simple techniques: feature tracking, an 8-point structure-from-motion algorithm, and multibaseline stereo. We also investigate the effect of median filtering on the recovered 3-D point distributions, and show the results of our approach applied to both synthetic and real scenes.
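The 8-point structure-from-motion step mentioned above is a classical algorithm and can be sketched compactly: each point correspondence gives one linear constraint x2ᵀ E x1 = 0 on the essential matrix E, solved by SVD, after which the rank-2 structure of E is enforced. This is a textbook sketch (my own code, not the paper's implementation), and it assumes calibrated, normalized image coordinates.

```python
import numpy as np

def eight_point(x1, x2):
    # x1, x2: Nx2 arrays of matched, normalized image coordinates (N >= 8).
    # Each match contributes one row of the linear system A e = 0, where e
    # is E flattened row-major; the solution is the smallest singular vector.
    A = np.array([[u2*u1, u2*v1, u2, v2*u1, v2*v1, v2, u1, v1, 1.0]
                  for (u1, v1), (u2, v2) in zip(x1, x2)])
    _, _, Vt = np.linalg.svd(A)
    E = Vt[-1].reshape(3, 3)
    # Enforce the essential-matrix constraint: two equal singular values
    # and one zero (a true E always has singular values (s, s, 0)).
    U, S, Vt = np.linalg.svd(E)
    return U @ np.diag([1.0, 1.0, 0.0]) @ Vt
```

E is recovered only up to scale, which is why structure from motion alone fixes translation (and hence scene scale) only up to a global factor; the paper's multibaseline stereo step then refines depth.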

14.
This paper proposes a new neural algorithm to segment an observed scene into regions corresponding to different moving objects by analyzing a time-varying image sequence. The method consists of a classification step, where the motion of small patches is characterized through an optimization approach, and a segmentation step, which merges neighboring patches characterized by the same motion. Classification of motion is performed without optical flow computation; instead, only the spatial and temporal image gradients are considered in an appropriate energy function, minimized with a Hopfield-like neural network whose output directly gives the 3-D motion parameter estimates. Network convergence is accelerated by integrating the quantitative estimation of motion parameters with a qualitative estimate of dominant motion using the geometric theory of differential equations.

15.
Based on an analysis of scanline features in corresponding views, a dense image matching method based on scanline segmentation is proposed. Using gray-level features, the scanlines of the rectified images are divided into segments of similar intensity, and new scanlines are reconstructed with gray-level-averaged segments as primitives. Dense image matching is then performed on top of the segment matching, reducing the ambiguity of the dense matching process and improving matching efficiency. Experimental results are presented.

16.
Differential optical flow methods allow the estimation of optical flow fields based on the first-order and even higher-order spatio-temporal derivatives (gradients) of sequences of input images. If the input images are noisy, for instance because of the limited quality of the capturing devices or due to poor illumination conditions, the use of partial derivatives will amplify that noise and thus end up affecting the accuracy of the computed flow fields. The typical approach in order to reduce that noise consists of smoothing the required gradient images with Gaussian filters, for instance by applying structure tensors. However, that filtering is isotropic and tends to blur the discontinuities that may be present in the original images, thus likely leading to an undesired loss of accuracy in the resulting flow fields. This paper proposes the use of tensor voting as an alternative to Gaussian filtering, and shows that the discontinuity preserving capabilities of the former yield more robust and accurate results. In particular, a state-of-the-art variational optical flow method has been adapted in order to utilize a tensor voting filtering approach. The proposed technique has been tested upon different datasets of both synthetic and real image sequences, and compared to both well known and state-of-the-art differential optical flow methods.
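The Gaussian-smoothed structure tensor that this abstract takes as its baseline can be sketched in a few lines: square and cross-multiply the image gradients, then smooth each channel isotropically. This is a minimal NumPy-only sketch under my own naming (tensor voting itself, the paper's contribution, is not shown).

```python
import numpy as np

def structure_tensor(img, sigma=1.5, radius=4):
    # Per-pixel 2x2 structure tensor J = G_sigma * [Ix^2 IxIy; IxIy Iy^2],
    # using central-difference gradients and separable Gaussian smoothing.
    Iy, Ix = np.gradient(img.astype(float))    # axis 0 = rows (y), axis 1 = cols (x)
    x = np.arange(-radius, radius + 1)
    g = np.exp(-x**2 / (2 * sigma**2))
    g /= g.sum()
    def smooth(a):
        a = np.apply_along_axis(lambda r: np.convolve(r, g, mode="same"), 1, a)
        return np.apply_along_axis(lambda c: np.convolve(c, g, mode="same"), 0, a)
    return smooth(Ix * Ix), smooth(Ix * Iy), smooth(Iy * Iy)
```

It is exactly this isotropic smoothing of Jxx, Jxy, Jyy that blurs motion discontinuities, which motivates replacing it with a discontinuity-preserving scheme such as tensor voting.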

17.
This paper explores an interesting image projection produced by scanning dynamic scenes with a slit camera. Based on the concept of Anorthoscopic Perception, we investigate how a two-dimensional Dynamic Projection Image of three-dimensional scenes is generated from consecutive 1-D snapshots taken through a slit, when the relative motion is homogeneous between the viewer and scenes. By moving the camera in the 3-D environment or rotating an object, we can obtain various dynamic projection images. These dynamic projection images contain major spatial and temporal information about 3-D scenes in a small amount of data. Consequently, the projection is suited for the memorization, registration, and indexing of image sequences. The generated images also directly show some of the motion properties in dynamic scenes. If a relative motion between the camera and a subject is planned properly, the dynamic projection image can even provide a texture image of the subject along with some expected photometry characteristics. Therefore, the dynamic projection can facilitate dynamic object recognition, 3-D structure acquisition, and image compression, all for a stable motion between the objects and camera. We outline various applications in vision, robotics, and multimedia and summarize the motion types and the camera setting for generating such dynamic projection images.
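The construction of a dynamic projection image from consecutive 1-D slit snapshots reduces to stacking the same one-pixel-wide column from every frame, so one image axis is space along the slit and the other is time. A minimal sketch (function name mine):

```python
import numpy as np

def dynamic_projection_image(frames, col):
    # frames: sequence of 2-D arrays (video frames); col: the slit's column.
    # Stack the slit column from each frame side by side: the result has
    # the slit's spatial extent on axis 0 and time on axis 1.
    return np.stack([f[:, col] for f in frames], axis=1)
```

For a scene translating steadily past the slit, the stacked columns reassemble a panorama-like texture of the scene, which is the property the paper exploits for registration and indexing.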

18.
Motion field and optical flow: qualitative properties
It is shown that the motion field, the 2-D vector field which is the perspective projection onto the image plane of the 3-D velocity field of a moving scene, and the optical flow, defined as the estimate of the motion field which can be derived from the first-order variation of the image brightness pattern, are in general different, unless special conditions are satisfied. Therefore, dense optical flow is often ill-suited for computing structure from motion and for reconstructing the 3-D velocity field by algorithms which require a locally accurate estimate of the motion field. A different use of the optical flow is suggested. It is shown that the (smoothed) optical flow and the motion field can be interpreted as vector fields tangent to flows of planar dynamical systems. Stable qualitative properties of the motion field, which give useful information about the 3-D velocity field and the 3-D structure of the scene, can usually be obtained from the optical flow. The idea is supported by results from the theory of structural stability of dynamical systems.

19.
We address the problem of estimating three-dimensional motion, and structure from motion, with an uncalibrated moving camera. We show that point correspondences between three images, and the fundamental matrices computed from these point correspondences, are sufficient to recover the internal orientation of the camera (its calibration) and the motion parameters, and to compute coherent perspective projection matrices which enable us to reconstruct 3-D structure up to a similarity. In contrast with other methods, no calibration object with a known 3-D shape is needed, and no limitations are put upon the unknown motions to be performed or the parameters to be recovered, as long as they define a projective camera. The theory of the method, which is based on the constraint that the observed points are part of a static scene, which allows us to link the intrinsic parameters and the fundamental matrix via the absolute conic, is first detailed. Several algorithms are then presented, and their performances compared by means of extensive simulations and illustrated by several experiments with real images.

20.
Direct passive navigation
In this correspondence, we show how to recover the motion of an observer relative to a planar surface from image brightness derivatives. We do not compute the optical flow as an intermediate step, only the spatial and temporal brightness gradients (at a minimum of eight points). We first present two iterative schemes for solving nine nonlinear equations in terms of the motion and surface parameters that are derived from a least-squares formulation. An initial pass over the relevant image region is used to accumulate a number of moments of the image brightness derivatives. All of the quantities used in the iteration are efficiently computed from these totals without the need to refer back to the image. We then show that either of two possible solutions can be obtained in closed form. We first solve a linear matrix equation for the elements of a 3 × 3 matrix. The eigenvalue decomposition of the symmetric part of the matrix is then used to compute the motion parameters and the plane orientation. A new compact notation allows us to show easily that there are at most two planar solutions.
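The brightness-constancy machinery that "direct" methods like this one build on can be illustrated with a much-simplified sketch: instead of the paper's nine-parameter planar model, solve the least-squares normal equations of Ex·u + Ey·v + Et = 0 over a region for a single translational flow (u, v), using only spatial and temporal gradients and no intermediate optical flow field. All names are mine.

```python
import numpy as np

def flow_from_gradients(I1, I2):
    # Two consecutive frames I1, I2. Accumulate gradient moments over the
    # whole region and solve the 2x2 normal equations of
    #   Ex*u + Ey*v + Et = 0   (least squares)
    # for one translational flow (u, v); no per-pixel flow is computed.
    I = 0.5 * (I1 + I2)              # spatial gradients at mid-time
    Ey, Ex = np.gradient(I)          # axis 0 = rows (y), axis 1 = cols (x)
    Et = I2 - I1
    A = np.array([[(Ex * Ex).sum(), (Ex * Ey).sum()],
                  [(Ex * Ey).sum(), (Ey * Ey).sum()]])
    b = -np.array([(Ex * Et).sum(), (Ey * Et).sum()])
    return np.linalg.solve(A, b)
```

As in the paper, everything reduces to a handful of accumulated moments of the brightness derivatives; the paper's planar-surface model simply replaces the two unknowns (u, v) with nine motion and surface parameters, making the system bilinear rather than linear.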
