Similar Literature
20 similar documents retrieved.
1.
This study proposes a novel complete-order nonlinear structure and motion observer for monocular vision systems subjected to significant measurement noise. In contrast with previous studies that assume noise-free measurements and require prior knowledge of either the relative motion of the camera or the scene geometry, the proposed scheme assumes only a single component of linear velocity as known. Under a persistency of excitation condition, the observer then relies on filtered estimates of optical flow to yield exponentially convergent estimates of the unknown motion parameters and feature depth that converge to a uniform ultimate bound in the presence of measurement noise. The unknown linear and angular velocities are assumed to be generated by an imperfectly known model that incorporates a bounded uncertainty, and optical flow estimation is accomplished using a robust differentiator based on the sliding-mode technique. Numerical results validate the design and demonstrate superior observer performance compared to an alternative leading design in the presence of model uncertainty and measurement noise.
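The sliding-mode-based robust differentiator mentioned above is not detailed in the abstract. The sketch below is a minimal first-order Levant-style differentiator in Python, offered only as an illustration of the technique; the gain choices, the Lipschitz bound L, and the synthetic noisy test signal are assumptions, not values from the paper.

```python
import numpy as np

def sliding_mode_diff(f, dt, L=50.0):
    """First-order sliding-mode (Levant-style) robust differentiator.

    f  : sampled noisy signal, e.g. an image-plane feature coordinate
    dt : sample period
    L  : assumed bound on |d^2 f / dt^2| (a tuning guess)
    Returns an estimate of df/dt at each sample.
    """
    lam0, lam1 = 1.5 * np.sqrt(L), 1.1 * L        # commonly quoted gain choices
    z0, z1 = float(f[0]), 0.0
    df = np.zeros(len(f))
    for k, fk in enumerate(f):
        e = z0 - fk
        v0 = -lam0 * np.sqrt(abs(e)) * np.sign(e) + z1
        z0 += v0 * dt                             # Euler step of the observer state
        z1 += -lam1 * np.sign(e) * dt
        df[k] = z1
    return df

# Synthetic check: differentiate a noisy sinusoid
t = np.arange(0.0, 2.0, 1e-3)
x = np.sin(2 * np.pi * t) + 0.01 * np.random.randn(t.size)
dx_est = sliding_mode_diff(x, dt=1e-3)            # should approach 2*pi*cos(2*pi*t)
```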

2.
A New Linear Method for Reconstructing 3D Motion and Structure from Optical Flow Fields (total citations: 3; self-citations: 0; citations by others: 3)
A new linear method for computing 3D motion and structure from a sparse optical flow field is proposed. The method combines the two classes of processing methods used in visual motion analysis: corner points in the image are selected as feature points, and the corners are detected and tracked through the image sequence. The displacements of the detected corners across the sequence are recorded, and it is proved theoretically that the optical flow field of a time-varying image can be approximated by the corner displacement field, which yields a sparse optical flow field. By establishing an optical flow motion model, a linear method for reconstructing the 3D motion and structure of an object from the sparse optical flow field is derived. The algorithm is validated on real image sequences, and the experimental results show that it performs well.
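As an illustration of the corner-displacement idea (not the paper's own implementation), the following OpenCV sketch detects corners in one frame and tracks them into the next, so that the resulting displacement vectors form a sparse optical flow field; the file names and detector parameters are placeholders.

```python
import cv2
import numpy as np

# Two consecutive frames of a sequence (placeholder file names)
prev = cv2.imread("frame_000.png", cv2.IMREAD_GRAYSCALE)
curr = cv2.imread("frame_001.png", cv2.IMREAD_GRAYSCALE)

# Corner (feature point) detection in the previous frame
corners = cv2.goodFeaturesToTrack(prev, maxCorners=200,
                                  qualityLevel=0.01, minDistance=7)

# Track the corners into the current frame with pyramidal Lucas-Kanade
next_pts, status, err = cv2.calcOpticalFlowPyrLK(prev, curr, corners, None)

# Displacements of successfully tracked corners approximate a sparse optical flow field
good_old = corners[status.ravel() == 1].reshape(-1, 2)
good_new = next_pts[status.ravel() == 1].reshape(-1, 2)
sparse_flow = good_new - good_old
print(sparse_flow[:5])
```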

3.
This paper considers observer design for systems modeled by linear partial differential equations (PDEs) of parabolic type, which may be subject to unknown inputs. The system is assumed to have only one spatial dimension, over which it is discretised to obtain what is referred to as the lattice system, which is a set of linear time invariant (LTI) ordinary differential equations (ODEs) having a canonical Toeplitz-like structure with a specific sparsity pattern. This lattice structure is shown to be particularly appropriate for step-by-step sliding mode observer design that can reconstruct the state estimates at the points of discretisation and estimate the unknown input. Simulation results for both stable and unstable PDEs show that accurate state estimates can be provided at the points of discretisation. An approach to reconstruct the unknown input is demonstrated.
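For concreteness, here is a minimal sketch, not taken from the paper, of how a 1-D parabolic PDE such as u_t = a*u_xx with an unknown input can be discretised in space into the sparse, Toeplitz-structured LTI lattice system described above; the diffusivity, boundary conditions, and input location are illustrative assumptions.

```python
import numpy as np

def lattice_system(n=20, a=1.0, length=1.0):
    """Spatial discretisation of u_t = a*u_xx + b(x)*w(t) on (0, L)
    with zero Dirichlet boundaries, giving x_dot = A x + B w."""
    h = length / (n + 1)                          # grid spacing
    main = -2.0 * np.ones(n)
    off = np.ones(n - 1)
    A = (a / h**2) * (np.diag(main) + np.diag(off, 1) + np.diag(off, -1))
    B = np.zeros((n, 1))
    B[n // 2, 0] = 1.0                            # unknown input assumed to act mid-domain
    return A, B

A, B = lattice_system()
print(A.shape, np.count_nonzero(A))               # tridiagonal (Toeplitz-like) sparsity
```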

4.
This paper presents a novel nonlinear sliding-mode differentiator-based complete-order observer for structure and motion identification with a calibrated monocular camera. In comparison with earlier work that requires prior knowledge of either the Euclidean geometry of the observed object or the linear acceleration of the camera and is restricted to establishing stability and convergence from image-plane measurements of a single tracked feature, the proposed scheme assumes partial velocity state feedback to asymptotically identify the true-scale Euclidean coordinates of numerous observed object features and the unknown motion parameters. The dynamics of the motion parameters are assumed to be described by a model with unknown parameters that incorporates a bounded uncertainty, and a Lyapunov analysis is provided to prove that the observer yields exponentially convergent estimates that converge to a uniform ultimate bound under a generic persistency of excitation condition. Numerical and experimental results are obtained that demonstrate the robust performance of the current scheme in the presence of model error and measurement noise.

5.
A classical problem in machine vision is the range identification of an object moving in three-dimensional space from the two-dimensional image sequence obtained with a monocular camera. This study presents a novel reduced-order optical flow-based nonlinear observer that renders the proposed scheme suitable for depth estimation applications in both well-structured and unstructured environments. In this study, a globally exponentially stable observer is synthesized, where optical flow estimates are derived from tracking feature trajectories on the image plane over successive camera frames, to yield asymptotic estimates of feature depth at a desired convergence rate. Furthermore, the observer is shown to be finite-gain \(\mathcal{L}_{p}\) stable for all \(p \in [1,\infty]\) in the presence of exogenous disturbances influencing camera motion, and is applicable to a wider class of perspective systems than those considered by alternative designs. The observer requires minimal a priori system information for convergence, and the convergence condition arises in a natural manner with an intuitive interpretation. Numerical and experimental studies are used to validate and demonstrate robust observer performance in the presence of significant measurement noise.
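The abstract does not reproduce the underlying model. For orientation, one common form of the perspective kinematics used in such depth observers is sketched below (sign conventions vary between papers and this is not necessarily the paper's formulation). For a static feature with camera-frame coordinates (X, Y, Z), normalised image coordinates x = X/Z and y = Y/Z, camera linear velocity v and angular velocity ω:

```latex
\begin{aligned}
\dot{x} &= \frac{x\,v_3 - v_1}{Z} + x y\,\omega_1 - (1 + x^2)\,\omega_2 + y\,\omega_3,\\
\dot{y} &= \frac{y\,v_3 - v_2}{Z} + (1 + y^2)\,\omega_1 - x y\,\omega_2 - x\,\omega_3,\\
\dot{Z} &= \omega_2 X - \omega_1 Y - v_3 .
\end{aligned}
```

A reduced-order observer in this setting estimates only the unmeasured depth (or inverse depth 1/Z) from the measured (x, y), their optical-flow estimates, and the known motion components, which is the role the optical-flow estimates play in the design above.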

6.
A new algorithm for estimating motion from image sequences is presented. Initial motion estimates are determined based on a least-squares solution to a set of independent linear constraints on the motion at a pixel. These initial estimates are then improved by a nonlinear smoothing operation. The results of this algorithm are compared with those obtained by the Horn-Schunck algorithm on a number of image sequences.
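The abstract mentions a least-squares solution to a set of independent linear constraints on the motion at a pixel, but the paper's exact constraints are not given here. A generic windowed least-squares solve of brightness-constancy constraints (the Lucas-Kanade form) is sketched below purely to illustrate the idea; the window size and gradient approximations are assumptions.

```python
import numpy as np

def ls_flow_at_pixel(I1, I2, r, c, win=7):
    """Least-squares flow (u, v) at pixel (r, c) from brightness-constancy
    constraints Ix*u + Iy*v + It = 0 collected over a win x win window."""
    Iy, Ix = np.gradient(I1.astype(float))        # spatial gradients (rows, cols)
    It = I2.astype(float) - I1.astype(float)      # temporal difference
    half = win // 2
    sl = (slice(r - half, r + half + 1), slice(c - half, c + half + 1))
    A = np.column_stack([Ix[sl].ravel(), Iy[sl].ravel()])
    b = -It[sl].ravel()
    flow, *_ = np.linalg.lstsq(A, b, rcond=None)
    return flow                                   # (u, v)
```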

7.
This paper presents a novel solution to the problem of depth estimation using a monocular camera undergoing known motion. Such problems arise in machine vision where the position of an object moving in three-dimensional space has to be identified by tracking the motion of its projected feature on the two-dimensional image plane. The camera is assumed to be uncalibrated, and an adaptive observer yielding asymptotic estimates of focal length and feature depth is developed that precludes prior knowledge of scene geometry and is simpler than alternative designs. Experimental results using real camera imagery are obtained with the current scheme as well as the extended Kalman filter, and the performance of the proposed observer is shown to be better than that of the extended Kalman filter-based framework.

8.
In this work we consider the application context of planar passive navigation in which the visual control of locomotion requires only the direction of translation, and not the full set of motion parameters. If the temporally changing optic array is represented as a vector field of optical velocities, the vectors form a radial pattern emanating from a centre point, called the Focus of Expansion (FOE), representing the heading direction. The FOE position is independent of the distances of world surfaces, and does not require assumptions about surface shape and smoothness. We investigate the performance of an artificial neural network for the computation of the image position of the FOE of an Optical Flow (OF) field induced by an observer translation relative to a static environment. The network is characterized by a feed-forward architecture, and is trained by a standard supervised back-propagation algorithm which receives as input the pattern of points where the lines generated by the 2D vectors are projected using the Hough transform. We present results obtained on a test set of synthetic noisy optical flows and on optical flows computed from real image sequences.
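The paper estimates the FOE with a Hough-transform input representation and a feed-forward network trained by back-propagation; that pipeline is not reproduced here. As a simpler illustration of the underlying geometry (every translational flow vector lies on a line through the FOE), the sketch below recovers the FOE as the least-squares intersection of those lines; the synthetic test data are made up for the example.

```python
import numpy as np

def estimate_foe(points, flows):
    """Least-squares FOE from a purely translational flow field.

    points : (N, 2) array of image positions (x, y)
    flows  : (N, 2) array of flow vectors (u, v) at those positions
    Each flow vector constrains the FOE to the line through its point
    along the flow direction: v*(fx - x) - u*(fy - y) = 0.
    """
    x, y = points[:, 0], points[:, 1]
    u, v = flows[:, 0], flows[:, 1]
    A = np.column_stack([v, -u])
    b = v * x - u * y
    foe, *_ = np.linalg.lstsq(A, b, rcond=None)
    return foe

# Synthetic check: noisy radial flow emanating from (40, -10)
rng = np.random.default_rng(0)
pts = rng.uniform(-100, 100, size=(200, 2))
true_foe = np.array([40.0, -10.0])
flw = 0.05 * (pts - true_foe) + 0.1 * rng.standard_normal((200, 2))
print(estimate_foe(pts, flw))   # close to (40, -10)
```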

9.
The estimation of dense velocity fields from image sequences is basically an ill-posed problem, primarily because the data only partially constrain the solution. It is rendered especially difficult by the presence of motion boundaries and occlusion regions which are not taken into account by standard regularization approaches. In this paper, the authors present a multimodal approach to the problem of motion estimation in which the computation of visual motion is based on several complementary constraints. It is shown that multiple constraints can provide more accurate flow estimation in a wide range of circumstances. The theoretical framework relies on Bayesian estimation associated with global statistical models, namely, Markov random fields. The constraints introduced here aim to address the following issues: optical flow estimation while preserving motion boundaries, processing of occlusion regions, and fusion between gradient- and feature-based motion constraint equations. Deterministic relaxation algorithms are used to merge information and to provide a solution to the maximum a posteriori estimation of the unknown dense motion field. The algorithm is well suited to a multiresolution implementation which brings an appreciable speed-up as well as a significant improvement of estimation when large displacements are present in the scene. Experiments on synthetic and real-world image sequences are reported.

10.
Optical Snow     
Classical methods for measuring image motion by computer have concentrated on the cases of optical flow, in which the motion field is continuous, or layered motion, in which the motion field is piecewise continuous. Here we introduce a third natural category which we call optical snow. Optical snow arises in many natural situations such as camera motion in a highly cluttered 3-D scene, or a passive observer watching a snowfall. Optical snow yields dense motion parallax with depth discontinuities occurring near all image points. As such, constraints on smoothness or even smoothness in layers do not apply. In the Fourier domain, optical snow yields a one-parameter family of planes which we call a bowtie. We present a method for measuring the parameters of the direction and range of speeds of the motion for the special case of parallel optical snow. We demonstrate the effectiveness of the method for both synthetic and real image sequences. Supplementary material to this paper is available in electronic form at http://dx.doi.org/10.1023/A:1024440524579

11.
Dynamic analysis of video sequences often relies on the segmentation of the sequence into regions of consistent motions. Approaching this problem requires a definition of which motions are regarded as consistent. Common approaches to motion segmentation usually group together points or image regions that have the same motion between successive frames (where the same motion can be 2D, 3D, or non-rigid). In this paper we define a new type of motion consistency, which is based on temporal consistency of behaviors across multiple frames in the video sequence. Our definition of consistent "temporal behavior" is expressed in terms of multi-frame linear subspace constraints. This definition applies to 2D, 3D, and some non-rigid motions without requiring prior model selection. We further show that our definition of motion consistency extends to data with directional uncertainty, thus leading to a dense segmentation of the entire image. Such segmentation is obtained by applying the new motion consistency constraints directly to covariance-weighted image brightness measurements. This is done without requiring prior correspondence estimation or feature tracking.

12.
A kinematic model-based approach for the estimation of 3-D motion parameters from a sequence of noisy stereo images is discussed. The approach is based on representing the constant acceleration translational motion and constant precession rotational motion in the form of a bilinear state-space model using standard rectilinear states for translation and quaternions for rotation. Closed-form solutions of the state transition equations are obtained to propagate the quaternions. The measurements are noisy perturbations of 3-D feature points represented in an inertial coordinate system. It is assumed that the 3-D feature points are extracted from the stereo images and matched over the frames. Owing to the nonlinearity in the state model, nonlinear filters are designed for the estimation of motion parameters. Simulation results are included. The Cramér-Rao performance bounds for motion parameter estimates are computed. A constructive proof for the uniqueness of motion parameters is given. It is shown that with uniform sampling in time, three noncollinear feature points in five consecutive binocular image pairs contain all the spatial and temporal information. Both nondegenerate and degenerate motions are analyzed. A deterministic algorithm to recover motion parameters from a stereo image sequence is summarized from the constructive proof.
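The closed-form quaternion propagation mentioned above is not spelled out in the abstract. Below is a minimal sketch of the standard discrete-time propagation for a constant body rate ω held over a sample interval Δt (scalar-first quaternion convention and body-frame rate assumed); the paper's constant-precession model, in which ω itself precesses, is not reproduced.

```python
import numpy as np

def quat_mult(q, r):
    """Hamilton product, scalar-first convention."""
    w0, x0, y0, z0 = q
    w1, x1, y1, z1 = r
    return np.array([
        w0*w1 - x0*x1 - y0*y1 - z0*z1,
        w0*x1 + x0*w1 + y0*z1 - z0*y1,
        w0*y1 - x0*z1 + y0*w1 + z0*x1,
        w0*z1 + x0*y1 - y0*x1 + z0*w1,
    ])

def propagate(q, omega, dt):
    """Closed-form attitude update for a constant angular rate omega (rad/s)
    held over one sample interval dt."""
    angle = np.linalg.norm(omega) * dt
    if angle < 1e-12:
        return q
    axis = omega / np.linalg.norm(omega)
    dq = np.concatenate([[np.cos(angle / 2.0)], np.sin(angle / 2.0) * axis])
    qn = quat_mult(q, dq)                 # body-frame rate convention assumed
    return qn / np.linalg.norm(qn)

q = np.array([1.0, 0.0, 0.0, 0.0])
for _ in range(100):
    q = propagate(q, omega=np.array([0.0, 0.0, 0.1]), dt=0.01)
print(q)                                  # approx. a 0.1 rad rotation about z
```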

13.
In this paper we propose a new model, Frenet-Serret motion, for the motion of an observer in a stationary environment. This model relates the motion parameters of the observer to the curvature and torsion of the path along which the observer moves. Screw-motion equations for Frenet-Serret motion are derived and employed for geometrical analysis of the motion. Normal flow is used to derive constraints on the rotational and translational velocity of the observer and to compute egomotion by intersecting these constraints in the manner proposed in (Duri and Aloimonos 1991). The accuracy of egomotion estimation is analyzed for different combinations of observer motion and feature distance. We explain the advantages of controlling feature distance to analyze egomotion and derive the constraints on depth which make either rotation or translation dominant in the perceived normal flow field. The results of experiments on real image sequences are presented. The support of the Air Force Office of Scientific Research under Grant F49620-93-1-0039 is gratefully acknowledged.
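For reference, the Frenet-Serret formulas underlying the proposed motion model relate the path's unit tangent T, normal N, and binormal B to its curvature κ and torsion τ (standard differential-geometry identities, not equations taken from the paper):

```latex
\frac{dT}{ds} = \kappa N, \qquad
\frac{dN}{ds} = -\kappa T + \tau B, \qquad
\frac{dB}{ds} = -\tau N .
```

An observer translating along the path and rotating with this frame therefore has translational and rotational velocities determined by its speed together with κ and τ, which is the kind of coupling that screw-motion equations for such a model can exploit.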

14.
This paper focuses on tracking, reconstruction and motion estimation of a well-defined MEMS optical switch from a microscopic view. For out-of-view reconstruction, a homography capable of transforming feature points and feature lines between a microscopic image and a CAD model of the switch is implemented. The homography between two sequential microscopic images is decomposed and factorized for motion estimation. Optical flow has also been explored to provide rough estimates of the rotation centre and angle. The paper also illustrates motion parameter optimization principles to deal with the uncertainty inherent in the micro world. After non-linear optimization, the estimation accuracy for rotation angle and rotation centre can reach 0.06° and pixel level, respectively.
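The homography estimation and decomposition step described above can be illustrated with a generic OpenCV sketch (not the paper's implementation); the camera intrinsics, the synthetic point sets, and the synthetic ground-truth homography are placeholders.

```python
import cv2
import numpy as np

# Matched feature points between two sequential microscopic images
# (synthetic placeholders; in practice these come from feature matching)
pts1 = (np.random.rand(20, 2).astype(np.float32) * 640)
H_true = np.array([[np.cos(0.05), -np.sin(0.05), 3.0],
                   [np.sin(0.05),  np.cos(0.05), -2.0],
                   [0.0, 0.0, 1.0]])
pts2 = cv2.perspectiveTransform(pts1.reshape(-1, 1, 2), H_true).reshape(-1, 2)

# Robustly estimate the inter-frame homography
H, mask = cv2.findHomography(pts1, pts2, cv2.RANSAC, 3.0)

# Decompose into candidate rotations/translations given camera intrinsics
K = np.array([[800.0, 0.0, 320.0],
              [0.0, 800.0, 240.0],
              [0.0, 0.0, 1.0]])                   # placeholder intrinsics
retval, rotations, translations, normals = cv2.decomposeHomographyMat(H, K)
print(retval, "candidate decompositions")
```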

15.
This paper presents a novel depth estimation method based on feature points. Two points are selected arbitrarily from an object, and their distance in space is assumed to be known. The proposed technique can estimate their depths simultaneously from two images taken before and after the camera moves, while the motion parameters of the camera may be unknown. In addition, this paper analyzes ways to enhance the precision of the estimated depths and presents a feature-point image-coordinate search algorithm to increase the robustness of the proposed method. The search algorithm can automatically find more accurate image coordinates of the feature points based on their detected image coordinates. Experimental results demonstrate the efficiency of the presented method.

16.
The problem considered involves the use of a sequence of noisy monocular images of a three-dimensional moving object to estimate both its structure and kinematics. The object is assumed to be rigid, and its motion is assumed to be smooth. A set of object match points is assumed to be available, consisting of fixed features on the object, the image plane coordinates of which have been extracted from successive images in the sequence. Structure is defined as the 3-D positions of these object feature points, relative to each other. Rotational motion occurs about the origin of an object-centered coordinate system, while translational motion is that of the origin of this coordinate system. In this work, which is a continuation of the research done by the authors and reported previously (ibid., vol.PAMI-8, p.90-9, Jan. 1986), results of an experiment with real imagery are presented, involving estimation of 28 unknown translational, rotational, and structural parameters, based on 12 images with seven feature points.

17.
Video captured from UAVs, moving vehicles, and similar platforms suffers from jitter caused by external disturbances. After comparing existing electronic image stabilization techniques, an improved algorithm is proposed that uses FAST to obtain feature point locations, applies optical flow combined with NCC matching to locate the reference frame's feature points in the current frame, and then uses RANSAC to reject mismatched feature point pairs. To improve the accuracy of motion vector estimation, weighted least squares is applied to obtain the rigid transformation matrix between adjacent frames, and Kalman filtering is used for motion smoothing to obtain the scanning motion vector, which is then compensated, finally producing a stabilized video in real time. Experiments show that the interframe transformation fidelity of the stabilized video sequence is improved and that real-time processing speed is achieved.
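A condensed OpenCV sketch of this kind of pipeline is given below as an illustration only: FAST corners, pyramidal LK tracking, RANSAC-based rigid (similarity) transform estimation, and Kalman-style smoothing of the accumulated trajectory. The NCC matching stage and the weighted least-squares refinement of the original algorithm are not reproduced, and all parameter values are assumptions.

```python
import cv2
import numpy as np

def interframe_transform(prev_gray, curr_gray):
    """Rigid inter-frame motion (dx, dy, dtheta) from FAST corners tracked by LK,
    with RANSAC rejecting mismatched point pairs."""
    fast = cv2.FastFeatureDetector_create(threshold=25)
    kps = fast.detect(prev_gray, None)
    pts = np.float32([kp.pt for kp in kps]).reshape(-1, 1, 2)
    nxt, status, _ = cv2.calcOpticalFlowPyrLK(prev_gray, curr_gray, pts, None)
    good = status.ravel() == 1
    M, _ = cv2.estimateAffinePartial2D(pts[good], nxt[good], method=cv2.RANSAC)
    dx, dy = M[0, 2], M[1, 2]
    dtheta = np.arctan2(M[1, 0], M[0, 0])
    return np.array([dx, dy, dtheta])

def smooth_trajectory(deltas, q=1e-3, r=1e-1):
    """Scalar Kalman smoothing of each cumulative motion component
    (a stand-in for the paper's motion-smoothing stage)."""
    traj = np.cumsum(deltas, axis=0)
    smoothed = np.zeros_like(traj)
    for j in range(traj.shape[1]):
        x, p = traj[0, j], 1.0
        for k in range(traj.shape[0]):
            p += q                               # predict
            kgain = p / (p + r)                  # update with measurement traj[k, j]
            x += kgain * (traj[k, j] - x)
            p *= (1.0 - kgain)
            smoothed[k, j] = x
    return smoothed - traj                       # per-frame correction to apply
```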

18.
Capturing and Extracting Human Motion from Video Clips (total citations: 1; self-citations: 0; citations by others: 1)
This paper analyzes in detail the experimental process of using an adaptive least-squares method to capture and extract human motion from video. Based on the experiments, techniques such as linear prediction, iterative adjustment, and recovery of lost feature points are summarized, and the adaptive least-squares method itself is improved. The experimental results have been applied in a project at Nanyang Technological University, Singapore, that reconstructs individual motion in video as animation, and the approach has broad prospects in other image processing applications.
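The abstract does not spell out the adaptive least-squares formulation. As a generic point of reference only, here is a recursive least-squares update with a forgetting factor, which is one common "adaptive least squares" form; the paper's specific variant, its linear prediction, and its feature-recovery steps are not reproduced, and the initialisation values are assumptions.

```python
import numpy as np

class RecursiveLeastSquares:
    """Generic recursive least squares with a forgetting factor."""
    def __init__(self, n, lam=0.98):
        self.theta = np.zeros(n)          # parameter estimate
        self.P = 1e3 * np.eye(n)          # estimate covariance (large initial value)
        self.lam = lam                    # forgetting factor in (0, 1]

    def update(self, phi, y):
        """One update with regressor phi (length n) and scalar measurement y."""
        phi = np.asarray(phi, dtype=float)
        Pphi = self.P @ phi
        gain = Pphi / (self.lam + phi @ Pphi)
        self.theta = self.theta + gain * (y - phi @ self.theta)
        self.P = (self.P - np.outer(gain, Pphi)) / self.lam
        return self.theta
```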

19.
The problem of estimating motion and structure from a sequence of images has been a major research theme in machine vision for many years and remains one of the most challenging ones. In this work, we use sliding mode observers to estimate the motion and the structure of a moving body with the aid of a charge-coupled device (CCD) camera. We consider a variety of dynamical systems which arise in machine vision applications and develop a novel identification procedure for the estimation of both constant and time-varying parameters. The basic procedure introduced for parameter estimation is to recast the image feature dynamics linearly in terms of the unknown parameters, construct a sliding mode observer to produce asymptotically correct estimates of the observed image features, and then use the observer input to compute the parameters. Much of our analysis has been substantiated by computer simulations and real experiments.
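Schematically (a generic restatement, not the paper's exact equations), the identification procedure described above can be read as follows: write the measured image-feature dynamics linearly in the unknown parameter vector θ, drive the estimation error to a sliding surface with a switching observer, and recover the parameters from the equivalent (low-pass-filtered) value u_eq of the switching injection:

```latex
\dot{y} = W(y,t)\,\theta, \qquad
\dot{\hat{y}} = K\,\operatorname{sgn}(y-\hat{y}), \qquad
\hat{\theta} = \Big(\int_{t_0}^{t_1} W^{\top}W\,dt\Big)^{-1}\int_{t_0}^{t_1} W^{\top}u_{\mathrm{eq}}\,dt .
```

On the sliding surface \(\hat{y} = y\) the injection satisfies \(u_{\mathrm{eq}} \approx W(y,t)\,\theta\), so the batch expression above (or a recursive equivalent) recovers constant parameters, while time-varying parameters call for a windowed or recursive version of the same idea.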

20.
Combining the two classes of processing methods used in visual motion analysis, corner points in the image are selected as feature points, and the corners are detected and tracked through the image sequence. The displacements of the detected corners across the sequence are recorded, and it is proved theoretically that the optical flow field of a time-varying image can be approximated by the corner displacement field; two preconditions for this substitution are also given. The proposed algorithm is validated on real image sequences, and the experimental results show that it performs well.
