Similar documents
20 similar documents found (search time: 46 ms)
1.
Observer design for range and orientation identification (cited: 1; self-citations: 0; others: 1)
A reduced-order globally convergent observer to estimate the depth of an object projected on the image plane of a camera is presented, assuming that the object is planar or has a planar surface and the orientation of the plane is known. A locally convergent observer can be obtained when the plane unit normal is unknown, and the latter is estimated together with the depth of the object. The observer exploits the image moments of the object as measured features. The estimation is achieved by rendering attractive and invariant a manifold in the extended state space of the system and the observer. The problem is reduced to the solution of a system of partial differential equations. Solving these partial differential equations can be a difficult task; hence, it is shown that this issue can be resolved by adding an output filter and a dynamic scaling parameter to the observer.

2.
In this note, a new observer is developed to determine range information (and, hence, the three-dimensional (3-D) coordinates) of an object feature moving with affine motion dynamics (or the more general Riccati motion dynamics) with known motion parameters. The unmeasurable range information is determined from a single camera provided an observability condition, which has physical significance, is satisfied. To develop the observer, the perspective system is expressed in terms of the nonlinear feature dynamics. The structure of the proposed observer is inspired by recent disturbance observer results. The proposed technique facilitates a Lyapunov-based analysis that is less complex than the sliding-mode-based analysis derived for recent observer designs. The analysis demonstrates that the 3-D task-space coordinates of the feature point can be asymptotically identified. Simulation results are provided that illustrate the performance of the observer in the presence of noise.

3.
A classical problem in machine vision is the range identification of an object moving in three-dimensional space from the two-dimensional image sequence obtained with a monocular camera. This study presents a novel reduced-order optical-flow-based nonlinear observer suitable for depth estimation in both well-structured and unstructured environments. A globally exponentially stable observer is synthesized, with optical flow estimates derived by tracking feature trajectories on the image plane over successive camera frames, to yield asymptotic estimates of feature depth at a desired convergence rate. Furthermore, the observer is shown to be finite-gain \(\mathcal{L}_p\) stable for all \(p \in [1, \infty]\) in the presence of exogenous disturbances influencing camera motion, and is applicable to a wider class of perspective systems than those considered by alternative designs. The observer requires minimal a priori system information for convergence, and the convergence condition arises in a natural manner with an intuitive interpretation. Numerical and experimental studies validate and demonstrate robust observer performance in the presence of significant measurement noise.
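As a rough illustration of how such optical-flow-based depth observers operate (this is a generic textbook-style construction, not the observer proposed in the paper; the camera velocities, gain `k`, and initial conditions below are made-up values), consider a single feature viewed by a translating camera:

```python
# Minimal inverse-depth observer sketch for one feature under known
# camera translation (vx, vz) and no rotation.  The normalized image
# coordinate x = X/Z evolves as  x_dot = (x*vz - vx) * lam,  where
# lam = 1/Z is the unknown inverse depth and  lam_dot = vz * lam**2.
vx, vz = 0.3, 0.1          # known camera velocity (hypothetical values)
dt, steps = 1e-3, 10_000   # 10 s of simulated motion
k = 20.0                   # observer gain (tuning parameter)

X, Z = 1.0, 5.0            # true feature position in the camera frame
lam_hat = 1.0              # initial inverse-depth estimate (truth is 0.2)

for _ in range(steps):
    x = X / Z
    u = x * vz - vx                    # measurable flow coefficient
    x_dot = u / Z                      # measured optical flow (noise-free here)
    # gradient-type update driven by the flow innovation
    lam_hat += dt * (vz * lam_hat**2 + k * u * (x_dot - u * lam_hat))
    X -= vx * dt                       # true feature dynamics
    Z -= vz * dt

error = abs(lam_hat - 1.0 / Z)         # estimation error after 10 s
```

Convergence here relies on the flow coefficient `u` staying away from zero, the same kind of excitation condition the abstracts above refer to as an observability condition.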

4.
In order to calibrate cameras accurately, lens distortion models have to be included in the calibration procedure. Usually, the lens distortion models used in camera calibration depend on radial functions of image pixel coordinates. Such models are well known, simple, and can be estimated using image information alone. However, these models do not take into account an important physical constraint of the lens distortion phenomenon, namely that the amount of lens distortion induced at an image point depends on the depth of the scene point with respect to the camera projection plane. In this paper, we propose a new, accurate, depth-dependent lens distortion model. To validate this approach, we apply the new lens distortion model to camera calibration in planar-view scenarios (that is, 3-D scenarios where the objects of interest lie on a plane). We present promising experimental results on planar pattern images and on sport event scenarios. Although we emphasize the feasibility of the method for planar-view scenarios, the proposed model is valid in general and can be used in any scenario where the point depth can be estimated.
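For reference, the standard depth-independent radial model that this work extends can be sketched as follows (a minimal illustration with arbitrary coefficient values; the paper's actual depth-dependent formulation is not reproduced here):

```python
def radial_distort(x, y, k1, k2):
    """Standard radial lens distortion in normalized image coordinates.

    This is the depth-independent model conventional calibration uses;
    the paper's contribution is to let the distortion also depend on
    the depth of the scene point.
    """
    r2 = x * x + y * y
    scale = 1.0 + k1 * r2 + k2 * r2 * r2
    return x * scale, y * scale

# With k1 < 0 (barrel distortion), points are pulled toward the
# principal point, and more strongly so at larger radial distance.
xd1, _ = radial_distort(0.1, 0.0, k1=-0.2, k2=0.05)
xd2, _ = radial_distort(0.5, 0.0, k1=-0.2, k2=0.05)
```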

5.
The problem of estimating motion and structure from a sequence of images has been a major research theme in machine vision for many years and remains one of the most challenging ones. In this work, we use sliding-mode observers to estimate the motion and structure of a moving body with the aid of a charge-coupled device (CCD) camera. We consider a variety of dynamical systems which arise in machine vision applications and develop a novel identification procedure for the estimation of both constant and time-varying parameters. The basic procedure introduced for parameter estimation is to recast the image feature dynamics linearly in terms of the unknown parameters, construct a sliding-mode observer to produce asymptotically correct estimates of the observed image features, and then use the observer input to compute the parameters. Much of our analysis has been substantiated by computer simulations and real experiments.

6.
The problem of identifying motion and shape parameters of a planar object undergoing a Riccati motion, from the associated optical flow generated on the image plane of a single CCD camera, is studied. The optical flow is generated by projecting feature points on the object onto the image plane via perspective and orthographic projections. An important result we show is that, under perspective projection, the parameters of a specific Riccati dynamics that extends the well-known "rigid motion" can be identified up to choice of a sign. The paper also discusses other Riccati equations obtained from quadratic extensions of rigid motion and affine motion. For each of the motion models considered and for each of the two projection models, we show that whatever motion and shape parameters can be recovered from the optical flow can in fact be recovered from its linear approximation. We also extend our analysis to a pair of cameras.

7.
Uncalibrated obstacle detection using normal flow (cited: 2; self-citations: 0; others: 2)
This paper addresses the problem of obstacle detection for mobile robots. The visual information provided by a single on-board camera is used as input. We assume that the robot is moving on a planar pavement, and any point lying outside this plane is treated as an obstacle. We address the problem of obstacle detection by exploiting the geometric arrangement between the robot, the camera, and the scene. During an initialization stage, we estimate an inverse perspective transformation that maps the image plane onto the horizontal plane. During normal operation, the normal flow is computed and inversely projected onto the horizontal plane. This simplifies the resultant flow pattern, and fast tests can be used to detect obstacles. A salient feature of our method is that only the normal flow information, i.e., first-order spatio-temporal image derivatives, is used, and thus we cope with the aperture problem. Another important point is that, in contrast with other methods, the vehicle motion and the intrinsic and extrinsic parameters of the camera need not be known or calibrated. Both translational and rotational motion can be dealt with. We present motion estimation results on synthetic and real image data. A real-time version, implemented on a mobile robot, is described.
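The inverse perspective mapping step described above can be sketched with a plane-induced homography (the matrix `H_ipm` below is a made-up example, not one estimated from the paper's setup):

```python
import numpy as np

def apply_homography(H, pts):
    """Map 2-D points through a 3x3 homography (homogeneous normalization)."""
    pts_h = np.hstack([pts, np.ones((len(pts), 1))])
    out = pts_h @ H.T
    return out[:, :2] / out[:, 2:3]

# Hypothetical inverse perspective transform estimated at initialization:
# it maps image-plane coordinates onto the horizontal (ground) plane.
H_ipm = np.array([[1.2, 0.1,  -5.0],
                  [0.0, 2.0, -40.0],
                  [0.0, 0.004,  1.0]])

img_pts = np.array([[320.0, 240.0], [100.0, 400.0]])
ground = apply_homography(H_ipm, img_pts)                # project to plane
back = apply_homography(np.linalg.inv(H_ipm), ground)    # exact round trip
```

Points that truly lie on the pavement are consistent under this round trip; flow vectors that violate the simplified ground-plane pattern flag obstacles.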

8.
This paper presents a novel solution to the problem of depth estimation using a monocular camera undergoing known motion. Such problems arise in machine vision where the position of an object moving in three-dimensional space has to be identified by tracking motion of its projected feature on the two-dimensional image plane. The camera is assumed to be uncalibrated, and an adaptive observer yielding asymptotic estimates of focal length and feature depth is developed that precludes prior knowledge of scene geometry and is simpler than alternative designs. Experimental results using real camera imagery are obtained with the current scheme as well as the extended Kalman filter, and performance of the proposed observer is shown to be better than the extended Kalman filter-based framework.

9.
In this paper, we describe a real-time algorithm for computing the ego-motion of a vehicle relative to the road. The algorithm uses as input only those images provided by a single omnidirectional camera mounted on the roof of the vehicle. The front ends of the system are two different trackers. The first one is a homography-based tracker that detects and matches robust scale-invariant features that most likely belong to the ground plane. The second one uses an appearance-based approach and gives high-resolution estimates of the rotation of the vehicle. This planar pose estimation method has been successfully applied to videos from an automotive platform. We give an example of camera trajectory estimated purely from omnidirectional images over a distance of 400 m. For performance evaluation, the estimated path is superimposed onto a satellite image. In the end, we use image mosaicing to obtain a textured 2-D reconstruction of the estimated path.

10.
Monitoring of large sites requires coordination between multiple cameras, which in turn requires methods for relating events between distributed cameras. This paper tackles the problem of automatic external calibration of multiple cameras in an extended scene, that is, full recovery of their 3D relative positions and orientations. Because the cameras are placed far apart, brightness or proximity constraints cannot be used to match static features, so we instead apply planar geometric constraints to moving objects tracked throughout the scene. By robustly matching and fitting tracked objects to a planar model, we align the scene's ground plane across multiple views and decompose the planar alignment matrix to recover the 3D relative camera and ground plane positions. We demonstrate this technique both in a controlled lab setting, where we test the effects of errors in the intrinsic camera parameters, and in an uncontrolled, outdoor setting. In the latter, we do not assume synchronized cameras, and we show that enforcing geometric constraints enables us to align the tracking data in time. In spite of noise in the intrinsic camera parameters and in the image data, the system successfully transforms multiple views of the scene's ground plane to an overhead view and recovers the relative 3D camera and ground plane positions.

11.
A Theory of Specular Surface Geometry (cited: 1; self-citations: 1; others: 0)
A theoretical framework is introduced for the perception of specular surface geometry. When an observer moves in three-dimensional space, real scene features such as surface markings remain stationary with respect to the surfaces they belong to. In contrast, a virtual feature, which is the specular reflection of a real feature, travels on the surface. Based on the notion of caustics, a feature classification algorithm is developed that distinguishes real and virtual features from their image trajectories that result from observer motion. Next, using support functions of curves, a closed-form relation is derived between the image trajectory of a virtual feature and the geometry of the specular surface it travels on. It is shown that, in the 2D case, where camera motion and the surface profile are coplanar, the profile is uniquely recovered by tracking just two unknown virtual features. Finally, these results are generalized to the case of arbitrary 3D surface profiles that are traveled by virtual features when camera motion is not confined to a plane. This generalization includes a number of mathematical results that substantially enhance the present understanding of specular surface geometry. An algorithm is developed that uniquely recovers 3D surface profiles using a single virtual feature tracked from the occluding boundary of the object. All theoretical derivations and proposed algorithms are substantiated by experiments.

12.
B. Zhang, Y. H. Wu, Pattern Recognition, 2007, 40(4): 1368–1377
Self-recalibration of the relative pose in a vision system plays a very important role in many applications, and much research has been conducted on this issue over the years. However, most existing methods require information about points in general three-dimensional positions for the calibration, a requirement that is hard to satisfy in many practical applications. In this paper, we present a new method for the self-recalibration of a structured-light system from a single image in the presence of a planar surface in the scene. Assuming that the intrinsic parameters of the camera and the projector are known from an initial calibration, we show that their relative position and orientation can be determined automatically from four projection correspondences between an image and a projection plane. In this method, analytical solutions are obtained from second-order equations in a single variable, and the optimization process is very fast. Another advantage is the enhanced robustness in implementation via the use of overconstrained systems. Computer simulations and real-data experiments are carried out to validate our method.

13.
In this paper, we address the problem of ego-motion estimation by fusing visual and inertial information. The hardware consists of an inertial measurement unit (IMU) and a monocular camera. The camera provides visual observations in the form of features on a horizontal plane. By incorporating the geometric constraint that the features lie on this plane into the visual and inertial data, we propose a novel closed-form measurement model for this system. Our first contribution is an observability analysis of the proposed planar-based visual inertial navigation system (VINS). In particular, we prove that the system has only three unobservable states, corresponding to global translations parallel to the plane and rotation around the gravity vector. Hence, compared to general VINS, an advantage of using features on the horizontal plane is that the vertical translation along the normal of the plane becomes observable. As the second contribution, we present a state-space formulation for pose estimation in the analyzed system and solve it via a modified unscented Kalman filter (UKF). Finally, the findings of the theoretical analysis and the 6-DoF motion estimation are validated by simulations as well as experimental data.

14.
Image-based visual servoing can effectively control the motion of a robot manipulator. However, as many researchers have pointed out, when the initial and desired poses are far apart, this control strategy suffers from convergence and stability problems owing to its local nature. By defining adequate image feature trajectories in the image plane and tracking them, one can exploit the local convergence and stability properties inherent to image-based visual servoing while avoiding the problems that arise when the initial and desired poses are far apart. Path planning in image space has therefore become an active research topic in robotics in recent years. However, nearly all existing results target eye-in-hand vision systems. This paper proposes an uncalibrated visual path-planning algorithm for scene-camera (eye-to-hand) vision systems. The algorithm computes the image feature trajectories directly in projective space, which guarantees their consistency with rigid-body motion. By decomposing the projective representations of the rotational and translational motions into canonical forms, their paths in projective space can be easily interpolated; the image feature trajectories in the image plane are then generated from these projective paths. In this way, the algorithm requires no knowledge of the feature-point structure or of the camera intrinsic parameters. Simulation studies on a PUMA560 manipulator are presented to verify the feasibility of the proposed algorithm and the system performance.

15.
Multi-frame estimation of planar motion (cited: 4; self-citations: 0; others: 4)
Traditional plane alignment techniques are typically performed between pairs of frames. We present a method for extending existing two-frame planar motion estimation techniques into a simultaneous multi-frame estimation, by exploiting multi-frame subspace constraints of planar surfaces. The paper has three main contributions: 1) we show that when the camera calibration does not change, the collection of all parametric image motions of a planar surface in the scene across multiple frames is embedded in a low dimensional linear subspace; 2) we show that the relative image motion of multiple planar surfaces across multiple frames is embedded in a yet lower dimensional linear subspace, even with varying camera calibration; and 3) we show how these multi-frame constraints can be incorporated into simultaneous multi-frame estimation of planar motion, without explicitly recovering any 3D information, or camera calibration. The resulting multi-frame estimation process is more constrained than the individual two-frame estimations, leading to more accurate alignment, even when applied to small image regions.
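The first contribution, that planar image motions across frames lie in a low-dimensional linear subspace, can be illustrated in its simplest special case: pure camera translation with fixed calibration, where the plane-induced homographies are H_k = I + t_k nᵀ/d and the motion parameters are linear in t_k (the normal, distance, and translations below are made-up values, and this is only a simplified illustration of the general claim):

```python
import numpy as np

rng = np.random.default_rng(0)
n = np.array([0.0, 0.0, 1.0])   # fixed plane normal (camera frame)
d = 2.0                         # fixed plane distance

# Plane-induced homography for pure translation t_k:  H_k = I + t_k n^T / d.
# H_k - I is linear in t_k, so across any number of frames the flattened
# motion parameters span at most a 3-dimensional subspace of R^9.
motions = []
for _ in range(10):
    t = rng.normal(size=3)                    # random camera translation
    H = np.eye(3) + np.outer(t, n) / d
    motions.append((H - np.eye(3)).ravel())

M = np.vstack(motions)          # 10 frames x 9 motion parameters
rank = np.linalg.matrix_rank(M) # low rank despite 10 frames
```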

16.
Exploiting the multiple planar features typical of indoor environments, we propose a real-time indoor RGB-D simultaneous localization and mapping (SLAM) system that uses planar features to optimize camera poses and mapping. The front end jointly estimates the camera pose with the iterative closest point (ICP) algorithm and a direct method; the back end extracts planar features from camera keyframes, builds several kinds of plane-based constraints, optimizes the keyframe poses and plane parameters, and incrementally constructs a planar structural model of the environment. Experiments on several public data sequences show that, in plane-rich environments, the planar constraints reduce the accumulated error of pose estimation, and that the system can build a planar model of the scene while consuming little storage. Experiments in real environments confirm the system's feasibility and practical value for indoor augmented reality.

17.
This paper presents a new adaptive controller for image-based dynamic control of a robot manipulator using a fixed camera whose intrinsic and extrinsic parameters are not known. To map the visual signals onto the joints of the robot manipulator, this paper proposes a depth-independent interaction matrix, which differs from the traditional interaction matrix in that it does not depend on the depths of the feature points. Using the depth-independent interaction matrix makes the unknown camera parameters appear linearly in the closed-loop dynamics, so that a new algorithm is developed to estimate their values on-line. This adaptive algorithm combines the Slotine-Li method with on-line minimization of the errors between the real and estimated projections of the feature points on the image plane. Based on the nonlinear robot dynamics, we prove asymptotic convergence of the image errors to zero by Lyapunov theory. Experiments have been conducted to verify the performance of the proposed controller. The results demonstrated good convergence of the image errors.
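The abstract does not spell out the depth-independent interaction matrix itself, but the classical depth-dependent interaction matrix it replaces is standard in visual servoing and can be sketched as follows (a minimal check with arbitrary feature coordinates; note the 1/Z entries, which are the depth dependence the paper's formulation removes):

```python
import numpy as np

def interaction_matrix(x, y, Z):
    """Classical image Jacobian L for a point feature in normalized
    coordinates:  s_dot = L @ [vx, vy, vz, wx, wy, wz]."""
    return np.array([
        [-1.0 / Z, 0.0, x / Z, x * y, -(1.0 + x * x), y],
        [0.0, -1.0 / Z, y / Z, 1.0 + y * y, -x * y, -x],
    ])

x, y, Z = 0.2, -0.1, 3.0
L = interaction_matrix(x, y, Z)

# Pure translation along the optical axis: the image velocity is
# proportional to the feature coordinates and to 1/Z.
v = np.array([0.0, 0.0, 0.5, 0.0, 0.0, 0.0])
s_dot = L @ v
```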

18.
In the field of augmented reality (AR), many kinds of vision-based extrinsic camera parameter estimation methods have been proposed to achieve geometric registration between the real and virtual worlds. Previously, a feature-landmark-based camera parameter estimation method was proposed. This is an effective method for implementing outdoor AR applications because a feature landmark database can be constructed automatically using the structure-from-motion (SfM) technique. However, the previous method cannot work in real time because of the high computational cost of matching landmarks in the database against image features in an input image. In addition, the accuracy of the estimated camera parameters is insufficient for applications that need to overlay CG objects at positions close to the user's viewpoint, because it is difficult to compensate for the visual pattern change of close landmarks when only the sparse depth information obtained by SfM is available. In this paper, we achieve fast and accurate feature-landmark-based camera parameter estimation through the following approaches. First, the number of matching candidates is reduced, to achieve fast camera parameter estimation, by tentative camera parameter estimation and by assigning priorities to landmarks. Second, image templates of landmarks are adequately compensated for by considering the local 3-D structure of a landmark using the dense depth information obtained by a laser range sensor. To demonstrate the effectiveness of the proposed method, we developed several AR applications using it.

19.
A simple and fast new camera calibration method (cited: 2; self-citations: 0; others: 2)
This paper proposes a new camera self-calibration method that requires the camera to capture, from three (or more) different viewpoints, images of a novel calibration template consisting of a circle and its inscribed equilateral triangle. First, the image coordinates of the circular points are derived from the template images; the camera intrinsic parameters can then be solved linearly from these circular-point images. Unlike traditional methods, this approach avoids complex ellipse and line fitting, reduces computational complexity, and improves calibration speed and...

20.
Objective: We propose an effective algorithm for locating the matching scale and region of an image. By matching feature points in the current screen image against those in the corresponding scale and partial region of a template image, the camera can track the template in real time, addressing the matching accuracy and efficiency problems of 3-D tracking algorithms. Method: In a preprocessing stage, the algorithm builds a multi-scale representation of the template image and partitions the image at each scale into regions; within each region, feature points are extracted and descriptors generated with the ORB (oriented FAST and rotated BRIEF) method, yielding a hierarchical, region-based organization of the template's feature points. During real-time tracking, for the current camera image, the corresponding scale range is located first; within that range, the template regions with large overlap with the current image are identified; the current image's feature points are then matched against the feature set of the corresponding scale and regions of the template, and finally the camera pose is computed from the matched point pairs. Results: Experiments with template images of different resolutions from a public image database (Stanford mobile visual search dataset), together with additional images, show that the algorithm performs stably: the registration error is about one pixel, and the overall frame rate is steady at 20-30 frames/s. Conclusion: Compared with several classical algorithms, the new method locates the matching scale and region more reliably; this local feature-point matching approach clearly improves registration accuracy and computational efficiency over existing methods, performs even better when the template image has high resolution, and is particularly well suited to mobile augmented reality applications.
