Similar documents
20 similar documents found (search took 31 ms)
1.
Camera calibration is a fundamental process in both photogrammetry and computer vision. Since the arrival of the direct linear transformation method and its later revisions, new methods have been developed by several authors, such as Tsai, Heikkilä, and Zhang. Most of these are based on the pinhole model, including distortion correction. Some of them, such as Tsai's method, allow two different techniques for determining the calibration parameters: a non-coplanar technique using three-dimensional (3D) calibration objects, and a coplanar technique using two-dimensional (2D) calibration objects. Calibration by observing a 3D calibration object is accurate and efficient; however, the calibration object must itself be sufficiently accurate and requires an elaborate configuration. In contrast, the use of 2D calibration objects yields less accurate results but is much more flexible and does not require complex calibration objects that are costly to produce. This article compares these two calibration procedures from the perspective of stereo measurement. Particular attention is given to the accuracy of the calculated camera parameters, the reconstruction error in computer image coordinates and in the world coordinate system, and advanced image-processing techniques for subpixel detection. The purpose of this work is to establish a basis and selection criteria for choosing between these calibration techniques, according to the accuracy required by each of the many applications using photogrammetric vision: robot calibration methods, trajectory generation algorithms, articulated measuring-arm calibration, and photogrammetric systems.
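As a point of reference for the comparison, the pinhole model with first-order radial distortion that underlies the Tsai and Zhang families of methods can be sketched in a few lines; the function name `project_pinhole` and its parameters are illustrative, not taken from any cited implementation.

```python
import numpy as np

def project_pinhole(X_world, R, t, f, c, k1=0.0):
    """Project Nx3 world points through a pinhole camera with
    first-order radial distortion (illustrative parameter names)."""
    X_cam = (R @ X_world.T).T + t            # world -> camera frame
    x = X_cam[:, :2] / X_cam[:, 2:3]         # perspective division
    r2 = np.sum(x**2, axis=1, keepdims=True)
    x_d = x * (1.0 + k1 * r2)                # radial distortion
    return f * x_d + c                       # to pixel coordinates

# a point half a unit right, a quarter unit down, five units ahead
uv = project_pinhole(np.array([[0.5, -0.25, 5.0]]),
                     np.eye(3), np.zeros(3),
                     f=800.0, c=np.array([320.0, 240.0]))
```

Both the 3D (non-coplanar) and 2D (coplanar) techniques compared above estimate R, t, f, c, and the distortion coefficients of this kind of model; they differ only in the calibration object observed.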

2.
For a binocular head to perform optimal 3D tracking, it should be able to verge its cameras actively while maintaining geometric calibration. In this work we introduce a calibration update procedure that allows a robotic head to simultaneously fixate, track, and reconstruct a moving object in real time. The update method is based on a mapping from motor-based to image-based estimates of the camera orientations, estimated in an offline stage. Following this, a fast online procedure updates the calibration of the active binocular camera pair. The proposed approach is well suited to active vision applications because no image processing is needed at runtime to calibrate the system or to maintain the calibration parameters during camera vergence. We show that this homography-based technique allows an active binocular robot to fixate and track an object while concurrently performing 3D reconstruction in real time.
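A minimal sketch of the geometric fact behind such homography-based updates: a pure rotation R of a camera with intrinsic matrix K (the values below are assumed for illustration) moves image points by the homography H = K R K⁻¹, so a known vergence rotation can be compensated without any runtime image processing.

```python
import numpy as np

K = np.array([[800.0, 0.0, 320.0],
              [0.0, 800.0, 240.0],
              [0.0, 0.0, 1.0]])

def rotation_homography(K, R):
    # a pure camera rotation acts on image points as H = K R K^-1
    return K @ R @ np.linalg.inv(K)

theta = np.deg2rad(2.0)  # a small vergence rotation about the y-axis
R = np.array([[np.cos(theta), 0.0, np.sin(theta)],
              [0.0, 1.0, 0.0],
              [-np.sin(theta), 0.0, np.cos(theta)]])
H = rotation_homography(K, R)

p = np.array([320.0, 240.0, 1.0])  # the principal point
p2 = H @ p
p2 = p2 / p2[2]                    # back to inhomogeneous pixels
```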

3.
In binocular stereo vision, 3D measurement or precise localization of an object requires calibrating the cameras to obtain their intrinsic and extrinsic parameters. This paper studies a radial-distortion camera model and constructs linear formulas, based on the first-order radial alignment constraint (RAC) algorithm, for solving the intrinsic and extrinsic parameters of a binocular camera pair. By accounting for the roll, rotation, and pitch angles as well as the dominant lens distortions, it remedies two shortcomings of the traditional RAC calibration method: considering only radial distortion, and requiring prior values for some parameters. Multi-pose binocular 3D reconstruction experiments were performed with the calibrated intrinsic and extrinsic parameters. The results show that the reprojection error of the method is distributed within [-0.3, 0.3], that the dynamically recognized results overlap the actual trajectory with a 96% coincidence rate, and that the method helps reduce the 3D measurement error of binocular stereo vision.
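A sketch of the first-order radial model this class of methods works with. The forward model x_d = x(1 + k1 r²) has no simple closed-form inverse, so undistortion is commonly done by fixed-point iteration (function and variable names are illustrative):

```python
import numpy as np

def undistort_radial(x_d, k1, iters=10):
    """Invert x_d = x * (1 + k1 * |x|^2) on normalized image
    coordinates by fixed-point iteration."""
    x = np.array(x_d, dtype=float)
    for _ in range(iters):
        r2 = np.sum(x**2, axis=-1, keepdims=True)
        x = x_d / (1.0 + k1 * r2)
    return x

# round trip: distort a normalized point, then recover it
x_true = np.array([[0.3, 0.2]])
k1 = -0.1   # mild barrel distortion
x_dist = x_true * (1.0 + k1 * np.sum(x_true**2, axis=-1, keepdims=True))
x_rec = undistort_radial(x_dist, k1)
```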

4.
This paper presents a 3D contour reconstruction approach employing a wheeled mobile robot equipped with an active laser-vision system. Under observation from an onboard CCD camera, a laser line projector fixed below the camera detects the bottom shape of an object, while an actively controlled upper laser line projector is used for 3D contour reconstruction. The mobile robot is driven around the object by a visual servoing and localization technique while the 3D contour of the object is reconstructed from the 2D image of the projected laser line. Asymptotic convergence of the closed-loop system is established. The proposed algorithm has also been validated experimentally with a Dr Robot X80sv mobile robot upgraded with the low-cost active laser-vision system, demonstrating effective real-time performance. This novel laser-vision robotic system can further be applied in unknown environments to obstacle avoidance and guidance control tasks. Copyright © 2011 John Wiley and Sons Asia Pte Ltd and Chinese Automatic Control Society
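Reconstructing the contour from the 2D image of the projected laser line amounts to ray-plane intersection: each pixel on the detected line defines a viewing ray, which is intersected with the calibrated laser plane n·X + d = 0 in the camera frame. A minimal sketch (all names and numbers are assumed for illustration):

```python
import numpy as np

def laser_point(pixel, K, plane_n, plane_d):
    """Intersect the viewing ray through `pixel` with the laser
    plane n . X + d = 0, both expressed in the camera frame."""
    ray = np.linalg.inv(K) @ np.array([pixel[0], pixel[1], 1.0])
    t = -plane_d / (plane_n @ ray)   # ray parameter at the plane
    return t * ray                   # 3D point on the laser plane

K = np.array([[800.0, 0.0, 320.0],
              [0.0, 800.0, 240.0],
              [0.0, 0.0, 1.0]])
# a vertical laser plane at x = 0.1 m in the camera frame
P = laser_point((400.0, 240.0), K, np.array([1.0, 0.0, 0.0]), -0.1)
```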

5.
《Real》1998,4(5):349-359
We have previously demonstrated that the performance of tracking algorithms can be improved by integrating information from multiple cues in a model-driven Bayesian reasoning framework. Here we extend our work to active vision tracking with variable camera geometry. Many existing active tracking algorithms avoid the problem of variable camera geometry by tracking view-independent features, such as corners and lines. However, the performance of algorithms based on such single features deteriorates greatly in the presence of specularities and dense clutter. We show that, by integrating multiple cues and updating the camera geometry online, it is possible to track a complicated object moving arbitrarily in three-dimensional (3D) space. We use a four degree-of-freedom (4-DoF) binocular camera rig to track three focus features of an industrial object whose complete model is known. The camera geometry is updated using the rig control commands and a kinematic model of the stereo head. The extrinsic parameters are further refined by interpolation from a previously sampled calibration of the head's workspace. The 2D target position estimates are obtained by a combination of blob detection, edge searching, and gray-level matching, aided by projecting the model's geometric structure using current estimates of the camera geometry. The information is represented as a probability density distribution and propagated in a Bayes net. The Bayesian reasoning performed in the 2D image is coupled with the rigid model geometry constraint in 3D space. An αβ filter is used to smooth the tracking pursuit and to predict the position of the object in the next iteration of data acquisition. The solution of the inverse kinematic problem at the predicted position is used to control the position of the stereo head. Finally, experiments show that a target undergoing arbitrary 3D motion can be successfully tracked in the presence of specularities and dense clutter.
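The αβ filter mentioned above is a fixed-gain predictor-corrector; a minimal one-dimensional sketch (the gains are illustrative, not those used in the paper):

```python
def alpha_beta_step(x, v, z, dt, alpha=0.5, beta=0.1):
    """One alpha-beta filter cycle: predict the position forward,
    then correct position and velocity with the innovation."""
    x_pred = x + dt * v            # prediction (used to drive the head)
    r = z - x_pred                 # innovation: measurement - prediction
    x_new = x_pred + alpha * r     # corrected position
    v_new = v + (beta / dt) * r    # corrected velocity
    return x_new, v_new

# track a target moving at a constant 2 units per frame
x, v = 0.0, 0.0
for k in range(1, 201):
    x, v = alpha_beta_step(x, v, z=2.0 * k, dt=1.0)
```

For a constant-velocity target the filter converges with zero steady-state lag, which is what makes the one-step-ahead prediction usable for controlling the stereo head.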

6.
Structure from controlled motion
This paper deals with the recovery of 3D information using a single mobile camera in the context of active vision. First, we propose a general, revisited formulation of the structure-from-known-motion problem. Within the same formalism, we handle various kinds of 3D geometric primitives such as points, lines, cylinders, and spheres. We also aim at minimizing the effects of the different measurement errors involved in this process. More precisely, we mathematically determine optimal camera configurations and motions that lead to a robust and accurate estimation of the 3D structure parameters. We apply the visual servoing approach to perform these camera motions, using a control law in closed loop with respect to the visual data. Real-time experiments on 3D structure estimation of points and cylinders are reported; they demonstrate that this active vision strategy can very significantly improve estimation accuracy.
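Visual-servoing control laws of the kind used for such camera motions are built on the interaction matrix of the observed features. For a normalized image point (x, y) at depth Z, the standard matrix relating the camera's spatial velocity (translation, rotation) to the point's image velocity is:

```python
import numpy as np

def point_interaction_matrix(x, y, Z):
    """Standard 2x6 interaction (image Jacobian) matrix of a
    normalized image point at depth Z."""
    return np.array([
        [-1.0 / Z, 0.0, x / Z, x * y, -(1.0 + x**2), y],
        [0.0, -1.0 / Z, y / Z, 1.0 + y**2, -x * y, -x],
    ])
```

A typical closed-loop control law is then v = -λ L⁺ e, with e the feature error and L⁺ the pseudo-inverse of the stacked interaction matrices.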

7.
This paper addresses the problem of moving-object reconstruction. Several methods have been published over the past 20 years, including stereo reconstruction as well as multi-view factorization methods. In general, reconstruction algorithms compute the 3D structure of the object and the camera parameters in a non-optimal way, after which a nonlinear numerical optimization refines the reconstructed camera parameters and 3D coordinates. In this paper, we propose an adjustment method that improves on the well-known Tomasi–Kanade factorization method. The novelty, which yields the high speed of the algorithm, is that the core of the proposed method is an alternation in which we give optimal solutions to the subproblems. The improved method is discussed and compared to the widely used bundle adjustment algorithm.
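For context, the core of the Tomasi–Kanade method that such adjustments build on is a rank-3 factorization of the registered measurement matrix; a minimal SVD sketch (the metric-upgrade step resolving the affine ambiguity is omitted):

```python
import numpy as np

def factorize(W):
    """Rank-3 factorization of a 2F x P measurement matrix into
    motion M (2F x 3) and shape S (3 x P), up to an affine ambiguity."""
    Wc = W - W.mean(axis=1, keepdims=True)   # register to the centroid
    U, s, Vt = np.linalg.svd(Wc, full_matrices=False)
    M = U[:, :3] * np.sqrt(s[:3])            # motion factor
    S = np.sqrt(s[:3])[:, None] * Vt[:3]     # shape factor
    return M, S
```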

8.
曹煜, 陈秀宏. 《计算机工程》 (Computer Engineering), 2012, 38(5): 224-226, 229
For the problem of 3D reconstruction from a single self-calibrated image, a silhouette-based 3D reconstruction method is proposed. The target object is photographed with a rig of angled planar mirrors, and edge tracking on the resulting image yields closed contour curves. Two pairs of silhouettes satisfying the mirror-imaging principle are used to compute vanishing points, the constraint relations among the vanishing points give the camera parameters, and on this basis the 3D model of the object is reconstructed. Experimental results show that the method can quickly reconstruct realistic 3D models.
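In homogeneous coordinates, vanishing-point computations of this kind reduce to cross products: the line through two image points is their cross product, and the intersection of two lines is again a cross product. A small sketch (a finite intersection is assumed; the example points are illustrative):

```python
import numpy as np

def line_through(p1, p2):
    # homogeneous line through two image points
    return np.cross(p1, p2)

def intersection(l1, l2):
    # homogeneous intersection of two lines; fails if it lies at infinity
    v = np.cross(l1, l2)
    return v / v[2]

# two edges meeting at (1, 1): y = x and y = -x + 2
l1 = line_through([0.0, 0.0, 1.0], [1.0, 1.0, 1.0])
l2 = line_through([2.0, 0.0, 1.0], [0.0, 2.0, 1.0])
v = intersection(l1, l2)
```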

9.
Objective: Depth acquisition is a key technology in 3D reconstruction, virtual reality, and related applications; among non-contact 3D measurement techniques, monocular depth acquisition is the cheapest and also the most technically difficult. Traditional monocular methods rely on depth cues such as linear perspective, texture gradient, motion parallax, and focus/defocus; they are computationally expensive, demand high camera precision, and are limited in their applicable scenes. Based on the change in object-surface brightness caused by moving a point light source of fixed intensity through the scene, this paper proposes a simple and fast monocular depth extraction method. Method: Starting from a surface reflection model, the radiance of an object surface under the light source is derived; photometric analysis then relates surface radiance to camera image brightness. With this relation in hand, experiments are designed to solve for depth from the image-brightness changes induced by moving the point source. Result: The algorithm achieves good recovery in both simple and everyday scenes, with errors between estimated and true depth below 10%. Conclusion: The method estimates depth from brightness changes induced by light-source motion, avoids a complex camera calibration procedure, has low computational complexity, and constitutes a new way to acquire scene depth information.
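A minimal sketch of the photometric relation such a method exploits: under a Lambertian model, a point source of fixed intensity produces image brightness proportional to cos θ / r², so moving the source changes r and thereby constrains the depth of the surface point (the function name and the unit albedo/intensity are illustrative):

```python
import numpy as np

def lambertian_brightness(P, L, n, albedo=1.0, intensity=1.0):
    """Brightness of surface point P (normal n) lit by a point
    source at L: albedo * intensity * cos(theta) / r^2."""
    d = L - P
    r = np.linalg.norm(d)
    cos_t = max(float(n @ d) / (r * np.linalg.norm(n)), 0.0)
    return albedo * intensity * cos_t / r**2

# the same frontal point, lit from two source positions
b_near = lambertian_brightness(np.zeros(3), np.array([0.0, 0.0, 2.0]),
                               np.array([0.0, 0.0, 1.0]))
b_far = lambertian_brightness(np.zeros(3), np.array([0.0, 0.0, 4.0]),
                              np.array([0.0, 0.0, 1.0]))
```

For a frontal surface, the ratio of two such measurements eliminates the unknown albedo, which is what allows depth to be solved from brightness changes alone.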

10.
In this paper, a continuous estimator strategy is utilized to asymptotically identify the six degree-of-freedom velocity of a moving object using a single fixed camera. The design of the estimator is facilitated by the fusion of homography-based techniques with Lyapunov design methods. Similar to the stereo vision paradigm, the proposed estimator utilizes different views of the object from a single camera to calculate 3D information from 2D images. In contrast to some of the previous work in this area, no explicit model is used to describe the movement of the object; rather, the estimator is constructed based on bounds on the object's velocity, acceleration, and jerk.

13.
Several non-rigid structure-from-motion methods have been proposed to recover both the motion and the non-rigid structure of an object. However, these monocular algorithms fail to give reliable 3D shape estimates when the overall rigid motion of the sequence is small. To overcome this limitation, in this paper we propose a novel approach for the 3D Euclidean reconstruction of deformable objects observed by an uncalibrated stereo rig. Using a stereo setup drastically improves the 3D model estimation when the observed 3D shape is mostly deforming without undergoing strong rigid motion. Our approach is based on the following steps. First, the stereo system is automatically calibrated and used to compute metric rigid structures from pairs of views. These 3D shapes are then aligned to a reference view using a RANSAC method in order to compute the mean shape of the object and to select the subset of points that have remained rigid throughout the sequence. The selected rigid points are used to compute frame-wise shape registration and to robustly extract the motion parameters from frame to frame. Finally, all this information serves as the initial estimate of a nonlinear optimization that refines the initial solution and also recovers the non-rigid 3D model. Exhaustive results on synthetic and real data demonstrate the performance of our approach in estimating motion, non-rigid models, and stereo camera parameters, even when there is no rigid motion in the original sequence.
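The frame-wise shape registration over the selected rigid points is, at its core, a least-squares rigid alignment; a sketch of the standard Kabsch/Procrustes solution (the RANSAC point selection described above is omitted):

```python
import numpy as np

def rigid_align(A, B):
    """Least-squares R, t with B ~= R @ A + t, for 3xN point sets
    (Kabsch/Procrustes)."""
    ca = A.mean(axis=1, keepdims=True)
    cb = B.mean(axis=1, keepdims=True)
    H = (A - ca) @ (B - cb).T              # cross-covariance
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T                     # nearest proper rotation
    t = cb - R @ ca
    return R, t
```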

14.
In this paper, we show how an active binocular head, the IIS head, can be easily calibrated with very high accuracy. Our calibration method can also be applied to many other binocular heads. In addition to the proposal and demonstration of a four-stage calibration process, this paper makes three major contributions. First, we propose a motorized-focus lens (MFL) camera model which assumes constant nominal extrinsic parameters; the advantage of constant extrinsic parameters is a simple head/eye relation. Second, we propose a calibration method for the MFL camera model that separates estimation of the image center and effective focal length from estimation of the camera orientation and position. This separation proves to be crucial; otherwise, estimates of the camera parameters would be very sensitive to noise. Third, we show that, once the parameters of the MFL camera model are calibrated, a nonlinear recursive least-squares estimator can be used to refine all 35 kinematic parameters. Real experiments have shown that the proposed method can achieve an accuracy of one pixel prediction error and 0.2 pixel epipolar error, even when all the joints, including the left and right focus motors, are moved simultaneously. This accuracy is good enough for many 3D vision applications, such as navigation, object tracking, and reconstruction.

15.
Stratified self-calibration with the modulus constraint
In computer vision, and especially for 3D reconstruction, one of the key issues is the retrieval of the camera's calibration parameters, which are needed to obtain metric information about the scene. Often these parameters are obtained through cumbersome calibration procedures, but explicit calibration of the camera can be avoided: self-calibration is based on finding the set of calibration parameters that satisfy some constraints (e.g., constant calibration parameters). Several techniques have been proposed, but it has often proved difficult to reach a metric calibration at once. Therefore, this paper proposes a stratified approach, which goes from projective through affine to metric. The key concept used to achieve this is the modulus constraint, which allows retrieval of the affine calibration for constant intrinsic parameters. It is also suited for use in conjunction with scene knowledge and, if the affine calibration is known, can also be used to cope with a changing focal length.

16.
This paper addresses the problem of calibrating camera parameters using variational methods. One problem addressed is the severe lens distortion in low-cost cameras: for many computer vision algorithms aiming to reconstruct reliable representations of 3D scenes, camera distortion, if unaccounted for, leads to inaccurate 3D reconstructions and geometric measurements. A second problem is color calibration, where variations in camera responses result in different color measurements and affect the algorithms that depend on them. We also address extrinsic camera calibration, which estimates the relative poses and orientations of the multiple cameras in the system, and intrinsic camera calibration, which estimates the focal lengths and skew parameters of the cameras. To address these calibration problems, we present multiview stereo techniques based on variational methods that utilize partial and ordinary differential equations. Our approach can also be considered a coordinated refinement of the camera calibration parameters. To reduce the computational complexity of such algorithms, we utilize prior knowledge of the calibration object, making a piecewise-smooth surface assumption, and evolve the pose, orientation, and scale parameters of the 3D model object without requiring 2D feature extraction from the camera views. We derive the evolution equations for the distortion coefficients, the color calibration parameters, and the extrinsic and intrinsic parameters of the cameras, and present experimental results.

17.
To automatically convert 2D engineering drawings into 3D models on a computer, and to display and manipulate those models interactively, all lines, annotations, and other information must first be read from the 2D drawings. This information is organized in suitable data structures and then used to recognize the corresponding geometric elements together with their positions, dimensions, and other attributes, i.e., to perform 3D reconstruction. This paper classifies and analyzes existing 3D reconstruction algorithms, summarizes their main technical difficulties, and presents the core internal data structures.

18.
Design and implementation of a vision-based augmented reality system
To address motion tracking and registration, a problem still rarely tackled in augmented reality, this paper proposes an algorithm that estimates motion parameters from the optical flow of four colored markers and determines the relative pose between a moving object and the camera by combining rigid-body motion properties with a perspective projection model. The algorithm is applied in an augmented reality system built around an optical see-through head-mounted display. The system is simple in structure, lightweight, practical, and easy to implement: in general, only four planar markers are needed to achieve 3D tracking and registration of a moving object; the working range is large, even permitting outdoor augmented reality systems; and the numerical solution is a linear process with small error, satisfying the high-precision 3D registration requirements of augmented reality.

19.
This paper presents a method for computing a camera's extrinsic parameters from spatial-invariant data. Perspective invariants are shape descriptions that remain unchanged under geometric transformations such as projection or a change of viewpoint; because they provide a feature description of an object or scene that is independent of external conditions, they are widely applicable in computer vision. Camera calibration determines the transformation between the 2D image information captured by the camera and the corresponding 3D scene; it comprises intrinsic and extrinsic parameters. The intrinsic parameters characterize the camera's internal and optical properties, including the image-center coordinates (Cx, Cy), the image scale factor Sx, the effective focal length f, and the lens distortion coefficient K. The extrinsic parameters express the camera's position and orientation in world coordinates, consisting of a translation matrix T and a 3x3 rotation matrix R, usually written as the augmented 3x4 matrix [R T]. Based on computed perspective-invariant data, this paper gives a method for calibrating the extrinsic parameters; experimental results show that the method is highly robust.
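The [R T] matrix described above maps homogeneous world coordinates into the camera frame; a minimal sketch of assembling and applying it:

```python
import numpy as np

def extrinsic_matrix(R, T):
    # stack the 3x3 rotation and the translation into [R | T]
    return np.hstack([R, np.reshape(T, (3, 1))])

def world_to_camera(Xw, R, T):
    Xh = np.append(Xw, 1.0)             # homogeneous world point
    return extrinsic_matrix(R, T) @ Xh  # camera-frame coordinates

# the world origin seen by a camera translated by (1, 2, 3)
Xc = world_to_camera(np.zeros(3), np.eye(3), np.array([1.0, 2.0, 3.0]))
```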

20.
Stereo vision is an important branch of computer vision; after more than forty years of research and development, the technique plays an increasingly important role in many fields. This paper systematically introduces the basic principles and content of stereo vision research, noting that a binocular vision system is realized in several steps: image acquisition, camera calibration, feature extraction, stereo matching, 3D reconstruction, and post-processing. It analyzes the technical characteristics, open problems, and solutions of each step; compares the strengths, weaknesses, and applicable scope of the various techniques, with emphasis on feature extraction and stereo matching; briefly surveys the application areas of stereo vision; and summarizes the main open problems and future research directions of current stereo vision research.
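For the 3D reconstruction step of the pipeline above, a rectified binocular pair reduces depth recovery to the classic relation Z = f·b/d; a minimal sketch with assumed numbers:

```python
def disparity_to_depth(d_pixels, focal_px, baseline_m):
    """Rectified stereo: Z = f * b / d, with the focal length in
    pixels, the baseline in meters, and the disparity in pixels."""
    return focal_px * baseline_m / d_pixels

# 800 px focal length, 10 cm baseline, 16 px matched disparity
Z = disparity_to_depth(16.0, 800.0, 0.10)
```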
