Similar Documents
20 similar documents found.
1.
Camera Calibration in Robot Stereo Vision   (Cited by 1: 0 self, 1 other)
To address the calibration problem in robot stereo vision, a camera calibration method is proposed that lies between traditional calibration and self-calibration. Unlike traditional methods, the intrinsic parameters, distortion parameters, and extrinsic parameters are calibrated separately. First, the intrinsic parameters are obtained with Zhang's planar calibration method; next, the distortion parameters are estimated while accounting for radial lens distortion; finally, the calibration of the extrinsic parameters under camera motion is examined in detail, which avoids having to recalibrate from scratch whenever the camera moves within the original calibration setup. Experiments based on the traditional calibration-template principle, combined with a binocular stereo vision model, verify the feasibility of this separated calibration method.
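The separation of intrinsics and extrinsics described above rests on the pinhole projection model. A minimal numpy sketch (the matrix values and function name are illustrative, not from the paper):

```python
import numpy as np

def project(K, R, t, X):
    """Project 3-D points X (N,3) through a pinhole camera with
    intrinsics K, rotation R, and translation t."""
    Xc = X @ R.T + t                  # world frame -> camera frame
    x = Xc[:, :2] / Xc[:, 2:3]        # perspective division
    uv = x @ K[:2, :2].T + K[:2, 2]   # focal lengths and principal point
    return uv

# Hypothetical intrinsics and pose.
K = np.array([[800.0, 0.0, 320.0],
              [0.0, 800.0, 240.0],
              [0.0, 0.0, 1.0]])
R = np.eye(3)
t = np.array([0.0, 0.0, 5.0])
X = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0]])
uv = project(K, R, t, X)
```

Because K, R, and t enter the projection in separate stages, each group can in principle be estimated in its own step, which is the structure the abstract describes.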

2.
Computer vision usually adopts the pinhole camera model, but fisheye or wide-angle lenses with large distortion introduce both perspective deformation and lens aberration into the image. The traditional remedy is to calibrate the camera parameters with a standard grid board, which requires considerable prior information. For accurate calibration, a new method is proposed that needs no 3-D spatial information: a single ordinary image suffices to calibrate the camera's distortion coefficients and intrinsic parameters, and to rectify the distorted image up to a similarity transformation. To correct lens distortion and compute vanishing points, the method exploits the projective invariance of straight lines: collinear points project to collinear points, and a pencil of parallel lines projects to lines meeting in a single point. To correct perspective deformation, it further uses the similarity invariance of lines: orthogonal lines remain orthogonal under a similarity transformation. The calibrated parameters include the distortion coefficients, focal length, principal point, and aspect ratio, and the image is rectified up to a similarity. Experiments on both laboratory and outdoor images yield accurate and reliable results.

3.
An Active-Vision Self-Calibration Algorithm Accounting for Second-Order Radial Distortion   (Cited by 1: 0 self, 1 other)
Camera self-calibration based on active vision is an important branch of camera calibration. Since images taken by ordinary CCD cameras suffer from various geometric distortions, of which radial distortion is the most severe, self-calibration techniques that account for radial distortion are of real significance. To make the calibration more accurate, a self-calibration method for the intrinsic parameters that considers second-order radial distortion is proposed. By deriving the epipolar geometry constraint under second-order radial distortion, it is shown that if the camera can be controlled to perform four translations not lying in the same plane, both the intrinsic parameters and the second-order radial distortion coefficients can be calibrated. Simulation results show that the algorithm is highly accurate and reasonably robust, and can be used for camera calibration.

4.
Research and Progress on Camera Self-Calibration Methods   (Cited by 61: 0 self, 61 other)
This paper reviews the development of camera self-calibration over recent years and surveys the main methods by category. Compared with traditional calibration, self-calibration requires no calibration object: the intrinsic parameters can be estimated solely from correspondences between image points across views. The survey focuses on the major self-calibration methods under the perspective model, covering both constant and varying intrinsic parameters, and concludes with a brief account of several self-calibration methods under non-perspective models.

5.
A New Camera Self-Calibration Method Based on the Kruppa Equations   (Cited by 12: 0 self, 12 other)
To address the lack of robustness of traditional self-calibration algorithms based on the Kruppa equations, a new two-step calibration method is proposed. The method first uses the classical Levenberg-Marquardt optimization or a genetic algorithm to solve for the scale factor that is usually eliminated from the Kruppa equations, and then completes the camera calibration with a linear method. Extensive experiments on both simulated and real images show that the method greatly improves the robustness and accuracy of Kruppa-equation-based calibration.

6.
Hua Kaisheng, Wang Lin. Computer Engineering, 2012, 38(15): 264-267
To solve the camera calibration problem in automatic drawing of road traffic accident scene diagrams, a planar-template camera calibration method is proposed. Initial values are computed in the central image region, where distortion is small; a subspace trust-region method based on the interior-reflective Newton method is then used to solve for part of the parameters; finally, a distortion model is introduced and the remaining parameters are obtained from straight-line constraints. Experimental results show that the method simplifies the calibration procedure, reduces the amount of computation, and speeds up the calculation.

7.
Self-Calibration of Rotating and Zooming Cameras   (Cited by 4: 0 self, 4 other)
In this paper we describe the theory and practice of self-calibration of cameras which are fixed in location and may freely rotate while changing their internal parameters by zooming. The basis of our approach is the so-called infinite homography constraint, which relates the unknown calibration matrices to the computed inter-image homographies. For calibration to be possible, some constraints must be placed on the internal parameters of the camera. We present various self-calibration methods. First, an iterative non-linear method is described which is very versatile in terms of the constraints that may be imposed on the camera calibration: each camera parameter may be assumed known, constant throughout the sequence but unknown, or free to vary. Second, we describe a fast linear method which works under the minimal assumption of zero camera skew or the more restrictive conditions of square pixels (zero skew and known aspect ratio) or known principal point. We show experimental results on both synthetic and real image sequences (where ground-truth data was available) to assess the accuracy and stability of the algorithms and to compare the results of applying different constraints on the camera parameters. We also derive an optimal Maximum Likelihood estimator for the calibration and motion parameters. Prior knowledge about the distribution of the estimated parameters (such as the location of the principal point) may also be incorporated via Maximum a Posteriori estimation. We then identify some near-ambiguities that arise under rotational motions, showing that coupled changes of certain parameters are barely observable, making them indistinguishable. Finally, we study the negative effect of radial distortion on the self-calibration process and point out some possible solutions. An erratum to this article is available.
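The infinite-homography constraint cited above says that for a purely rotating camera the inter-image homography is H = K R K⁻¹, so the image of the absolute conic ω = (K Kᵀ)⁻¹ is preserved across views when the intrinsics are constant. A small numerical check (the intrinsics and rotation are invented, not the paper's data):

```python
import numpy as np

# Hypothetical constant intrinsics.
K = np.array([[1000.0, 0.0, 512.0],
              [0.0, 1000.0, 384.0],
              [0.0, 0.0, 1.0]])

def rot_z(a):
    """Rotation about the optical axis by angle a (radians)."""
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])

R = rot_z(0.3)
H = K @ R @ np.linalg.inv(K)     # infinite homography for a pure rotation
omega = np.linalg.inv(K @ K.T)   # image of the absolute conic

# Constant-intrinsics constraint: omega = H^{-T} omega H^{-1} (up to scale).
Hinv = np.linalg.inv(H)
lhs = Hinv.T @ omega @ Hinv
lhs = lhs / lhs[2, 2]
rhs = omega / omega[2, 2]
```

Stacking this constraint over several rotations gives the linear system on ω from which K is recovered by Cholesky factorization.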

8.
This paper addresses the problem of calibrating camera parameters using variational methods. One problem addressed is the severe lens distortion in low-cost cameras. For many computer vision algorithms aiming at reconstructing reliable representations of 3D scenes, the camera distortion effects will lead to inaccurate 3D reconstructions and geometrical measurements if not accounted for. A second problem is the color calibration problem caused by variations in camera responses that result in different color measurements and affect the algorithms that depend on these measurements. We also address the extrinsic camera calibration that estimates relative poses and orientations of multiple cameras in the system, and the intrinsic camera calibration that estimates focal lengths and the skew parameters of the cameras. To address these calibration problems, we present multiview stereo techniques based on variational methods that utilize partial and ordinary differential equations. Our approach can also be considered as a coordinated refinement of camera calibration parameters. To reduce the computational complexity of such algorithms, we utilize prior knowledge of the calibration object, making a piecewise smooth surface assumption, and evolve the pose, orientation, and scale parameters of such a 3D model object without requiring 2D feature extraction from camera views. We derive the evolution equations for the distortion coefficients, the color calibration parameters, and the extrinsic and intrinsic parameters of the cameras, and present experimental results.

9.
Plane-based self-calibration aims at the computation of camera intrinsic parameters from homographies relating multiple views of the same unknown planar scene. This paper proposes a straightforward geometric statement of plane-based self-calibration, through the concept of metric rectification of images. A set of constraints is derived from a decomposition of metric rectification in terms of intrinsic parameters and planar scene orientation. These constraints are then solved using an optimization framework based on the minimization of a geometrically motivated cost function. The link with previous approaches is demonstrated and our method appears to be theoretically equivalent but conceptually simpler. Moreover, a solution dealing with radial distortion is introduced. Experimentally, the method is compared with plane-based calibration and very satisfactory results are obtained. Markerless self-calibration is demonstrated using an intensity-based estimation of the inter-image homographies.

10.
Straight lines have to be straight   (Cited by 18: 0 self, 18 other)
Most algorithms in 3D computer vision rely on the pinhole camera model because of its simplicity, whereas video optics, especially low-cost wide-angle or fish-eye lenses, generate a lot of non-linear distortion which can be critical. To find the distortion parameters of a camera, we use the following fundamental property: a camera follows the pinhole model if and only if the projection of every line in space onto the camera is a line. Consequently, if we find the transformation on the video image so that every line in space is viewed in the transformed image as a line, then we know how to remove the distortion from the image. The algorithm consists of first doing edge extraction on a possibly distorted video sequence, then doing polygonal approximation with a large tolerance on these edges to extract possible lines from the sequence, and then finding the parameters of our distortion model that best transform these edges to segments. Results are presented on real video images, compared with distortion calibration obtained by a full camera calibration method which uses a calibration grid. Received: 27 December 1999 / Accepted: 8 November 2000
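The "lines must be straight" criterion above can be sketched as a tiny plumb-line search: bend a synthetic edge with a one-parameter radial model, then grid-search the correction coefficient that minimizes the residual of a line fit. Everything here (model, coefficient values, function names) is an illustrative assumption, not the paper's implementation:

```python
import numpy as np

def radial_distort(pts, k1):
    """One-parameter radial model about the principal point (origin)."""
    r2 = np.sum(pts**2, axis=1, keepdims=True)
    return pts * (1.0 + k1 * r2)

def straightness(pts):
    """RMS distance of points to their total-least-squares line."""
    c = pts - pts.mean(axis=0)
    return np.linalg.svd(c, full_matrices=False)[1][-1] / np.sqrt(len(pts))

# A straight edge in normalized coordinates, bent by mild barrel distortion.
t = np.linspace(-0.8, 0.8, 50)
line = np.stack([t, 0.3 * t + 0.2], axis=1)
observed = radial_distort(line, k1=-0.05)

# Search for the correction coefficient that makes the edge straightest
# (to first order, applying +k undoes -k).
candidates = np.linspace(0.0, 0.1, 201)
errors = [straightness(radial_distort(observed, k)) for k in candidates]
best_k = candidates[int(np.argmin(errors))]
```

The full method replaces this grid search with non-linear optimization over all detected edges and a richer distortion model, but the cost function is the same idea: distorted lines fit a straight line badly, undistorted ones fit well.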

11.
Using Specific Displacements to Analyze Motion without Calibration   (Cited by 2: 2 self, 0 other)
In the context of uncalibrated image sequences and self-calibration, this paper analyzes the use of specific displacements (such as fixed-axis rotations or pure translations) and of specific sets of camera parameters. These induce affine or metric constraints, which can lead to self-calibration and 3D reconstruction. A unified formalism is developed, covering models already in the literature as well as some novel ones. A hierarchy of special situations is described, so as to tailor the camera model either to the actual robotic device supporting the camera or to the reduced set of data available. This visual-motion-perception module leads to the estimation of a minimal 3D parameterization of the retinal displacement for a monocular visual system without calibration, and from there to self-calibration and 3D dynamic analysis. The implementation of these equations is analyzed and tested experimentally.

12.
Hybrid central catadioptric and perspective camera systems are desirable in practice because they can capture a large field of view as well as high-resolution images. However, calibrating such a system is challenging due to the heavy distortion of catadioptric cameras. In addition, previous calibration methods only suit systems consisting of perspective cameras and catadioptric cameras with parabolic mirrors, and they require priors on the intrinsic parameters of the perspective cameras. In this work, we provide a new approach to these problems. We show that if the hybrid camera system consists of at least two central catadioptric cameras and one perspective camera, both the intrinsic and extrinsic parameters of the system can be calibrated linearly without priors on the intrinsic parameters of the perspective cameras, and the supported central catadioptric cameras can be more generic. An approximated polynomial model is derived and used to rectify the catadioptric images. First, from the epipolar geometry between the perspective and rectified catadioptric images, the distortion parameters of the polynomial model are estimated linearly. Then a new method estimates the intrinsic parameters of a central catadioptric camera from the parameters of the polynomial model, so the catadioptric cameras can be calibrated. Finally, a linear self-calibration method for the hybrid system is given using the calibrated catadioptric cameras. The main advantage of our method is that it not only calibrates both the intrinsic and extrinsic parameters of the hybrid camera system, but also reduces the traditionally non-linear self-calibration of perspective cameras to a linear process. Experiments show that the proposed method is robust and reliable.

13.
A Camera Calibration Method Based on Non-Metric Distortion Correction   (Cited by 4: 0 self, 4 other)
A camera calibration method based on non-metric distortion correction is designed. The method corrects lens distortion with a single-parameter division model: exploiting the fact that perspective projection of straight lines preserves collinearity, the distortion-model coefficient and the principal point are calibrated by Levenberg-Marquardt (LM) optimization, after which the image points are corrected so that they satisfy the pinhole mapping. The remaining parameters are then solved linearly from the two basic equations in the intrinsic parameters. Experiments show that the method is robust during the non-metric calibration stage and, compared with Zhang's calibration method, can calibrate from a single target image, avoids coupling the intrinsic and extrinsic model parameters, and improves calibration efficiency.
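Part of the appeal of the single-parameter division model mentioned above is that undistortion is a closed-form expression, while the exact forward map only needs a scalar quadratic. A sketch in normalized coordinates about the principal point (the λ value is illustrative):

```python
import numpy as np

def undistort_division(pts, lam):
    """Division model: x_u = x_d / (1 + lam * r_d^2)."""
    r2 = np.sum(pts**2, axis=1, keepdims=True)
    return pts / (1.0 + lam * r2)

def distort_division(pts, lam):
    """Exact forward map: solve lam*r_u*r_d^2 - r_d + r_u = 0 for r_d,
    taking the root that tends to r_u as lam -> 0."""
    r_u = np.linalg.norm(pts, axis=1, keepdims=True)
    r_d = (1.0 - np.sqrt(1.0 - 4.0 * lam * r_u**2)) / (2.0 * lam * r_u)
    return pts * (r_d / r_u)

# Collinear points survive a distort/undistort round trip exactly.
lam = -0.2
t = np.linspace(0.1, 0.9, 5)
pts = np.stack([t, 0.5 * t + 0.1], axis=1)
recovered = undistort_division(distort_division(pts, lam), lam)
```

The calibration in the abstract chooses λ and the principal point so that corrected edge points become collinear again, which is exactly the property this round trip preserves.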

14.
To make camera lens distortion correction simple and fast, a correction method based on the projection properties of straight lines is designed. The main causes of lens distortion and the distortion model are introduced; the relation among three points of a line under ideal projection is given, a fitness function is defined, and a genetic algorithm is used to find the optimal set of distortion parameters. A correction program was written in MATLAB and verified experimentally. The experiments show that the optimal distortion parameters achieve good image distortion correction. The method only requires straight lines to be present in the scene, needs simple experimental conditions and a simple program, and is convenient for fast on-site correction.

15.
In 3-D reconstruction with dual linear-array CCDs, calibrating the linear-array CCD cameras and correcting lens distortion are fundamental steps. A calibration and lens-distortion correction method for dual linear-array CCDs in 3-D reconstruction is proposed. Based on the homography between the left and right cameras and the imaging principle of linear-array CCDs, the spatial relation between the two cameras is decomposed into attitude angles and a shear angle. The attitude angles and lens distortion are corrected by fitting target-image data, and the inter-camera calibration is completed from the recovered shear angle, thus calibrating dual linear-array CCDs with lens distortion. Experimental results show that the calibration and correction accuracy meets the requirements of image matching in subsequent 3-D reconstruction.

16.
A New Linear Camera Self-Calibration Method   (Cited by 19: 2 self, 19 other)
Li Hua, Wu Fuchao, Hu Zhanyi. Chinese Journal of Computers, 2000, 23(11): 1121-1129
A new linear camera self-calibration method based on an active vision system is proposed. "Based on an active vision system" means the camera is mounted on a platform whose motion can be precisely controlled. The main feature of the method is that all five intrinsic parameters can be solved linearly; to the authors' knowledge, existing methods in the literature can solve only four intrinsic parameters linearly, and when the camera follows the full projective model, i.e., when a skew factor is present, none of the existing linear methods apply. The basic idea is to control the camera to perform five groups of planar orthogonal motions and calibrate the camera linearly from the epipoles in the images. In addition, exploiting the special form of the fundamental matrix under pure camera translation, a 2-point algorithm for computing the fundamental matrix is proposed; compared with the 8-point algorithm, it greatly improves the accuracy and robustness of the recovered epipoles. The paper also analyzes near-degenerate configurations (where two or more of the five motion planes are parallel) in detail and proposes strategies for handling them, strengthening the robustness of the algorithm. Experiments on simulated and real images show that the self-calibration method is robust and accurate.
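The 2-point idea rests on a standard fact: under pure translation the fundamental matrix is skew-symmetric, F = [e]×, so each correspondence gives one linear constraint e · (x × x') = 0 and two correspondences already fix the epipole up to scale. A synthetic check (intrinsics, translation, and points invented for illustration):

```python
import numpy as np

# Hypothetical intrinsics and a pure camera translation t (R = I).
K = np.array([[700.0, 0.0, 320.0],
              [0.0, 700.0, 240.0],
              [0.0, 0.0, 1.0]])
t = np.array([0.2, 0.1, 0.05])

def project(X, shift):
    """Homogeneous pinhole projection of a 3-D point X from a camera
    displaced by `shift` (identity rotation)."""
    x = K @ (X - shift)
    return x / x[2]

X1 = np.array([1.0, 0.5, 4.0])
X2 = np.array([-0.5, 1.0, 6.0])
x1, x2 = project(X1, 0.0), project(X2, 0.0)
x1p, x2p = project(X1, t), project(X2, t)

# Each correspondence constrains the epipole: e . (x cross x') = 0,
# so two correspondences determine e as a cross product.
e = np.cross(np.cross(x1, x1p), np.cross(x2, x2p))
e = e / e[2]

e_true = K @ t          # epipole = image of the translation direction
e_true = e_true / e_true[2]
```

With noisy data one would stack many such constraints and take the least-squares null vector instead of a single cross product.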

17.
Camera Self-Calibration Based on Genetic Algorithms   (Cited by 1: 0 self, 1 other)
Camera calibration is a key technique in computer vision. Self-calibration computes the intrinsic parameters from images alone, so its procedure is simple and widely applicable. The traditional Kruppa equations used for self-calibration require computing not only the fundamental matrix but also the epipoles; since the epipoles are not fixed, they make the results unstable. To overcome this shortcoming of traditional self-calibration, a genetic algorithm is used to carry out self-calibration with Hartley's new form of the Kruppa equations, turning the whole process into the minimization of a cost function from which the intrinsic parameters are obtained and thereby eliminating the instability caused by the epipoles. Experimental results show that the method is simple and effective and can serve as a general-purpose calibration tool.

18.
This paper surveys zoom-lens calibration approaches, such as pattern-based calibration, self-calibration, and hybrid (or semiautomatic) calibration. We describe the characteristics and applications of the various calibration methods employed in zoom-lens calibration and offer a novel classification model for zoom-lens calibration approaches for both single and stereo cameras. We elaborate on these calibration techniques to discuss their common characteristics and attributes. Finally, we present a comparative analysis of zoom-lens calibration approaches, highlighting the advantages and disadvantages of each. Furthermore, we compare the linear and nonlinear camera models proposed for zoom-lens calibration and list the different techniques used to model the camera's parameters over zoom (or focus) settings.

19.
Camera models and their calibration are required in many applications for coordinate conversion between the two-dimensional image and the real three-dimensional world. Self-calibration is usually chosen for camera calibration in uncontrolled environments because the scene geometry may be unknown. However, when no reliable feature correspondences can be established, or when the camera is static relative to the majority of the scene, self-calibration fails to work. Object-based calibration methods are more reliable than self-calibration because an object of known geometry is available, but most of them cannot work in uncontrolled environments, since they require geometric knowledge of the calibration object. Though in the past few years the simplest geometry required of a calibration object has been reduced to a 1D object with at least three points, it is still not easy to find such an object in an uncontrolled environment, not to mention the additional metric/motion requirements of the existing methods. Meanwhile, it is very easy to find a 1D object with two end points in most scenes. It is therefore worthwhile to investigate an object-based method built on such a simple object, so that a camera can still be calibrated when both self-calibration and existing object-based calibration fail. We propose a new camera calibration method that requires only an object with two end points, the simplest geometry extractable from many real-life objects. By observing such a 1D object at different positions/orientations on a plane that is fixed relative to the camera, both intrinsic (focal length) and extrinsic (rotation angles and translations) camera parameters can be calibrated with the proposed method.
The proposed method has been tested on simulated data and on real data from both controlled and uncontrolled environments, including situations where no explicit 1D calibration object is available, e.g. a human walking sequence. Very accurate camera calibration results have been achieved.

20.
A Stable and Accurate Camera Calibration Method   (Cited by 1: 0 self, 1 other)
Building on Tsai's two-step method, a more stable and accurate camera calibration method is proposed. Tsai's two-step method considers only the radial distortion of the lens; to improve calibration accuracy, tangential distortion is taken into account as well. In the first step, as in Tsai's method, the extrinsic parameters are obtained by solving a linear system with least squares; least squares is then used again to solve the linear system in the distortion parameters K1, K2, K3, K4, finally yielding all intrinsic and extrinsic camera parameters. The method is verified experimentally.
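A radial-plus-tangential model with four coefficients is commonly written in Brown-Conrady form; assuming (this is an assumption, the abstract does not spell the model out) that K1...K4 play the role of two radial and two tangential coefficients, the forward map looks like the following sketch (coefficient values invented):

```python
import numpy as np

def brown_distort(pts, k1, k2, p1, p2):
    """Apply radial (k1, k2) plus tangential (p1, p2) distortion to
    normalized image coordinates, Brown-Conrady style."""
    x, y = pts[:, 0], pts[:, 1]
    r2 = x**2 + y**2
    radial = 1.0 + k1 * r2 + k2 * r2**2
    xd = x * radial + 2.0 * p1 * x * y + p2 * (r2 + 2.0 * x**2)
    yd = y * radial + p1 * (r2 + 2.0 * y**2) + 2.0 * p2 * x * y
    return np.stack([xd, yd], axis=1)

pts = np.array([[0.5, 0.0], [0.2, -0.3]])
identity = brown_distort(pts, 0.0, 0.0, 0.0, 0.0)  # no distortion
bent = brown_distort(pts, -0.2, 0.05, 0.001, -0.001)
```

Note that each coefficient enters the residual linearly, which is why, once the extrinsics are fixed, the distortion parameters can be recovered by solving a linear least-squares system, consistent with the second step the abstract describes.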

