Similar Documents
20 similar documents found (search time: 31 ms)
1.
A composite camera calibration method is proposed: the initial intrinsic and extrinsic parameters are calibrated step by step from coplanar points, the radial distortion parameters are solved by least squares, and the solution is then refined iteratively. When identifying calibration points, the center point is located by Hough line detection and line intersection, avoiding the measurement error introduced by the usual Hough ellipse detection. Applied to the qualitative and quantitative analysis of vehicle collision processes, the results demonstrate the method's effectiveness and stability.
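The centre-finding step above reduces to intersecting detected lines. A minimal sketch of that intersection in homogeneous coordinates (a standard construction; the `line_intersection` helper and the example lines are illustrative, not from the paper):

```python
import numpy as np

def line_intersection(l1, l2):
    """Intersect two 2-D lines given in homogeneous form a*x + b*y + c = 0.

    Returns the (x, y) intersection point, or None for parallel lines.
    """
    p = np.cross(l1, l2)          # homogeneous intersection point
    if abs(p[2]) < 1e-12:         # lines are parallel
        return None
    return p[:2] / p[2]

# The diagonals of a square target cross at its centre:
d1 = np.array([1.0, -1.0, 0.0])   # y = x
d2 = np.array([1.0, 1.0, -2.0])   # y = 2 - x
centre = line_intersection(d1, d2)  # centre at (1, 1)
```

Intersecting lines this way is robust to partial occlusion of the target, which is part of why it can outperform ellipse fitting on distorted centre marks.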

2.
A simple coplanar camera calibration method   Cited: 1 (self-citations: 0, by others: 1)
Coplanar camera calibration determines the intrinsic and extrinsic parameters of a camera from a planar template, with the image pixels and 2D feature points known. A simple and effective method is proposed for coplanar calibration that uses the computer array image in the frame buffer directly. The image center is first located by pre-calibration; the remaining parameters are then solved from the correspondence between frame-buffer image coordinates and world coordinates using the orthogonality constraints of the rotation matrix. The algorithm assumes a scale factor of 1 and ignores lens distortion. It was verified on both synthetic and real images; the results show good accuracy, making it a simple and effective calibration method.
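The orthogonality constraints of the rotation matrix mentioned above are commonly enforced by projecting a noisy linear estimate onto the nearest true rotation. A sketch of that standard SVD projection (the helper name and test rotation are illustrative assumptions, not the paper's exact procedure):

```python
import numpy as np

def nearest_rotation(M):
    """Closest rotation matrix to M in Frobenius norm, via SVD: enforces
    R^T R = I and det(R) = +1."""
    U, _, Vt = np.linalg.svd(M)
    R = U @ Vt
    if np.linalg.det(R) < 0:              # repair an improper reflection
        R = U @ np.diag([1.0, 1.0, -1.0]) @ Vt
    return R
```

Applying this projection after a linear solve guarantees the recovered extrinsics describe a rigid motion even when image noise makes the raw estimate slightly non-orthogonal.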

3.
A unified approach to the linear camera calibration problem   Cited: 4 (self-citations: 0, by others: 4)
The camera calibration process relates camera system measurements (pixels) to known reference points in a three-dimensional world coordinate system. The calibration process is viewed as consisting of two independent phases: the first is removing geometrical camera distortion so that rectangular calibration grids are straightened in the image plane, and the second is using a linear affine transformation as a map between the rectified camera coordinates and the geometrically projected coordinates on the image plane of known reference points. Phase one is camera-dependent, and in some systems may be unnecessary. Phase two is concerned with a generic model that includes 12 extrinsic variables and up to five intrinsic parameters. General methods handling additional constraints on the intrinsic variables in a manner consistent with explicit satisfaction of all six constraints on the orthogonal rotation matrix are presented. The use of coplanar and noncoplanar calibration points is described.
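Phase two's linear map between world reference points and image coordinates can be estimated with the classic direct linear transform (DLT). The sketch below is a generic textbook DLT, not the paper's exact formulation; the synthetic camera values are made up for the check:

```python
import numpy as np

def dlt_projection(world, image):
    """Direct linear transform: recover the 3x4 projection matrix (up to
    scale) from >= 6 non-coplanar world<->image correspondences."""
    A = []
    for (X, Y, Z), (u, v) in zip(world, image):
        A.append([X, Y, Z, 1, 0, 0, 0, 0, -u * X, -u * Y, -u * Z, -u])
        A.append([0, 0, 0, 0, X, Y, Z, 1, -v * X, -v * Y, -v * Z, -v])
    _, _, Vt = np.linalg.svd(np.asarray(A, dtype=float))
    return Vt[-1].reshape(3, 4)       # right null vector = flattened P

def project(P, X):
    """Project a 3-D point with a 3x4 projection matrix."""
    x = P @ np.append(X, 1.0)
    return x[:2] / x[2]

# Synthetic check: project the corners of a unit cube with a known camera.
K = np.array([[800.0, 0.0, 320.0], [0.0, 800.0, 240.0], [0.0, 0.0, 1.0]])
P_true = K @ np.hstack([np.eye(3), [[0.1], [-0.2], [5.0]]])
cube = [(x, y, z) for x in (0, 1) for y in (0, 1) for z in (0, 1)]
pix = [project(P_true, np.array(c, dtype=float)) for c in cube]
P_est = dlt_projection(cube, pix)
```

With noise-free correspondences the recovered matrix reproduces the input projections exactly (up to scale); with noisy data the same SVD gives the least-squares estimate.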

4.
Objective: Camera extrinsic calibration is a key step in applications such as ADAS (advanced driver-assistance systems). Traditional extrinsic calibration methods usually depend on specific scenes and specific markers and cannot be performed dynamically in real time in the field. Some extrinsic calibration methods that combine SLAM (simultaneous localization and mapping) or VIO (visual-inertial odometry) rely on point-feature matching and often achieve limited accuracy. For ADAS applications, this paper proposes an extrinsic self-calibration method based on camera-to-map matching. Method: First, lane lines are detected and extracted from the image by deep learning; after data filtering and post-processing, they serve as the input to an optimization problem. Next, lane-line points are associated by nearest-neighbor search and a reprojection error is defined in the image plane. Finally, gradient descent iteratively solves for the optimal camera extrinsic matrix that minimizes the reprojection error between the detected lane lines and the ground-truth map lane lines in the image plane. Results: Tests on vehicles on open roads show that the method converges to the correct extrinsics after several iterations, with rotation accuracy better than 0.2° and translation accuracy better than 0.2 m; compared with calibration based on vanishing points or VIO (2.2° and 0.3 m), it has a clear accuracy advantage. When the camera extrinsics change dynamically, the proposed method quickly converges to the new extrinsics. Conclusion: The method does not depend on specific scenes, supports real-time iterative optimization of the extrinsics, effectively improves extrinsic accuracy, and meets ADAS requirements.
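The final step, gradient descent on an image-plane reprojection error, can be illustrated with a deliberately tiny toy in which the only unknown extrinsic is the yaw angle; everything here (`reproj_cost`, `refine_yaw`, the point set, the learning rate) is a hypothetical sketch, not the paper's implementation:

```python
import numpy as np

def reproj_cost(theta, world_pts, image_pts, f=500.0):
    """Sum of squared pixel errors for a toy camera whose only unknown
    extrinsic parameter is the yaw angle theta."""
    c, s = np.cos(theta), np.sin(theta)
    R = np.array([[c, 0.0, s], [0.0, 1.0, 0.0], [-s, 0.0, c]])
    err = 0.0
    for X, u in zip(world_pts, image_pts):
        Xc = R @ X
        err += np.sum((f * Xc[:2] / Xc[2] - u) ** 2)
    return err

def refine_yaw(theta0, world_pts, image_pts, lr=1e-7, iters=300, h=1e-6):
    """Plain gradient descent with a central-difference gradient."""
    th = theta0
    for _ in range(iters):
        g = (reproj_cost(th + h, world_pts, image_pts)
             - reproj_cost(th - h, world_pts, image_pts)) / (2 * h)
        th -= lr * g
    return th

# Synthetic check: lane-like 3-D points seen under a true yaw of 0.1 rad.
world = [np.array(p) for p in [(1.0, 1.0, 10.0), (-1.0, 2.0, 10.0),
                               (2.0, -1.0, 12.0), (-2.0, -2.0, 11.0)]]
c, s = np.cos(0.1), np.sin(0.1)
R_true = np.array([[c, 0.0, s], [0.0, 1.0, 0.0], [-s, 0.0, c]])
image = [500.0 * (R_true @ X)[:2] / (R_true @ X)[2] for X in world]
```

Starting from a yaw of 0, `refine_yaw(0.0, world, image)` should converge to about 0.1 rad under these synthetic values; the full method optimizes the complete extrinsic matrix the same way.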

5.
Straight lines have to be straight   Cited: 18 (self-citations: 0, by others: 18)
Most algorithms in 3D computer vision rely on the pinhole camera model because of its simplicity, whereas video optics, especially low-cost wide-angle or fish-eye lenses, generate a lot of non-linear distortion which can be critical. To find the distortion parameters of a camera, we use the following fundamental property: a camera follows the pinhole model if and only if the projection of every line in space onto the camera is a line. Consequently, if we find the transformation on the video image so that every line in space is viewed in the transformed image as a line, then we know how to remove the distortion from the image. The algorithm consists of first doing edge extraction on a possibly distorted video sequence, then doing polygonal approximation with a large tolerance on these edges to extract possible lines from the sequence, and then finding the parameters of our distortion model that best transform these edges to segments. Results are presented on real video images, compared with distortion calibration obtained by a full camera calibration method which uses a calibration grid. Received: 27 December 1999 / Accepted: 8 November 2000
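Removing distortion as described requires both a forward distortion model and its inverse. A one-parameter radial sketch with a fixed-point inversion (the paper's actual distortion model is richer; these helpers are illustrative):

```python
def distort(x, y, k1):
    """One-parameter radial model in normalized image coordinates."""
    s = 1.0 + k1 * (x * x + y * y)
    return x * s, y * s

def undistort(xd, yd, k1, iters=20):
    """Invert the radial model by fixed-point iteration: repeatedly divide
    the distorted point by the scale implied by the current estimate."""
    x, y = xd, yd
    for _ in range(iters):
        s = 1.0 + k1 * (x * x + y * y)
        x, y = xd / s, yd / s
    return x, y
```

For the small distortion magnitudes typical of normalized coordinates the iteration contracts quickly, so a round trip distort-then-undistort returns the original point to machine precision.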

6.
To obtain the high-precision images required in the special application scenarios of line-scan cameras, the cameras must be calibrated with high accuracy. A dual line-scan camera calibration method based on bundle adjustment is proposed. The pixel coordinates of the cameras' feature points are obtained by background subtraction. After the intrinsic, extrinsic, and distortion parameters are solved with existing direct linear transformation and nonlinear optimization methods, the initial parameters and the world coordinates form the set to be optimized, which is further refined with the Levenberg-Marquardt method and bundle adjustment so that the reprojection error of the dual line-scan system is minimized. Experiments show that, compared with traditional line-scan camera calibration, the method reduces the reprojection error by 75.01%.

7.
In this paper, we systematically assess the performance of an automatic calibration chart detector. Through simulation we establish the optimal set of control parameters and the rate of successful detection as a function of pose. We validate the simulation results on real images taken from a camera mounted on a robot arm. The results confirm the utility of such simulation studies. The feedback obtained suggested a number of modifications for the chart detection system, which led to a significant improvement in performance. In particular, the chart design was changed to accommodate wider range and better stability in detection. Received: 16 December 1999 / Accepted: 15 October 2000

8.
Linear pose estimation from points or lines   Cited: 10 (self-citations: 0, by others: 10)
Estimation of camera pose from an image of n points or lines with known correspondence is a thoroughly studied problem in computer vision. Most solutions are iterative and depend on nonlinear optimization of some geometric constraint, either on the world coordinates or on the projections to the image plane. For real-time applications, we are interested in linear or closed-form solutions free of initialization. We present a general framework which allows for a novel set of linear solutions to the pose estimation problem for both n points and n lines. We then analyze the sensitivity of our solutions to image noise and show that the sensitivity analysis can be used as a conservative predictor of error for our algorithms. We present a number of simulations which compare our results to two other recent linear algorithms, as well as to iterative approaches. We conclude with tests on real imagery in an augmented reality setup.

9.
LiDAR point clouds and camera images are fused in many application fields, and accurate extrinsic calibration is the prerequisite for fusing the two. Point-cloud feature extraction is the key step of extrinsic calibration, but the low resolution and low quality of point clouds degrade calibration accuracy. To address these problems, a LiDAR-camera extrinsic calibration method based on edge-associated point clouds is proposed. First, dual-echo returns are used to extract the point cloud associated with the calibration-board edges. Then, an optimization procedure extracts from this point cloud board corners consistent with the actual dimensions of the calibration board. Finally, the corners in the point cloud are matched with the corners in the image, and the extrinsics between the LiDAR and the camera are solved by the perspective-n-point method. Experiments show a reprojection error of 1.602 px, lower than comparable methods, verifying the method's effectiveness and accuracy.

10.
To enable continuous scanning with a projector-camera structured-light vision system, the spatial relationship between any light plane projected by the projector and the camera image plane must be computed, which in turn requires the relative pose between the camera's optical center and the projector's optical center. The camera intrinsics are first determined; four corner points on the calibration board are selected as feature points, and their extrinsics are solved with the camera intrinsics, giving their coordinates in the camera coordinate system. The projector's own parameters are then used to solve the feature points' coordinates in the projector coordinate system, from which the relative pose between the two optical centers is computed, completing the structured-light vision calibration. Using the calibrated system to measure corner-point distances on the calibration board yields a maximum relative error of 0.277%, showing that the calibration algorithm is suitable for projector-camera structured-light vision systems.

11.
Objective: The extrinsic parameters of an RGB-D camera transform point clouds from the camera coordinate system to the world coordinate system, with applications in 3D scene reconstruction, 3D measurement, robotics, and object detection. Conventional methods calibrate the extrinsics of the RGB-D color camera with a calibration object (such as a checkerboard) but do not exploit the depth information, so the procedure is hard to simplify; fully exploiting depth greatly simplifies extrinsic calibration. Moreover, color-image-based methods calibrate the color sensor, whereas most RGB-D applications are built on the depth sensor; a depth-based method can calibrate the depth sensor's pose directly. Method: The depth map is first converted to a 3D point cloud in the camera coordinate system, and planes in the cloud are detected automatically with the MELSAC method; the candidate planes are traversed and filtered using the constraints between the ground plane and the world coordinate system until the ground plane is found. The spatial relationship between the ground plane and the camera coordinate system then yields the extrinsic parameters, i.e., the transformation from camera-frame points to world-frame points. Results: Taking checkerboard-based extrinsic calibration as the reference and processing RGB-D streams captured with a PrimeSense camera, the mean roll-angle error is -1.14°, the mean pitch-angle error is 4.57°, and the mean camera-height error is 3.96 cm. Conclusion: By detecting the ground plane automatically, the method estimates the extrinsics accurately with a high degree of automation; the algorithm is also highly parallelizable and, after parallel optimization, runs in real time, so it can be applied to automatic robot pose estimation.
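Ground-plane detection rests on fitting planes to 3-D points. A minimal least-squares plane fit via SVD, plus the height/tilt readout it enables (the robust plane-sampling loop of the paper is omitted; helper names are illustrative):

```python
import numpy as np

def fit_plane(points):
    """Least-squares plane through 3-D points: returns (unit normal, centroid).
    The normal is the direction of least variance of the centered points."""
    pts = np.asarray(points, dtype=float)
    centroid = pts.mean(axis=0)
    _, _, Vt = np.linalg.svd(pts - centroid)
    return Vt[-1], centroid

def camera_height_and_tilt(normal, centroid):
    """Height above the plane and tilt of the optical (z) axis relative to
    the plane normal, for a camera at the origin of the point-cloud frame."""
    n = normal / np.linalg.norm(normal)
    height = abs(np.dot(n, centroid))                      # origin-to-plane distance
    tilt = np.degrees(np.arccos(np.clip(abs(n[2]), -1.0, 1.0)))
    return height, tilt

# Points sampled from the plane z = 2 (camera looking straight down it).
ground = [(x, y, 2.0) for x in range(-2, 3) for y in range(-2, 3)]
n, c0 = fit_plane(ground)
h, t = camera_height_and_tilt(n, c0)
```

Once the ground plane's normal and distance are known, the roll, pitch, and height of the camera follow directly, which is the essence of the extrinsic estimate above.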

12.
Comparing the common CCD camera calibration techniques, a camera calibration method based on vanishing points is adopted. Building on the dual-space geometry concept of projective geometry, and making full use of the geometric relations and properties inherent in the points, lines, and planes of a 2D checkerboard calibration image, the method first computes the horizontal and vertical focal lengths among the camera intrinsics via improved corner extraction and computation, and then refines the parameters with nonlinear optimization, solving for all the intrinsic parameters of the camera to be calibrated. The method features a simple procedure and small experimental error.

13.
Calculation of camera projection matrix, also called camera calibration, is an essential task in many computer vision and 3D data processing applications. Calculation of projection matrix using vanishing points and vanishing lines is well suited in the literature; where the intersection of parallel lines (in 3D Euclidean space) when projected on the camera image plane (by a perspective transformation) is called vanishing point and the intersection of two vanishing points (in the image plane) is called vanishing line. The aim of this paper is to propose a new formulation for easily computing the projection matrix based on three orthogonal vanishing points. It can also be used to calculate the intrinsic and extrinsic camera parameters. The proposed method reaches to a closed-form solution by considering only two feasible constraints of zero-skewness in the internal camera matrix and having two corresponding points between the world and the image. A nonlinear optimization procedure is proposed to enhance the computed camera parameters, especially when the measurement error of input parameters or the skew factor are not negligible. The proposed method has been run on real and synthetic data for more precise evaluations. The provided experimental results demonstrate the superiority of the proposed method.
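One relation underlying calibration from orthogonal vanishing points: with zero skew, unit aspect ratio, and a known principal point, two vanishing points of orthogonal directions fix the focal length. A sketch of just that step (the paper's closed-form solution covers the full projection matrix):

```python
import numpy as np

def focal_from_orthogonal_vps(v1, v2, pp):
    """Focal length from two vanishing points of orthogonal 3-D directions.
    Derivation: the back-projected rays are orthogonal, which gives
    (v1 - pp) . (v2 - pp) + f^2 = 0."""
    d = -np.dot(np.asarray(v1, dtype=float) - pp, np.asarray(v2, dtype=float) - pp)
    return np.sqrt(d) if d > 0 else None   # None: degenerate configuration

# Synthetic camera: f = 1000 px, principal point (320, 240).
pp = np.array([320.0, 240.0])
v1 = pp + np.array([1000.0, 0.0])   # vanishing point of direction (1, 0, 1)
v2 = pp - np.array([1000.0, 0.0])   # vanishing point of direction (-1, 0, 1)
f = focal_from_orthogonal_vps(v1, v2, pp)
```

A third orthogonal vanishing point over-determines the principal point as well, which is what allows the full intrinsic matrix to be recovered in closed form.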

14.
Camera model and its calibration are required in many applications for coordinate conversions between the two-dimensional image and the real three-dimensional world. Self-calibration method is usually chosen for camera calibration in uncontrolled environments because the scene geometry could be unknown. However when no reliable feature correspondences can be established or when the camera is static in relation to the majority of the scene, self-calibration method fails to work. On the other hand, object-based calibration methods are more reliable than self-calibration methods due to the existence of the object with known geometry. However, most object-based calibration methods are unable to work in uncontrolled environments because they require the geometric knowledge on calibration objects. Though in the past few years the simplest geometry required for a calibration object has been reduced to a 1D object with at least three points, it is still not easy to find such an object in an uncontrolled environment, not to mention the additional metric/motion requirement in the existing methods. Meanwhile, it is very easy to find a 1D object with two end points in most scenes. Thus, it would be very worthwhile to investigate an object-based method based on such a simple object so that it would still be possible to calibrate a camera when both self-calibration and existing object-based calibration fail to work. We propose a new camera calibration method which requires only an object with two end points, the simplest geometry that can be extracted from many real-life objects. Through observations of such a 1D object at different positions/orientations on a plane which is fixed in relation to the camera, both intrinsic (focal length) and extrinsic (rotation angles and translations) camera parameters can be calibrated using the proposed method. 
The proposed method has been tested on simulated data and real data from both controlled and uncontrolled environments, including situations where no explicit 1D calibration objects are available, e.g. from a human walking sequence. Very accurate camera calibration results have been achieved using the proposed method.

15.
In this paper, we show how to calibrate a camera and to recover the geometry and the photometry (textures) of objects from a single image. The aim of this work is to enable walkthroughs and augmented reality in a 3D model reconstructed from a single image. The calibration step does not need any calibration target and makes only four assumptions: (1) the single image contains at least two vanishing points, (2) the length (in 3D space) of one line segment (for determining the translation vector) in the image is known, (3) the principal point is the center of the image, and (4) the aspect ratio is fixed by the user. Each vanishing point is determined from a set of parallel lines. These vanishing points help determine a 3D world coordinate system R_o. After having computed the focal length, the rotation matrix and the translation vector are evaluated in turn for describing the rigid motion between R_o and the camera coordinate system R_c. Next, the reconstruction step consists in placing, rotating, scaling, and translating a rectangular 3D box that must fit at best with the potential objects within the scene as seen through the single image. With each face of a rectangular box, a texture that may contain holes due to invisible parts of certain objects is assigned. We show how the textures are extracted and how these holes are located and filled. Our method has been applied to various real images (pictures scanned from books, photographs) and synthetic images.

16.
Is it possible to calibrate a camera in the air and then use the calibration results to infer a new calibration corresponding to the embedding of the camera in another fluid (possibly water)? This problem is dealt with in the paper. It is important to avoid direct underwater calibration, as it is much more inconvenient for experiments than the usual (air) calibration by human workers. Optical laws that must be considered when using underwater cameras are investigated. Both theoretical and experimental points of view are described, and it is shown that relationships can be found between the results of air and water (or any other isotropic fluid in which the camera can be submerged) calibration. Received: 22 April 2000 / Accepted: 13 May 2002 Correspondence to: J.M. Lavest

17.
Analysis and improvement of Tsai's two-stage camera calibration method in machine vision   Cited: 6 (self-citations: 0, by others: 6)
To improve camera calibration accuracy, this paper proposes a new calibration method based on an analysis of the strengths and weaknesses of Tsai's two-stage method. Targeting area-array CCD cameras, the method builds a new combined distortion model and approaches the exact solution step by step with a two-stage iterative scheme, removing Tsai's restriction that the camera lens exhibit only radial distortion, so it applies to a variety of complex lens distortions. The improved method uses only a single planar calibration board and a single image and is easy to operate, a breakthrough compared with Tsai's two-stage method, which cannot solve for all calibration parameters directly from a planar board. Experimental results show that the improved method is fast and accurate and can be widely applied to camera calibration in machine vision research, industrial 3D measurement, and many other fields.
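A "combined distortion model" of the kind described typically augments radial terms with tangential (decentering) ones. One common Brown-style form, given as an illustration rather than the paper's exact model:

```python
def apply_distortion(x, y, k1, k2, p1, p2):
    """Brown-style combined model in normalized image coordinates:
    two radial terms (k1, k2) plus two tangential terms (p1, p2)."""
    r2 = x * x + y * y
    radial = 1.0 + k1 * r2 + k2 * r2 * r2
    xd = x * radial + 2.0 * p1 * x * y + p2 * (r2 + 2.0 * x * x)
    yd = y * radial + p1 * (r2 + 2.0 * y * y) + 2.0 * p2 * x * y
    return xd, yd
```

Setting p1 = p2 = 0 recovers a purely radial model, which is exactly the restriction the improved method lifts.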

18.
The corner-structure geometric constraints present in real scenes are first discussed, and a new camera self-calibration algorithm based on these constraints is then proposed. Initial values of the camera's focal length, translation vector, and rotation matrix are computed from the corner structures in a single perspective image, and the intrinsic and extrinsic parameters are then optimized using the geometric structures in two images. Because only one image is used to obtain the initial parameter values, the critical motion sequences that can arise during camera calibration are avoided, preventing the degeneracy they cause and improving the robustness of the calibration process. Optimizing the initial values with the structural constraints in two images further improves the accuracy of the results. Experimental results show that the algorithm is robust.

19.
Full camera calibration from a single image   Cited: 1 (self-citations: 0, by others: 1)
Existing camera calibration methods involve cumbersome procedures, which hinders the wide use of calibrated cameras. Starting from lens distortion correction and combining calibration-board information with vanishing-point constraints, a single-image camera calibration method is proposed. The lens distortion coefficients are obtained by nonlinear iteration, the camera intrinsics by a linear solution, and the extrinsics by direct computation, so the camera is fully calibrated from a single photograph of the calibration board. Experimental results show that calibration succeeds whenever the angle between the calibration board and the image plane is below 45°, with a reprojection error under 0.3 pixels.

20.
Nonlinear camera calibration based on an improved particle swarm optimization algorithm   Cited: 1 (self-citations: 0, by others: 1)
To address the drawbacks of traditional optimization algorithms in camera calibration, namely sensitivity to initial values, poor convergence, and a tendency to fall into local optima, this paper studies the application of particle swarm optimization to nonlinear camera calibration and gives concrete steps for calibrating camera parameters with an improved particle swarm optimization algorithm. Calibration experiments show that the resulting camera calibration method overcomes the shortcomings of traditional algorithms and is an effective calibration approach.
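The particle swarm optimizer at the core of the method can be sketched generically; here it minimizes a simple sphere function rather than a real calibration cost, and the hyperparameters are conventional defaults, not the paper's improved variant:

```python
import numpy as np

def pso(cost, dim, n_particles=30, iters=200, lo=-5.0, hi=5.0,
        w=0.7, c1=1.5, c2=1.5, seed=0):
    """Minimal particle swarm optimization (inertia-weight variant).
    Returns the best position found and its cost."""
    rng = np.random.default_rng(seed)
    x = rng.uniform(lo, hi, (n_particles, dim))      # particle positions
    v = np.zeros((n_particles, dim))                 # particle velocities
    pbest, pcost = x.copy(), np.array([cost(p) for p in x])
    gbest = pbest[pcost.argmin()].copy()
    for _ in range(iters):
        r1, r2 = rng.random((2, n_particles, dim))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        x = x + v
        c = np.array([cost(p) for p in x])
        better = c < pcost                           # update personal bests
        pbest[better], pcost[better] = x[better], c[better]
        gbest = pbest[pcost.argmin()].copy()         # update global best
    return gbest, pcost.min()

# Sanity check on the sphere function, whose minimum is at the origin.
best, best_cost = pso(lambda p: float(np.sum(p ** 2)), dim=3)
```

For calibration, `cost` would be the reprojection error as a function of the packed camera parameters; no gradient or careful initial guess is needed, which is the appeal over traditional nonlinear optimizers.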


Copyright © Beijing Qinyun Technology Development Co., Ltd.  京ICP备09084417号