Similar Documents
20 similar documents found
1.
To build a fast and convenient monocular-vision 3D scanning system, a high-precision scene-modeling method based on monocular geometric projection is proposed, and a low-cost, high-precision 3D scanning system is developed. First, the image coordinates of planar calibration points are acquired and converted by projective transformation into 3D coordinates in the camera frame, from which the 3D plane equations of the translation stage surface and the base are established. Next, by moving the translation stage and computing the spatial coordinates of corresponding calibration points, the translation vector of the stage is solved, and the laser plane is determined from the laser stripes falling on the stage and the base. Finally, the centre points of the laser stripe are extracted from the image and transformed into 3D point-cloud data of the object surface. Experimental results show that the error of the plane equations obtained by projective transformation is below 0.2%, and the scanning error is below 0.05 mm.
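The final step above (turning detected laser-stripe centre pixels into surface points) comes down to intersecting back-projected camera rays with the calibrated laser plane. A minimal sketch of that ray-plane intersection, assuming an ideal pinhole camera with known intrinsics `K`; all names and numbers are illustrative, not from the paper:

```python
import numpy as np

def intersect_ray_plane(pixel, K, plane):
    """Back-project an image pixel through the camera centre and
    intersect the ray with a plane n.x + d = 0 (camera coordinates).

    pixel : (u, v) image coordinates of the laser-stripe centre
    K     : 3x3 intrinsic matrix
    plane : (n, d) with normal n (3,) and offset d
    """
    n, d = plane
    # Ray direction in camera coordinates (camera centre is the origin)
    ray = np.linalg.inv(K) @ np.array([pixel[0], pixel[1], 1.0])
    t = -d / (n @ ray)          # solve n.(t * ray) + d = 0 for t
    return t * ray              # 3D point on the laser plane

# Toy example: laser plane z = 2 (n = [0,0,1], d = -2), unit intrinsics
K = np.eye(3)
p = intersect_ray_plane((0.5, 0.25), K, (np.array([0.0, 0.0, 1.0]), -2.0))
# p = [1.0, 0.5, 2.0]
```

With real data, `K` would come from a prior intrinsic calibration and the plane from the fitted laser-plane equation described in the abstract.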

2.
Objective: The extrinsic parameters of an RGB-D camera transform point clouds from the camera coordinate frame to the world frame, with applications in 3D scene reconstruction, 3D measurement, robotics, and object detection. Conventional methods calibrate the extrinsics of the RGB-D colour camera with a calibration object (such as a checkerboard) but do not exploit the depth information, so the procedure is hard to simplify; making full use of depth greatly simplifies extrinsic calibration. Moreover, colour-image-based methods calibrate the colour camera, whereas most RGB-D applications rely on the depth sensor; a depth-based method can calibrate the pose of the depth sensor directly. Method: The depth image is first converted into a 3D point cloud in the camera frame, and planes in the point cloud are detected automatically with MLESAC. Using the constraint between the ground plane and the world frame, the candidate planes are traversed and filtered until the ground plane is found; from the spatial relation between the ground plane and the camera frame, the extrinsic parameters, i.e., the transformation matrix between points in the camera frame and points in the world frame, are computed. Results: With checkerboard-based extrinsic calibration as the baseline, on RGB-D streams captured with a PrimeSense camera the mean roll error is -1.14°, the mean pitch error is 4.57°, and the mean camera-height error is 3.96 cm. Conclusion: By detecting the ground plane automatically, the method estimates the extrinsic parameters accurately and is highly automated; it is also highly parallelizable, and after parallel optimization it runs in real time, making it applicable to automatic robot pose estimation.
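Once the ground plane is found (the abstract detects it with a robust MLESAC-style fit), the extrinsics follow from aligning the plane normal with the world z-axis. A hedged sketch of just that last step, assuming the plane is already known in camera coordinates; the construction below is a generic one, not the paper's exact formulation:

```python
import numpy as np

def extrinsics_from_ground_plane(n, d):
    """Given the ground plane n.x + d = 0 in camera coordinates,
    return a rotation R mapping camera points into a world frame
    whose z-axis is the plane normal, plus the camera height."""
    z = n / np.linalg.norm(n)
    # Any vector not parallel to z seeds an orthonormal basis
    seed = np.array([1.0, 0.0, 0.0]) if abs(z[0]) < 0.9 else np.array([0.0, 1.0, 0.0])
    x = np.cross(seed, z); x /= np.linalg.norm(x)
    y = np.cross(z, x)
    R = np.stack([x, y, z])               # rows: world axes in camera coords
    height = abs(d) / np.linalg.norm(n)   # camera-centre-to-plane distance
    return R, height

R, h = extrinsics_from_ground_plane(np.array([0.0, 0.0, 1.0]), -1.5)
```

The yaw about the plane normal is unobservable from the plane alone, which is why the paper reports roll, pitch, and height errors only.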

3.
《Advanced Robotics》2013,27(3):273-294
In order to search for and rescue victims in rubble effectively, a three-dimensional (3D) map of the rubble is required. As a part of the national project on rescue robot systems, we are investigating a method for constructing a 3D map of rubble by teleoperated mobile robots. In this paper, we developed a laser range finder for 3D map building in rubble. The developed range finder consists of a ring laser beam module and an omnivision camera. The ring laser beam is generated by using a conical mirror and is radiated toward the interior wall of the rubble around the mobile robot on which the laser range finder is mounted. The omnivision camera with a hyperbolic mirror captures the reflected image of the ring laser on the rubble. Based on the triangulation principle, cross-section range data are obtained. By continuing this measurement as the mobile robot moves inside the rubble, a 3D map is obtained. We constructed a geometric model of the laser range finder for error analysis and obtained an optimal dimension of the laser range finder. Based on this analysis, we prototyped a range finder. Experimental results show that the actual measurement errors match the theoretical values well. Using the prototyped laser range finder, a 3D map of rubble was built with reasonable accuracy.

4.
Stereovision is an effective technique for using CCD video cameras to determine the 3D position of a target object from two or more simultaneous views of the scene. Camera calibration is a central issue in finding the position of objects in a stereovision system. It is usually carried out by calibrating each camera independently, and then applying a geometric transformation of the external parameters to find the geometry of the stereo setting. After calibration, the distance of various target objects in the scene can be calculated from the CCD video cameras, and recovering the 3D structure from 2D images becomes simpler. However, the process of camera calibration is complicated. Based on the ideal pinhole model of a camera, we describe formulas to calculate the intrinsic parameters that specify the camera characteristics, and the extrinsic parameters that describe the spatial relationship between the camera and the world coordinate system. A simple camera calibration method for our CCD video cameras and the corresponding experimental results are also given. This work was presented in part at the 7th International Symposium on Artificial Life and Robotics, Oita, Japan, January 16–18, 2002.
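The pinhole decomposition described above amounts to x ~ K(RX + t): the intrinsics K map camera coordinates to pixels, and the extrinsics (R, t) map world coordinates to camera coordinates. A minimal sketch with illustrative numbers (not the paper's calibration data):

```python
import numpy as np

def project(X, K, R, t):
    """Project world points X (N,3) to pixels via x ~ K (R X + t)."""
    Xc = (R @ X.T).T + t          # world -> camera coordinates
    x = (K @ Xc.T).T              # camera -> homogeneous image coords
    return x[:, :2] / x[:, 2:3]   # perspective division

# Illustrative intrinsics: focal 800 px, principal point (320, 240)
K = np.array([[800.0,   0.0, 320.0],
              [  0.0, 800.0, 240.0],
              [  0.0,   0.0,   1.0]])
R, t = np.eye(3), np.array([0.0, 0.0, 4.0])   # camera 4 units back
uv = project(np.array([[1.0, 0.5, 0.0]]), K, R, t)
# point (1, 0.5, 0) at depth 4 -> (320 + 800*1/4, 240 + 800*0.5/4) = (520, 340)
```

Calibration is the inverse problem: recovering K, R, and t from known world points and their measured pixel positions.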

5.
For the extrinsic calibration of a pinhole camera and a 3D LiDAR, a new method is proposed: using the correspondence of the checkerboard plane in the camera frame and the LiDAR frame, extrinsic calibration is recast as solving for rotation and scaling matrices in 3D space. Each frame of data provides four constraints, so as few as three frames suffice to solve for the extrinsic matrix. The method is conceptually simple and gives an intuitive explanation of the degenerate cases in which it fails. Simulations and real-data experiments show that the proposed method yields a high-precision extrinsic matrix and still calibrates well when only a few sample frames are available.

6.
Traditional camera calibration methods usually require an elaborate 3D calibration block or a high-precision 3D control field, which limits their practical use. This paper uses a planar control grid as the calibration target. The interior orientation elements are determined from the ideal camera model; initial values of the exterior orientation elements are obtained by 2D direct linear transformation and decomposition of the collinearity equations. Grid lines in the calibration images are detected with an improved Hough transform and refined by least-squares line fitting, and the image coordinates of the grid points are obtained as the intersections of the fitted lines. Finally, the camera is calibrated precisely by self-calibrating bundle adjustment. Experiments on real image data show calibration accuracies of about 0.2 pixel for the principal point and 0.3 pixel for the focal length, which meets the requirements of high-precision close-range 3D measurement.
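The grid-point extraction step above (least-squares line fitting followed by pairwise intersection) can be sketched with homogeneous line coordinates; the Hough detection itself is omitted here and the sample points are illustrative:

```python
import numpy as np

def fit_line(pts):
    """Total-least-squares line a*x + b*y + c = 0 through 2D points."""
    pts = np.asarray(pts, float)
    centroid = pts.mean(axis=0)
    # The line normal is the right singular vector of the centred
    # points with the smallest singular value
    _, _, vt = np.linalg.svd(pts - centroid)
    a, b = vt[-1]
    c = -(a * centroid[0] + b * centroid[1])
    return np.array([a, b, c])

def intersect(l1, l2):
    """Intersection of two homogeneous lines via the cross product."""
    p = np.cross(l1, l2)
    return p[:2] / p[2]

h = fit_line([(0, 1), (1, 1), (2, 1)])   # horizontal grid line y = 1
v = fit_line([(2, 0), (2, 1), (2, 3)])   # vertical grid line x = 2
# intersect(h, v) -> (2.0, 1.0)
```

Because each grid point is the intersection of two lines fitted over many edge pixels, its location is less noise-sensitive than a single detected corner, which is what makes the sub-pixel accuracies quoted above plausible.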

7.
This paper addresses the problem of calibrating camera parameters using variational methods. One problem addressed is the severe lens distortion in low-cost cameras. For many computer vision algorithms aiming at reconstructing reliable representations of 3D scenes, the camera distortion effects will lead to inaccurate 3D reconstructions and geometrical measurements if not accounted for. A second problem is the color calibration problem caused by variations in camera responses that result in different color measurements and affect the algorithms that depend on these measurements. We also address the extrinsic camera calibration that estimates relative poses and orientations of multiple cameras in the system, and the intrinsic camera calibration that estimates focal lengths and the skew parameters of the cameras. To address these calibration problems, we present multiview stereo techniques based on variational methods that utilize partial and ordinary differential equations. Our approach can also be considered as a coordinated refinement of camera calibration parameters. To reduce the computational complexity of such algorithms, we utilize prior knowledge on the calibration object, making a piecewise smooth surface assumption, and evolve the pose, orientation, and scale parameters of such a 3D model object without requiring a 2D feature extraction from camera views. We derive the evolution equations for the distortion coefficients, the color calibration parameters, and the extrinsic and intrinsic parameters of the cameras, and present experimental results.

8.
A unified approach to the linear camera calibration problem
The camera calibration process relates camera system measurements (pixels) to known reference points in a three-dimensional world coordinate system. The calibration process is viewed as consisting of two independent phases: the first is removing geometrical camera distortion so that rectangular calibration grids are straightened in the image plane, and the second is using a linear affine transformation as a map between the rectified camera coordinates and the geometrically projected coordinates on the image plane of known reference points. Phase one is camera-dependent, and in some systems may be unnecessary. Phase two is concerned with a generic model that includes 12 extrinsic variables and up to five intrinsic parameters. General methods handling additional constraints on the intrinsic variables in a manner consistent with explicit satisfaction of all six constraints on the orthogonal rotation matrix are presented. The use of coplanar and noncoplanar calibration points is described.
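Phase two's linear map is commonly estimated with the direct linear transformation (DLT), which stacks two equations per 3D-2D correspondence and takes the null vector of the resulting system. A generic sketch of that step, not the paper's exact formulation (which additionally enforces the rotation-matrix constraints):

```python
import numpy as np

def dlt(world, image):
    """Estimate the 3x4 projection matrix P (up to scale) from 3D-2D
    correspondences: each pair contributes two rows of A with A p = 0,
    and p is the right singular vector of the smallest singular value."""
    rows = []
    for (X, Y, Z), (u, v) in zip(world, image):
        rows.append([X, Y, Z, 1, 0, 0, 0, 0, -u*X, -u*Y, -u*Z, -u])
        rows.append([0, 0, 0, 0, X, Y, Z, 1, -v*X, -v*Y, -v*Z, -v])
    _, _, vt = np.linalg.svd(np.array(rows, float))
    return vt[-1].reshape(3, 4)   # null-space vector = flattened P
```

At least six non-coplanar reference points are needed; with noisy data the same system is solved in a least-squares sense and the extra rows average the noise down.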

9.
Camera model and its calibration are required in many applications for coordinate conversions between the two-dimensional image and the real three-dimensional world. Self-calibration method is usually chosen for camera calibration in uncontrolled environments because the scene geometry could be unknown. However when no reliable feature correspondences can be established or when the camera is static in relation to the majority of the scene, self-calibration method fails to work. On the other hand, object-based calibration methods are more reliable than self-calibration methods due to the existence of the object with known geometry. However, most object-based calibration methods are unable to work in uncontrolled environments because they require the geometric knowledge on calibration objects. Though in the past few years the simplest geometry required for a calibration object has been reduced to a 1D object with at least three points, it is still not easy to find such an object in an uncontrolled environment, not to mention the additional metric/motion requirement in the existing methods. Meanwhile, it is very easy to find a 1D object with two end points in most scenes. Thus, it would be very worthwhile to investigate an object-based method based on such a simple object so that it would still be possible to calibrate a camera when both self-calibration and existing object-based calibration fail to work. We propose a new camera calibration method which requires only an object with two end points, the simplest geometry that can be extracted from many real-life objects. Through observations of such a 1D object at different positions/orientations on a plane which is fixed in relation to the camera, both intrinsic (focal length) and extrinsic (rotation angles and translations) camera parameters can be calibrated using the proposed method. 
The proposed method has been tested on simulated data and real data from both controlled and uncontrolled environments, including situations where no explicit 1D calibration objects are available, e.g. from a human walking sequence. Very accurate camera calibration results have been achieved using the proposed method.

10.
Objective: Camera extrinsic calibration is a key step in application areas such as ADAS (advanced driver-assistance systems). Traditional extrinsic calibration methods usually depend on specific scenes and specific targets and cannot be run dynamically in the field in real time. Methods that combine SLAM (simultaneous localization and mapping) or VIO (visual-inertial odometry) depend on point-feature matching and are often not accurate enough. For ADAS applications, this paper proposes an extrinsic self-correction method based on camera-to-map matching. Method: Lane lines are first detected and extracted from the image by deep learning; after data filtering and post-processing, they serve as the input to an optimization problem. Lane-line points are associated by nearest-neighbour search, and a reprojection error is defined in the image plane. Gradient descent then iteratively solves for the optimal extrinsic matrix that minimizes the reprojection matching error between the detected lane lines and the ground-truth map lane lines in the image plane. Results: On test vehicles driven on open roads, the method converges to the correct extrinsics after several iterations, with rotation error below 0.2° and translation error below 0.2 m, a clear accuracy advantage over vanishing-point- and VIO-based calibration (2.2° and 0.3 m). When the extrinsics change dynamically, the method quickly converges to the new values. Conclusion: The method does not depend on specific scenes, supports real-time iterative optimization of the extrinsics, and effectively improves extrinsic accuracy to a level that meets ADAS requirements.

11.
Calculation of the camera projection matrix, also called camera calibration, is an essential task in many computer vision and 3D data processing applications. Calculation of the projection matrix using vanishing points and vanishing lines is well established in the literature: the intersection of parallel lines (in 3D Euclidean space) when projected onto the camera image plane (by a perspective transformation) is called a vanishing point, and the line through two vanishing points (in the image plane) is called a vanishing line. The aim of this paper is to propose a new formulation for easily computing the projection matrix based on three orthogonal vanishing points. It can also be used to calculate the intrinsic and extrinsic camera parameters. The proposed method reaches a closed-form solution by considering only two feasible constraints: zero skew in the internal camera matrix and two corresponding points between the world and the image. A nonlinear optimization procedure is proposed to refine the computed camera parameters, especially when the measurement error of the input parameters or the skew factor is not negligible. The proposed method has been run on real and synthetic data for more precise evaluation. The experimental results demonstrate the superiority of the proposed method.
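For the zero-skew, square-pixel case, two standard facts underlie formulations of this kind: the principal point is the orthocenter of the triangle formed by three orthogonal vanishing points, and f² = -(v1 - p)·(v2 - p) for any orthogonal pair. A sketch of both (a generic construction, not this paper's closed form):

```python
import numpy as np

def orthocenter(v1, v2, v3):
    """Principal point as the orthocenter of the triangle of three
    orthogonal vanishing points (zero skew, square pixels). Solves the
    two altitude equations (x - v1).(v3 - v2) = 0, (x - v2).(v1 - v3) = 0."""
    A = np.array([v3 - v2, v1 - v3], float)
    b = np.array([(v3 - v2) @ v1, (v1 - v3) @ v2])
    return np.linalg.solve(A, b)

def focal_from_vps(v1, v2, p):
    """Focal length from an orthogonal pair: f^2 = -(v1 - p).(v2 - p)."""
    return float(np.sqrt(-(np.asarray(v1) - p) @ (np.asarray(v2) - p)))
```

Both relations follow from v_i ~ K r_i with orthonormal rotation columns r_i, so the recovered parameters are exact for noise-free vanishing points; with noise, a refinement step like the paper's nonlinear optimization is needed.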

12.
For multiple cameras distributed uniformly around a scene, the intrinsic parameters of each camera are first obtained independently with Zhang's planar-target calibration method. For the extrinsics, a new multi-camera extrinsic calibration method is proposed: building on the ICP (iterative closest point) algorithm and incorporating the constrained-optimization idea of the VR (view registration) algorithm, the relative pose of each pair of adjacent cameras is computed by freely moving a planar target between them, and all cameras are finally unified into a single world coordinate frame. The calibration procedure is simple to operate and easy to implement. Experimental results show that the method meets the accuracy requirements of subsequent 3D reconstruction.

13.
The laser and camera calibration problem has been the primary issue in the subject of fusion of 3D space and 2D image information. While the solution for the calibration is mathematically well defined as closed-form by least squares techniques, reliability of the solution can be degraded by uncertainties in measurements. To enhance the reliability of calibration results, we adopted the EM (Expectation-Maximization) algorithm as a noise removal process in the sensor system. The simulation and real experimental results show the effectiveness of our approaches.

14.
胡钊政  赵斌  李娜  夏克文 《自动化学报》2015,41(11):1951-1960
Cameras and laser rangefinders (LRF) are widely used in robotics, mobile road-survey vehicles, autonomous driving, and related fields, where extrinsic calibration is the first and most critical step in fusing image and LIDAR data. This paper proposes a new minimal-solution extrinsic calibration algorithm: the camera and the laser need only observe a calibration checkerboard three times. The concept of a virtual trihedron is introduced for the first time and used to construct a perspective-three-point (P3P) problem that yields the coordinate transformation between the laser and the camera. Unlike the P3P problem constructed in the dual 3D space in the literature, the P3P problem here is constructed directly in the original 3D space, which has a more intuitive geometric meaning and is easier to solve and analyse. Since the P3P problem can have up to eight solutions, a planar-object image-region constraint is also proposed, for the first time, to select the true solution from the multiple candidates, making the minimal-solution calibration more practical and flexible. The algorithm is tested on both simulated and real data. The results show that, with identical inputs, it outperforms the methods in the literature, and the proposed image-region constraint reliably recovers the true solution, greatly improving the practicality and flexibility of the minimal-solution approach.

15.
Algorithms for coplanar camera calibration
Coplanar camera calibration is the process of determining the extrinsic and intrinsic camera parameters from a given set of image and world points, when the world points lie on a two-dimensional plane. Noncoplanar calibration, on the other hand, involves world points that do not lie on a plane. While optimal solutions for both the camera-calibration procedures can be obtained by solving a set of constrained nonlinear optimization problems, there are significant structural differences between the two formulations. We investigate the computational and algorithmic implications of such underlying differences, and provide a set of efficient algorithms that are specifically tailored for the coplanar case. More specifically, we offer the following: (1) four algorithms for coplanar calibration that use linear or iterative linear methods to solve the underlying nonlinear optimization problem, and produce sub-optimal solutions. These algorithms are motivated by their computational efficiency and are useful for real-time low-cost systems. (2) Two optimal solutions for coplanar calibration, including one novel nonlinear algorithm. A constraint for the optimal estimation of extrinsic parameters is also given. (3) A Lyapunov type convergence analysis for the new nonlinear algorithm. We test the validity and performance of the calibration procedures with both synthetic and real images. The results consistently show significant improvements over less complete camera models. Received: 30 September 1998 / Accepted: 12 January 2000

16.
In this paper, we consider the problem of motion and shape estimation using a camera and a laser range finder. The object considered is a plane which is undergoing a Riccati motion. The camera observes features on the moving plane perspectively. The range-finder camera is capable of obtaining the range of the plane along a given "laser plane", which can either be kept fixed or can be altered in time. Finally, we assume that the identification is carried out as soon as the visual and range data is available, or after a suitable temporal integration. In each of these various cases, we derive to what extent the motion and shape parameters are identifiable and characterize the results as an orbit of a suitable group. The paper does not emphasize any specific choice of algorithm.

17.
Segment Based Camera Calibration
The basic idea of calibrating a camera system in previous approaches is to determine camera parameters by using a set of known 3D points as the calibration reference. In this paper, we present a camera calibration method in which the camera parameters are determined by a set of 3D lines. A set of constraints on the camera parameters is derived in terms of perspective line mapping. From these constraints, the same perspective transformation matrix as that for point mapping can be computed linearly. The minimum number of calibration lines is 6. This result generalizes that of Liu, Huang and Faugeras [12] for camera location determination, in which at least 8 line correspondences are required for linear computation of the camera location. Since line segments in an image can be located more easily and accurately than points, using lines as the calibration reference tends to ease the computation in image preprocessing and to improve calibration accuracy. Experimental results on the calibration along with stereo reconstruction are reported.

18.
In recent years, calibration of combined range-sensor and camera systems has been widely studied and applied in environment perception for autonomous vehicles, with plane-feature-based methods widely adopted for their simplicity. However, most current methods rely on point matching, which is error-prone and not robust. This paper proposes a relative-pose estimation method for a range sensor and a camera based on coplanar circles. The method uses a calibration board containing two coplanar circles, from which the pose of the board relative to the camera and relative to the range sensor can be obtained. Moving the board yields multiple data sets; from the computed coordinates of the two circle centres in the range-sensor and camera frames, the reprojection error and the error between corresponding 3D points are jointly optimized to obtain the pose between the range sensor and the camera. The method requires no feature-point matching and exploits projective invariance to obtain the poses of the camera and the 3D range sensor. Simulations and real-data experiments show that the method is robust to noise and yields accurate results.

19.
The mathematical model of a structured-light 3D measurement system is improved by incorporating the intrinsic and extrinsic parameters of the projector, and a new projector calibration method is adopted on this basis. A red/blue checkerboard replaces the black/white one to improve the contrast of the monochrome camera in dark regions. Treating the projector as an inverse camera unifies the calibration of the camera and the projector. The camera and the projector were calibrated separately in a 3ds Max simulation environment, with a relative calibration error below 0.32%. Reconstruction experiments with the calibrated structured-light 3D measurement system show a measurement error below 0.136 mm.

20.
《Graphical Models》2001,63(5):277-303
Camera calibration is the estimation of parameters (both intrinsic and extrinsic) associated with a camera being used for imaging. Given the world coordinates of a number of precisely placed points in a 3D space, camera calibration requires the measurement of the 2D projection of those scene points on the image plane. While the coordinates of the points in space can be known precisely, the image coordinates that are determined from the digital image are often inaccurate and hence noisy. In this paper, we look at the statistics of the behavior of the camera calibration parameters, which are important for stereo matching, when the image plane measurements are corrupted by noise. We derive analytically the behavior of the camera calibration matrix under noisy conditions and further show that the elements of the camera calibration matrix have a Gaussian distribution if the noise introduced into the measurement system is Gaussian. Under certain approximations we derive relationships between the camera calibration parameters and the noisy camera calibration matrix and compare it with Monte Carlo simulations.
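The Gaussian-in, Gaussian-out result above can be illustrated with a toy Monte Carlo run: when the estimator is linear in the noisy image measurement, the estimated parameter is exactly Gaussian. The single-point focal-length estimator below is a deliberately simplified stand-in for the full calibration matrix, with all numbers illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy pinhole setup: a point at lateral offset X, depth Z projects to
# u = f * X / Z, so f can be re-estimated from the measured u
f_true, X, Z = 800.0, 1.0, 4.0
u_true = f_true * X / Z

# Corrupt the pixel measurement with Gaussian noise, re-estimate f
sigma = 0.5                                        # pixel noise std
u_noisy = u_true + sigma * rng.normal(size=100_000)
f_est = u_noisy * Z / X          # linear estimator -> Gaussian estimates

print(f_est.mean(), f_est.std())  # roughly 800 and sigma*Z/X = 2.0
```

For nonlinear estimators (as in real calibration), the Gaussian distribution of the parameters holds only approximately, which is why the paper compares its analytic derivation against Monte Carlo simulations.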
