Similar Literature
20 similar documents found (search time: 187 ms)
1.
Complete Camera Calibration from a Single Image   (Cited by 1; self-citations 0, other citations 1)
Existing camera calibration methods involve a cumbersome procedure, which hinders their widespread use. Starting from lens distortion correction and combining calibration-board information with vanishing-point constraints, a camera calibration method based on a single image is proposed. The lens distortion coefficients are obtained by nonlinear iteration, the intrinsic parameters by a linear solution, and the extrinsic parameters by direct computation, achieving complete camera calibration from a single image of a calibration board. Experimental results show that the method calibrates successfully whenever the angle between the calibration board and the image plane is less than 45°, with a reprojection error below 0.3 pixels.
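The vanishing-point constraint this abstract relies on has a compact closed form: assuming square pixels and a known principal point c, the vanishing points v1, v2 of two orthogonal scene directions satisfy (v1 − c)·(v2 − c) + f² = 0. A minimal numpy sketch (not the paper's implementation; the function name is illustrative):

```python
import numpy as np

def focal_from_vanishing_points(v1, v2, c):
    """Focal length from two vanishing points of orthogonal scene
    directions, assuming square pixels and known principal point c:
    (v1 - c) . (v2 - c) + f^2 = 0  =>  f = sqrt(-(v1 - c) . (v2 - c))."""
    d1 = np.asarray(v1, float) - np.asarray(c, float)
    d2 = np.asarray(v2, float) - np.asarray(c, float)
    dot = float(d1 @ d2)
    if dot >= 0:
        raise ValueError("vanishing points inconsistent with orthogonal directions")
    return float(np.sqrt(-dot))

# Synthetic check: with f = 800 and principal point (320, 240), the
# orthogonal directions (1, 0, 1) and (-1, 0, 1) have vanishing points
# (1120, 240) and (-480, 240).
f_est = focal_from_vanishing_points((1120.0, 240.0), (-480.0, 240.0), (320.0, 240.0))
```

A second pair of orthogonal vanishing points would over-determine f and let the principal point be estimated as well.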

2.
朱明健, 黄雪梅, 苏新勇 《测控技术》2015, 34(11): 148-151
To address the limitations of the calibration methods commonly used in digital close-range industrial photogrammetry, a new camera calibration method based on orientation circles is proposed. First, an image sequence is acquired with a circular calibration board carrying five orientation circles and preprocessed; an ellipse-detection criterion removes redundant information and identifies all marker points. Then, exploiting the special geometric relations among the orientation circles, all marker points on the board are automatically matched from object space to image space, starting from recognition of the orientation circles. Finally, the camera intrinsic parameters are solved and refined with the Gauss-Newton method to further improve measurement accuracy. The method was validated in Matlab 2010 on a set of captured images; the results show good robustness and accuracy sufficient for industrial photogrammetry.

3.
A UAV's ability to navigate autonomously in a post-disaster mine is a prerequisite for rescue missions, and autonomous pose estimation in unknown 3-D space is one of the key technologies for autonomous navigation. Existing vision-based pose estimation algorithms suffer from scale ambiguity and poor localization because a monocular camera cannot directly recover depth and is easily affected by dim underground lighting, while LiDAR-based algorithms fail because of the LiDAR's small field of view, uneven scan pattern, and the structural characteristics of mine scenes. To address these problems, an autonomous pose estimation algorithm fusing vision and LiDAR is proposed for underground post-disaster rescue UAVs. First, image data and laser point clouds are acquired by the monocular camera and LiDAR mounted on the UAV; ORB feature points are extracted uniformly from each image frame, their depth is recovered from the laser point cloud, and frame-to-frame feature matching yields a vision-based pose estimate. Second, edge and planar feature points are extracted from each point-cloud frame, and frame-to-frame matching of these features yields a LiDAR-based pose estimate. Then, the visual and LiDAR matching error functions are placed under a single pose optimization function, so that the UAV pose is estimated from the fused sensors. Finally, historical frames are introduced through a visual sliding window and a local laser map, an error function between the historical frames and the latest pose estimate is constructed, and nonlinear optimization of this error function…

4.
A simultaneous localization and mapping (SLAM) system based on RGB-D camera data is proposed to build maps from RGB-D data quickly and accurately. Robust SURF features are first extracted from the RGB image and matched using the fast library for approximate nearest neighbors (FLANN); mismatches are then rejected with a method combining an improved minimum-distance criterion and random sample consensus (RANSAC). The camera pose change between adjacent frames is solved with PnP, the global poses are refined in the back end with G2O (general graph optimization), and loop-closure detection removes the accumulated error. Experiments confirm that the method is effective and feasible and can build a dense 3-D map quickly and accurately.
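The mismatch-rejection step in pipelines like this can be sketched without OpenCV or g2o: once matched features have depth, each pair of frames gives 3-D–3-D correspondences, and RANSAC around a closed-form rigid fit (Kabsch/SVD) discards the outliers. A minimal numpy sketch under those assumptions (not the paper's implementation; names are illustrative):

```python
import numpy as np

def rigid_transform(P, Q):
    """Least-squares R, t with Q ~ P @ R.T + t (Kabsch via SVD)."""
    cp, cq = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cp).T @ (Q - cq)                     # 3x3 covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))        # avoid reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    return R, cq - R @ cp

def ransac_rigid(P, Q, iters=200, thresh=0.05, seed=0):
    """Fit on minimal 3-point samples, keep the model with the most
    inliers, then refit on the full inlier set."""
    rng = np.random.default_rng(seed)
    best = np.zeros(len(P), bool)
    for _ in range(iters):
        idx = rng.choice(len(P), 3, replace=False)
        R, t = rigid_transform(P[idx], Q[idx])
        inl = np.linalg.norm(P @ R.T + t - Q, axis=1) < thresh
        if inl.sum() > best.sum():
            best = inl
    R, t = rigid_transform(P[best], Q[best])
    return R, t, best

# Synthetic demo: 30 correspondences, the first 5 grossly mismatched.
rng = np.random.default_rng(1)
a = 0.5
R_true = np.array([[np.cos(a), -np.sin(a), 0.0],
                   [np.sin(a),  np.cos(a), 0.0],
                   [0.0,        0.0,       1.0]])
t_true = np.array([0.1, -0.2, 0.3])
P = rng.standard_normal((30, 3))
Q = P @ R_true.T + t_true
Q[:5] += 5.0                                      # corrupt five matches
R_est, t_est, inliers = ransac_rigid(P, Q)
```

In a real front end the correspondences would come from descriptor matching, and the refit pose would seed the back-end graph optimization.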

5.
Reconstructing colored 3-D targets with a Time-of-Flight (TOF) camera requires calibrating the geometric parameters of the joint CCD-TOF camera system. Building on existing calibration algorithms based on color images and TOF depth images, a calibration method using a planar checkerboard template is proposed. Color images and amplitude images of a checkerboard pattern fixed on a planar calibration template were captured at different angles, and Harris corner extraction was improved. From the conjugate relation between the checkerboard corners and their virtual image points, a camera calibration system model was established, solved with the Levenberg-Marquardt algorithm, and verified in calibration experiments. The intrinsic parameters of the TOF and CCD cameras were obtained, the relative orientation of the two camera coordinate systems was estimated from the pose relation between the image planes, and a final joint optimization yielded the rotation matrix and translation vector between the cameras. Experimental results show that the proposed algorithm streamlines the solution process, improves calibration efficiency, and achieves high accuracy.

6.
On-board sensing in intelligent driving systems typically fuses LiDAR and camera data, and accurate, stable extrinsic calibration is the basis of effective multi-source fusion. To improve the robustness of the perception system, this paper proposes a feature-matching-based LiDAR-camera calibration method. First, a sphere-center algorithm for point-cloud data and an ellipse algorithm for image data extract the 3-D point-cloud coordinates and 2-D pixel coordinates of the feature points. Next, point-pair constraints between the feature points in the LiDAR and camera coordinate systems are established, forming a nonlinear optimization problem. Finally, a nonlinear optimization algorithm refines the LiDAR-camera extrinsic parameters. Projecting the LiDAR point cloud onto the image with the optimized extrinsics gives average lateral and longitudinal errors of 3.06 and 1.19 pixels, respectively. Compared with the livox_camera_lidar_calibration method, the proposed method reduces the average projection error by 40.8% and the error variance by 56.4%, clearly outperforming it in accuracy and robustness.

7.
Point Cloud Acquisition of Cattle Bodies Against Complex Backgrounds Using Binocular Stereo Vision   (Cited by 2; self-citations 0, other citations 2)
A binocular stereo vision method is proposed for acquiring cattle-body point clouds against complex backgrounds. Because cattle do not hold still, a passive binocular stereo vision approach is used for point-cloud acquisition. During camera calibration, monocular calibration obtains the intrinsic parameters and joint calibration obtains the extrinsic parameters. Given the cattle's coat colors and the complexity of the environment, a Bayesian skin-detection algorithm segments the cattle-body image, SIFT-based feature extraction and matching establishes feature correspondences, epipolar geometry from computer vision rejects mismatched points, and the 3-D point cloud of the cattle body is computed from the camera imaging model. Experiments verify that the method acquires good cattle-body point clouds against complex backgrounds, effectively addressing camera calibration and stereo matching, the two key difficulties of binocular stereo vision.

8.
刘世龙, 马智亮 《图学学报》2022, 43(4): 633-640
To obtain the high-precision whole point cloud of a rebar skeleton needed for automatic quality inspection, an algorithm for acquiring the whole rebar-skeleton point cloud with a structured-light camera was developed. First, multiple rebar-skeleton images captured by the structured-light camera are reconstructed in 3-D to obtain the camera's dimensionless poses. Second, dimensioned poses are derived from the dimensionless ones. Then, accurate transformation matrices between these dimensioned poses are computed. Next, graph optimization refines the dimensioned poses, using the poses and their pairwise transformations, to obtain high-precision dimensioned poses. Finally, all point clouds captured by the structured-light camera are aligned with the refined poses to obtain the whole rebar-skeleton point cloud. Experiments show that the algorithm takes about 10 min to acquire the whole point cloud of the rebar skeleton of an actual precast reinforced-concrete component, with a point-cloud error of about 5 mm; it is therefore fast and accurate enough for automatic rebar-skeleton quality inspection.

9.
Objective: Calibration of the color and depth cameras in the Kinect sensor, and of the depth camera in particular, currently suffers from poor accuracy and low efficiency. Building on existing calibration algorithms based on color and disparity images, this paper proposes a fast and accurate improved algorithm. Method: The color camera is calibrated with Zhang's method; a Taylor expansion simplifies the spatial offset used to correct disparity values in the depth camera, simplifying the disparity distortion model built from the geometric relation between disparity and depth, and the Kinect sensor is calibrated with this model. Results: By capturing color and disparity images of a calibration checkerboard fixed on a flat board in different orientations, the Kinect sensor was calibrated, yielding the distortion parameters of both cameras and the rotation and translation matrices between them. Calibration took 116 s, with reprojection errors of 0.33 for the color camera and 0.798 for the depth camera. Conclusion: The experimental results show that the improved method streamlines the solution process and markedly improves calibration efficiency while maintaining calibration accuracy.
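The reprojection errors quoted here are the standard quality metric for calibrations of this kind: project the board corners through the estimated camera model and measure the RMS pixel distance to the detected corners. A minimal numpy sketch of a pinhole model with two radial distortion terms and the RMS metric (names illustrative, not the paper's code):

```python
import numpy as np

def project(points_cam, f, c, k1=0.0, k2=0.0):
    """Pinhole projection of camera-frame 3-D points with two radial
    distortion coefficients (Brown model)."""
    x = points_cam[:, :2] / points_cam[:, 2:3]      # normalized coords
    r2 = (x ** 2).sum(axis=1, keepdims=True)
    x = x * (1.0 + k1 * r2 + k2 * r2 ** 2)          # radial distortion
    return x * f + c                                # to pixels

def rms_reprojection_error(observed_px, points_cam, f, c, k1=0.0, k2=0.0):
    d = observed_px - project(points_cam, f, c, k1, k2)
    return float(np.sqrt((d ** 2).sum(axis=1).mean()))

# Distortion-free check: the point (0.2, 0.1, 2.0) with f = 500 and
# principal point (320, 240) projects to (370, 265).
pt = np.array([[0.2, 0.1, 2.0]])
uv = project(pt, 500.0, np.array([320.0, 240.0]))
err = rms_reprojection_error(uv, pt, 500.0, np.array([320.0, 240.0]))
```

Calibration itself then minimizes this error over the intrinsics, distortion coefficients, and per-view poses.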

10.
For the pose estimation problem of an uncalibrated camera, a high-precision algorithm that iterates on focal length and pose simultaneously is proposed. Existing algorithms for uncalibrated cameras solve focal length and pose separately, so focal-length accuracy is poor. The proposed algorithm first obtains initial focal length and pose parameters from an existing method, then derives a minimization function over focal length and pose on the basis of orthogonal iteration, iterating with both as initial values, and finally obtains high-precision focal length and pose parameters. Simulations show that with 10 points and a noise standard deviation of 2, the relative errors are below 1% for rotation angles, 4% for translation, and 3% for focal length; real experiments show accuracy comparable to checkerboard calibration. Compared with existing algorithms, the method provides high-precision focal length and pose estimation for uncalibrated cameras.

11.
Recently, DeMenthon and Davis (1992, 1995) proposed a method for determining the pose of a 3-D object with respect to a camera from 3-D to 2-D point correspondences. The method consists of iteratively improving the pose computed with a weak perspective camera model to converge, at the limit, to a pose estimation computed with a perspective camera model. In this paper we give an algebraic derivation of DeMenthon and Davis' method and we show that it belongs to a larger class of methods where the perspective camera model is approximated either at zero order (weak perspective) or first order (paraperspective). We describe in detail an iterative paraperspective pose computation method for both non-coplanar and coplanar object points. We analyse the convergence of these methods and we conclude that the iterative paraperspective method (proposed in this paper) has better convergence properties than the iterative weak perspective method. We introduce a simple way of taking into account the orthogonality constraint associated with the rotation matrix. We analyse the sensitivity to camera calibration errors and we define the optimal experimental setup with respect to imprecise camera calibration. We compare the results obtained with this method and with a non-linear optimization method.
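The weak-perspective iteration this abstract builds on (POSIT) admits a compact implementation: solve a scaled-orthographic pose in closed form from the current perspective-correction terms eps_i, then recompute the eps_i from the recovered depth, and repeat. A numpy sketch of the non-coplanar case under these assumptions (illustrative, not the authors' code):

```python
import numpy as np

def posit(obj_pts, img_pts, f, iters=30):
    """DeMenthon-Davis style iteration. obj_pts: (N, 3), first row is the
    reference point; img_pts: (N, 2), pixels relative to the principal
    point; needs at least four non-coplanar points."""
    M = obj_pts[1:] - obj_pts[0]          # vectors from reference point
    B = np.linalg.pinv(M)                 # (3, N-1) pseudo-inverse
    eps = np.zeros(len(M))                # eps = 0: pure weak perspective
    for _ in range(iters):
        I = B @ (img_pts[1:, 0] * (1 + eps) - img_pts[0, 0])
        J = B @ (img_pts[1:, 1] * (1 + eps) - img_pts[0, 1])
        s = np.sqrt(np.linalg.norm(I) * np.linalg.norm(J))   # = f / Z0
        i, j = I / np.linalg.norm(I), J / np.linalg.norm(J)
        k = np.cross(i, j)
        k /= np.linalg.norm(k)
        Z0 = f / s
        eps = (M @ k) / Z0                # perspective correction terms
    R = np.vstack([i, j, k])              # rows of the rotation matrix
    t = np.array([img_pts[0, 0], img_pts[0, 1], f]) * (Z0 / f)
    return R, t

# Synthetic check: recover a known pose from exact projections.
a = 0.3
R_true = np.array([[np.cos(a), -np.sin(a), 0.0],
                   [np.sin(a),  np.cos(a), 0.0],
                   [0.0,        0.0,       1.0]])
t_true = np.array([0.5, -0.3, 10.0])
obj = np.array([[0, 0, 0], [1, 0, 0], [0, 1, 0], [0, 0, 1], [1, 1, 1]], float)
cam = obj @ R_true.T + t_true
img = 800.0 * cam[:, :2] / cam[:, 2:3]
R_est, t_est = posit(obj, img, 800.0)
```

Convergence is fast when the object's extent is small relative to its distance (here 1 vs. 10); the paraperspective variant discussed in the abstract improves on this in harder configurations.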

12.
Because camera calibration methods based on conics are more robust than those based on points or lines, a new calibration method based on coplanar circles is presented. Its main advantage is a template that is simple in form and easy to make: only three or more arbitrarily placed coplanar circles are needed, with no need to locate circle centers, no matching between template and image, and no nonlinear equation systems to solve. The algorithm is described intuitively from a geometric viewpoint and proved rigorously from an algebraic one. Experiments on simulated and real images show that the algorithm is accurate, robust, and highly practical.

13.
《Real-Time Imaging》1999, 5(3): 215-230
The problem of a real-time pose estimation between a 3D scene and a single camera is a fundamental task in most 3D computer vision and robotics applications such as object tracking, visual servoing, and virtual reality. In this paper we present two fast methods for estimating the 3D pose using 2D to 3D point and line correspondences. The first method is based on the iterative use of a weak perspective camera model and forms a generalization of DeMenthon's method (1995) which consists of determining the pose from point correspondences. In this method the pose is iteratively improved with a weak perspective camera model and at convergence the computed pose corresponds to the perspective camera model. The second method is based on the iterative use of a paraperspective camera model which is a first order approximation of perspective. We describe in detail these two methods for both non-planar and planar objects. Experiments involving synthetic data as well as real range data indicate the feasibility and robustness of these two methods. We analyse the convergence of these methods and we conclude that the iterative paraperspective method has better convergence properties than the iterative weak perspective method. We also introduce a non-linear optimization method for solving the pose problem.

14.
Visual tracking, as a popular computer vision technique, has a wide range of applications, such as camera pose estimation. Conventional methods for it are mostly based on vision only, which are complex for image processing due to the use of only one sensor. This paper proposes a novel sensor fusion algorithm fusing the data from the camera and the fiber-optic gyroscope. In this system, the camera acquires images and detects the object directly at the beginning of each tracking stage, while the relative motion between the camera and the object measured by the fiber-optic gyroscope can track the object coordinate, improving the effectiveness of visual tracking. The sensor fusion algorithm can therefore overcome the drawbacks of the two individual sensors and track the object accurately. In addition, the computational complexity of the proposed algorithm is markedly lower than that of existing approaches (an 86% reduction for a 0.5-min visual tracking). Experimental results show that this visual tracking system reduces the tracking error by 6.15% compared with the conventional vision-only tracking scheme (edge detection), and the proposed sensor fusion algorithm can achieve long-term tracking with the help of bias drift suppression calibration.

15.
Most coplanar calibration algorithms determine the initial camera parameters from a single image under the assumption that the principal point is known in advance. However, the camera orientations, the shifted principal point and the noise corrupted on images have an influence on the estimated initial camera parameters under the above assumption. This paper proposes a useful method to determine the initial camera parameters for coplanar calibration. The proposed method can determine the initial camera parameters from the single image, wherein the principal point is considered as a parameter. In our experiments, both synthetic and real images are used. The experimental results show that the proposed method provides both stable initial camera parameters and noise robustness for changes of camera orientations, noise levels and shifts of principal point.

16.
Monocular Vision Localization Based on Two Coplanar Points and One Line   (Cited by 1; self-citations 1, other citations 0)
Monocular vision localization from mixed point and line features is studied. Given two feature points and one feature line that are coplanar in the object coordinate system, the pose parameters between camera and object are computed from their correspondences on the image plane. According to the geometric configuration of the three features, the solution procedure is given for two separate cases, and the problem is finally reduced to solving a quadratic equation. Localization experiments on real workpieces verify the effectiveness of the method, which provides a new approach to workpiece localization with monocular vision.

17.
Objective: The extrinsic parameters of an RGB-D camera transform point clouds from the camera coordinate system to the world coordinate system and are used in 3-D scene reconstruction, 3-D measurement, robotics, object detection, and other fields. Typical methods calibrate the extrinsics of the RGB-D color camera with a calibration object (such as a checkerboard) but do not exploit the depth information, so the procedure is hard to simplify; making full use of depth information greatly simplifies extrinsic calibration. Moreover, most RGB-D applications are built around the depth sensor, and a depth-based method can calibrate the depth sensor's pose directly, whereas color-image-based methods cannot. Method: The depth map is first converted into a 3-D point cloud in the camera coordinate system; planes are detected automatically in the point cloud with MLESAC; using the constraint between the ground plane and the world coordinate system, the candidate planes are traversed and filtered until the ground plane is found; and from the spatial relation between the ground plane and the camera coordinate system, the camera extrinsics, i.e., the transformation between points in the camera and world coordinate systems, are computed. Results: With checkerboard-based extrinsic calibration as the reference, processing RGB-D video streams captured by a PrimeSense camera gave an average roll error of -1.14°, an average pitch error of 4.57°, and an average camera-height error of 3.96 cm. Conclusion: By detecting the ground plane automatically, the method accurately estimates the camera extrinsics with a high degree of automation; it is also highly parallel and, after parallel optimization, runs in real time, so it can be applied to automatic robot pose estimation.

18.
Camera model and its calibration are required in many applications for coordinate conversions between the two-dimensional image and the real three-dimensional world. Self-calibration method is usually chosen for camera calibration in uncontrolled environments because the scene geometry could be unknown. However when no reliable feature correspondences can be established or when the camera is static in relation to the majority of the scene, self-calibration method fails to work. On the other hand, object-based calibration methods are more reliable than self-calibration methods due to the existence of the object with known geometry. However, most object-based calibration methods are unable to work in uncontrolled environments because they require the geometric knowledge on calibration objects. Though in the past few years the simplest geometry required for a calibration object has been reduced to a 1D object with at least three points, it is still not easy to find such an object in an uncontrolled environment, not to mention the additional metric/motion requirement in the existing methods. Meanwhile, it is very easy to find a 1D object with two end points in most scenes. Thus, it would be very worthwhile to investigate an object-based method based on such a simple object so that it would still be possible to calibrate a camera when both self-calibration and existing object-based calibration fail to work. We propose a new camera calibration method which requires only an object with two end points, the simplest geometry that can be extracted from many real-life objects. Through observations of such a 1D object at different positions/orientations on a plane which is fixed in relation to the camera, both intrinsic (focal length) and extrinsic (rotation angles and translations) camera parameters can be calibrated using the proposed method. 
The proposed method has been tested on simulated data and real data from both controlled and uncontrolled environments, including situations where no explicit 1D calibration objects are available, e.g. from a human walking sequence. Very accurate camera calibration results have been achieved using the proposed method.

19.
A vision sensor for the direct pose determination of a parallel manipulator can extend the manipulator's capabilities in many aspects. The existing approaches to a solution to the pose problem fall into two distinct categories: analytical solutions and iterative solutions. In this paper, we present an efficient two-step solution for the pose determination of a parallel manipulator by combining the advantages of the above two methods. Four points in an image, vertices of a parallelogram in object space, and the correspondences among these four coplanar points are utilized in the proposed method. In the first step, four coplanar vertices of a parallelogram, which lie on the parallel manipulator's surface, are detected. Subsequently, the pose of the parallel manipulator can be determined with two affine invariants. In the second step, an iterative method is introduced, and the results of the first step are passed to the current step to be used as initial values of the iterative process. Also, an error matrix is established through seven error functions, which are produced by the depth estimation and the co-planarity of the four vertices. The proposed method has been experimentally validated with a calibration board and the Pan and Tilt platform, a parallel manipulator developed by Googol Technology Ltd., and compared with three other existing methods. Experimental results demonstrate that the proposed method has the advantage of lower computational cost. The accuracy and stability, which are the main concerns in real-time applications, are also improved.

20.
Autonomous robot calibration using vision technology   (Cited by 2; self-citations 0, other citations 2)
Yan, Hanqi 《Robotics and Computer-Integrated Manufacturing》2007, 23(4): 436-446
Unlike the traditional robot calibration methods, which need external expensive calibration apparatus and elaborate setups to measure the 3D feature points in the reference frame, a vision-based self-calibration method for a serial robot manipulator, which only requires a ground-truth scale in the reference frame, is proposed in this paper. The proposed algorithm assumes that the camera is rigidly attached to the robot end-effector, which makes it possible to obtain the pose of the manipulator with the pose of the camera. By designing a manipulator movement trajectory, the camera poses can be estimated up to a scale factor at each configuration with the factorization method, where a nonlinear least-square algorithm is applied to improve its robustness. An efficient approach is proposed to estimate this scale factor. The great advantage of this self-calibration method is that only image sequences of a calibration object and a ground-truth length are needed, which makes the robot calibration procedure more autonomous in a dynamic manufacturing environment. Simulations and experimental studies on a PUMA 560 robot reveal the convenience and effectiveness of the proposed robot self-calibration approach.


Copyright © Beijing Qinyun Technology Development Co., Ltd. (北京勤云科技发展有限公司)  京ICP备09084417号