Joint calibration method of camera and LiDAR based on trapezoidal checkerboard
Cite this article: JIA Ziyong, REN Guoquan, LI Dongwei, CHENG Ziyang. Joint calibration method of camera and LiDAR based on trapezoidal checkerboard[J]. Journal of Computer Applications, 2017, 37(7): 2062-2066.
Authors: JIA Ziyong, REN Guoquan, LI Dongwei, CHENG Ziyang
Affiliation: Department of Vehicles and Electrical Engineering, Ordnance Engineering College, Shijiazhuang Hebei 050003, China
Abstract: To address the need to fuse Light Detection And Ranging (LiDAR) data with camera images when an Unmanned Ground Vehicle (UGV) detects and autonomously follows a target vehicle, a joint calibration method for the LiDAR and the camera based on a trapezoidal checkerboard was proposed. Firstly, the mounting pitch angle and mounting height of the LiDAR were obtained from the LiDAR scan across the trapezoidal calibration board. Secondly, the external parameters of the camera relative to the vehicle body were calibrated through the black-and-white checkerboard on the trapezoidal board. Then, the two sensors were jointly calibrated by using the correspondence between LiDAR data points and image pixel coordinates. Finally, by combining the calibration results of the LiDAR and the camera, pixel-level fusion of the LiDAR data and the camera image was carried out. The whole calibration requires only that the trapezoidal board be placed in front of the vehicle and that an image and a LiDAR scan be collected once, after which both types of sensors are calibrated. Experimental results show that the proposed method has high calibration accuracy, with an average position deviation of 3.5691 pixels (equivalent to 13 μm). The fusion of the LiDAR data and the visual image shows that the method effectively achieves the spatial alignment of the LiDAR and the camera, fuses the two data sources well, and is strongly robust to moving objects.
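The first step described in the abstract recovers the LiDAR's mounting pitch angle and height from its scan across the trapezoidal board. The abstract does not give the exact geometry, so the sketch below only illustrates the basic relations such a board provides, under the assumption that the board is an upright isosceles trapezoid of known bottom width, top width and height standing on level ground; unlike the paper, which recovers both quantities from the scan, the sketch takes the mounting height as given and derives the pitch. All names and numbers are illustrative.

```python
import math

# Assumed dimensions of the trapezoidal board (illustrative, not from the paper).
BOTTOM_WIDTH = 0.80   # m, width of the board's bottom edge
TOP_WIDTH    = 0.40   # m, width of the board's top edge
BOARD_HEIGHT = 0.60   # m, vertical distance between the bottom and top edges


def chord_height(chord_len):
    """Height of the LiDAR scan chord above the board's bottom edge.

    Because the board's width tapers linearly from BOTTOM_WIDTH to TOP_WIDTH,
    the measured chord length fixes the height at which the scan plane
    crosses the board.
    """
    taper = (BOTTOM_WIDTH - TOP_WIDTH) / BOARD_HEIGHT  # width lost per metre of height
    return (BOTTOM_WIDTH - chord_len) / taper


def pitch_from_mount_height(mount_height, chord_len, beam_range):
    """Downward pitch angle (rad) of the scan plane.

    Assumes the board stands vertically on level ground and the beam meets it
    at straight-line range `beam_range`, so that
        mount_height = chord_height + beam_range * sin(pitch).
    """
    h = chord_height(chord_len)
    return math.asin((mount_height - h) / beam_range)


if __name__ == "__main__":
    # Example: a 0.60 m chord measured 5.0 m away, LiDAR mounted 1.2 m high.
    print(math.degrees(pitch_from_mount_height(1.2, 0.60, 5.0)))  # ~10.4 degrees
```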

Keywords: Light Detection And Ranging (LiDAR); camera; joint calibration; Unmanned Ground Vehicle (UGV); information fusion
Received: 2016-12-21
Revised: 2017-03-09
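The final, pixel-level fusion step amounts to projecting every LiDAR return through the jointly calibrated transform into the camera image. A minimal sketch, assuming a pinhole camera with intrinsic matrix K and a LiDAR-to-camera rotation R and translation t obtained from the joint calibration; the names are placeholders and lens distortion is ignored:

```python
import numpy as np


def project_lidar_to_image(points_lidar, R, t, K):
    """Project LiDAR points (N x 3, metres) into pixel coordinates.

    R (3x3) and t (3,) map LiDAR coordinates into the camera frame and K (3x3)
    is the camera intrinsic matrix, all assumed to come from the joint
    calibration described above.
    """
    pts_cam = points_lidar @ R.T + t       # LiDAR frame -> camera frame
    in_front = pts_cam[:, 2] > 0.0         # keep points in front of the camera
    pts_cam = pts_cam[in_front]
    pix = (K @ pts_cam.T).T                # perspective projection
    pix = pix[:, :2] / pix[:, 2:3]         # normalise by depth to get (u, v)
    return pix, pts_cam[:, 2]              # pixel coordinates and depths


# Usage sketch (hypothetical data): overlay the projected returns on the image,
# which is the pixel-level fusion evaluated in the experiments.
# pix, depth = project_lidar_to_image(scan_points, R, t, K)
```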
