Journal of Computer Applications ›› 2017, Vol. 37 ›› Issue (7): 2062-2066.DOI: 10.11772/j.issn.1001-9081.2017.07.2062


Joint calibration method of camera and LiDAR based on trapezoidal checkerboard

JIA Ziyong, REN Guoquan, LI Dongwei, CHENG Ziyang   

  1. Department of Vehicles and Electrical Engineering, Ordnance Engineering College, Shijiazhuang Hebei 050003, China
  • Received: 2016-12-21 Revised: 2017-03-09 Online: 2017-07-10 Published: 2017-07-18

  • Corresponding author: REN Guoquan
  • About the authors: JIA Ziyong (1992-), male, born in Fuyang, Anhui, M.S. candidate; research interest: vehicle detection based on sensor information fusion. REN Guoquan (1974-), male, born in Taihe, Anhui, professor, Ph.D.; research interests: unmanned ground platform technology, oil analysis. LI Dongwei (1979-), male, born in Shijiazhuang, Hebei, lecturer, Ph.D.; research interests: unmanned ground platform technology, environment perception. CHENG Ziyang (1995-), male, born in Zhumadian, Henan, M.S. candidate; research interest: autonomous driving control technology.

Abstract: To address the problem of fusing Light Detection And Ranging (LiDAR) data with camera images when an Unmanned Ground Vehicle (UGV) detects and follows a target vehicle, a joint calibration method for the LiDAR and the camera based on a trapezoidal checkerboard was proposed. Firstly, the pitch angle and the mounting height of the LiDAR were obtained from the LiDAR scan line on the trapezoidal calibration board. Secondly, the external parameters of the camera relative to the vehicle body were calibrated using the black-and-white checkerboard on the trapezoidal board. Then, the two sensors were jointly calibrated through the correspondence between LiDAR data points and image pixel coordinates. Finally, by integrating the LiDAR and camera calibration results, pixel-level fusion of the LiDAR data and the camera image was carried out. The entire calibration of the two sensors requires only a single acquisition of an image and a LiDAR scan, with the trapezoidal calibration board placed once in front of the vehicle body. The experimental results show that the proposed method achieves high calibration accuracy, with an average position deviation of 3.5691 pixels (13 μm), and fuses the LiDAR data and the visual image well; it effectively completes the spatial alignment of the LiDAR and the camera and is strongly robust to moving objects.
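The final step described above, pixel-level fusion, amounts to projecting each LiDAR point into the image once the joint calibration is known. The following is a minimal sketch of that projection, assuming a calibrated rotation R and translation t from the LiDAR frame to the camera frame and a pinhole intrinsic matrix K; the function name and all numeric values are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def project_lidar_to_image(points_lidar, R, t, K):
    """Map an Nx3 array of LiDAR points (metres) to Nx2 pixel coordinates."""
    cam = points_lidar @ R.T + t      # LiDAR frame -> camera frame
    uvw = cam @ K.T                   # apply pinhole intrinsics
    return uvw[:, :2] / uvw[:, 2:3]   # perspective divide

# Example with identity extrinsics and a simple intrinsic matrix.
K = np.array([[800.0,   0.0, 320.0],
              [  0.0, 800.0, 240.0],
              [  0.0,   0.0,   1.0]])
R = np.eye(3)
t = np.zeros(3)
pts = np.array([[0.0, 0.0, 5.0]])  # a point 5 m ahead on the optical axis
print(project_lidar_to_image(pts, R, t, K))  # -> [[320. 240.]]
```

With identity extrinsics, a point on the optical axis lands on the principal point (320, 240), which is a quick sanity check that the projection chain is wired correctly.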

Key words: Light Detection And Ranging (LiDAR), camera, joint calibration, Unmanned Ground Vehicle (UGV), information fusion


