CLC number: TP242   Document code: A
Article ID: 1001-3695(2023)08-047-2537-06
doi:10.19734/j.issn.1001-3695.2022.12.0793
Extrinsic parameter calibration of LiDAR and camera based on edge-correlated point cloud
Feng Xin, Li Jie, Yu Chongsheng, Qian Jiye, He Ying
(1.School of Computer & Engineering,Chongqing University of Technology,Chongqing 400054,China;2.Chinese Academy of Sciences Chongqing Green Intelligent Technology Research Institute,Chongqing 400714,China;3.Chongqing Zhizhi Technology Co.,Ltd.,Chongqing 400025,China;4.State Grid Chongqing Electric Power Research Institute,Chongqing 400014,China)
Abstract: LiDAR point clouds and camera images are fused in many application fields, and accurate extrinsic calibration is a prerequisite for fusing the two kinds of information. Point cloud feature extraction is a key step of extrinsic calibration, but the low resolution and low quality of point clouds degrade the accuracy of the calibration result. To address these problems, this paper proposes a LiDAR-camera extrinsic calibration method based on the edge-correlated point cloud. First, the method extracts the edge-correlated point cloud of the calibration board using dual echoes. Then, an optimization method extracts from the edge-correlated point cloud the board corner points that are compatible with the actual board size. Finally, the corner points in the point cloud are matched with the corner points in the image, and the extrinsic parameters between the LiDAR and the camera are solved by the perspective-n-point method. Experimental results show that the reprojection error of the method is 1.602 px, lower than that of comparable methods, which verifies its effectiveness and accuracy.
Key words: feature extraction; extrinsic parameter calibration; dual echo
0 Introduction
LiDAR and cameras are widely used in autonomous driving, high-precision mapping, 3D reconstruction, power line inspection, object detection, and other fields [1~7]. LiDAR acquires high-precision 3D information, is largely unaffected by illumination and weather, and resists interference well, but it has low resolution and provides no color information [8]. Cameras provide high-resolution color information [9], but they resist interference poorly and are easily affected by illumination. The two sensors are therefore highly complementary, and fusing their data yields much richer information [10,11].
The prerequisite for data fusion is to solve, through extrinsic calibration, for the rotation matrix and translation vector of the LiDAR coordinate frame relative to the camera coordinate frame. Together, the rotation matrix and translation vector are called the extrinsic parameters, or extrinsics for short.
Extrinsic calibration methods for LiDAR and cameras generally fall into two categories: targetless methods and target-based methods. Targetless methods require no dedicated calibration object and instead rely on features such as planes and edges that are common in natural scenes. Pandey et al. [12] built an objective function that maximizes the mutual information between the reflectance intensity of 3D points in the LiDAR frame and the gray values of their projections in the image, and solved for the extrinsics with the Barzilai-Borwein algorithm. Gong et al. [13] calibrated with trihedral features in natural scenes, but the method performs poorly on the sparse point clouds collected by low-beam LiDARs. Zhu et al. [14] extracted edges from the intensity and depth images of the point cloud, matched them against the edges of the grayscale image, and solved for the optimal extrinsics with ICP (iterative closest point). Yuan et al. [15] analyzed the effect of laser beam divergence on edge feature extraction, proposed a method for obtaining continuous point cloud edges, and solved for the extrinsics from constraint equations built on these continuous edge features. Because natural scenes differ greatly and the quality of plane and edge features is hard to control, the accuracy of targetless calibration is difficult to guarantee.
Target-based methods extract the features of dedicated calibration objects, usually precisely manufactured objects with distinct geometric features such as checkerboards, triangular boards, and spheres. Qin et al. [16] extracted the edges of a rectangular calibration board from the point cloud with the Hough transform, computed the edge intersections to obtain the board corners, matched them with the corners in the image, and solved for the extrinsics by optimization. Zhou et al. [17] built constraints from plane and edge features so that a single board pose suffices to solve for the extrinsics. Huang et al. [18] proposed an optimization method that estimates the board corners from the point cloud of a rectangular board and solves for the extrinsics from those corners. Cui et al. [19] proposed an intensity-based corner extraction method for checkerboards and solved for the extrinsics by optimizing over the corner features. Xu et al. [20] fitted the plane and edge features of a triangular board in the point cloud with RANSAC (random sample consensus) to obtain its corner features, then optimized the extrinsics over the corners. Lee et al. [21] established correspondences from sphere centers. Beltrán et al. [22] built a custom board with four circular holes, obtained the hole point clouds by extracting depth-discontinuity points, fitted the four hole centers, and matched them with the centers in the image to solve for the extrinsics. Because calibration boards are precisely manufactured and their features are easy to extract, target-based methods are more accurate than targetless ones.
As the methods above show, point cloud feature extraction is a key step of extrinsic calibration, and the accuracy of the extracted features directly affects the calibration result. Edge features in particular are used frequently. However, the laser divergence angle causes local dilation of edge point clouds, so the quality of edge features extracted from the point cloud cannot be guaranteed, which in turn degrades calibration accuracy. This paper studies this problem and proposes a LiDAR-camera extrinsic calibration method based on the edge-correlated point cloud: dual echoes are used to extract the edge-correlated point cloud, accurate corner features are extracted from it, and the extrinsics are solved by optimization over these corner features, improving calibration accuracy.
1 Dual echoes
A LiDAR scans its surroundings by emitting laser beams, receives the laser returned from objects, and generates point cloud data by measuring the round-trip time of each pulse, as shown in Fig. 1. The returned laser is commonly called an echo, or return.
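The ranging relation behind Fig. 1 is the standard time-of-flight equation (stated here for completeness, not quoted from the paper):

$$r = \frac{c\,\Delta t}{2}$$

where $c$ is the speed of light and $\Delta t$ is the measured round-trip time; for example, $\Delta t \approx 66.7\,\text{ns}$ gives $r \approx 10\,\text{m}$.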
Normally, one laser beam produces only one echo, but at a foreground edge a beam produces two. As shown in Fig. 2(a), the LiDAR emits beam A; because the beam has a divergence angle, its footprint on an object is an area rather than a point. At the foreground edge, beam A is therefore split in two, with one part landing on the foreground edge and the other on the background, producing dual echoes.
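For a sense of scale (the numbers are illustrative assumptions, not the paper's): a beam with divergence angle $\theta$ has a footprint of diameter roughly $d \approx r\theta$ at range $r$, so at $\theta = 3\,\text{mrad}$ and $r = 10\,\text{m}$ the footprint is about $3\,\text{cm}$ wide, easily enough to straddle the edge of a calibration board.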
1.1 Local edge dilation
How dual echoes are handled is determined by the LiDAR's return mode. Single-return mode is the default mode of most LiDARs; in this mode, only the stronger of the two echoes is received [23], which causes the local edge dilation problem [15], for the following reason.
As shown in Fig. 2(a), beam B produces dual echoes while its centerline falls on the background. If the foreground reflects more strongly than the background, single-return mode keeps the echo from the foreground edge and produces a foreground edge point. Since the centerline does not actually lie on the foreground, this point is a dilation point. Because factors such as material and incidence angle make the relative reflectance of foreground and background unpredictable, single-return mode receives only some of the dilation points. The point cloud output in single-return mode therefore shows local dilation at foreground edges, which appear jagged, as shown in Fig. 2(b).
1.2 Overall edge dilation
Local edge dilation arises because single-return mode keeps only part of the dilation points when dual echoes occur, and the extent of this dilation is highly random, so the accuracy of the feature extraction step in extrinsic calibration cannot be guaranteed. To counter the loss of edge point cloud quality caused by local dilation, this paper receives echoes in the LiDAR's dual-return mode, which turns local edge dilation into overall edge dilation, makes the edges even, and preserves edge point cloud quality, as shown in Fig. 2(c). The specific reason is as follows.
As shown in Fig. 3, when the point clouds of the same object captured in the two modes are overlaid, the dual-return point cloud extends outward by one ring. The reason is that, unlike single-return mode, dual-return mode does not select between dual echoes by intensity but receives both of them [23]. The LiDAR thus receives every echo from the foreground, including all echoes from the dilated parts of the foreground edge. The outward ring in Fig. 3 consists of the dilation points that dual-return mode receives but single-return mode misses; receiving them all turns local edge dilation into overall edge dilation and makes the edges even.
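A minimal simulation of the two return-selection rules described above (synthetic data; all names and values are assumptions) makes the contrast concrete:

```python
import numpy as np

# Synthetic dual-echo firings at a foreground edge: each firing returns one
# echo from the foreground (range 5 m) and one from the background (9 m),
# with random relative intensities (material, incidence angle, ...).
rng = np.random.default_rng(0)
n = 10
fg = np.column_stack([np.full(n, 5.0), rng.uniform(0.0, 1.0, n)])  # (range, intensity)
bg = np.column_stack([np.full(n, 9.0), rng.uniform(0.0, 1.0, n)])

# Single-return mode: keep only the stronger echo of each firing.
single = np.where((fg[:, 1] > bg[:, 1])[:, None], fg, bg)
# Dual-return mode: keep both echoes of every firing.
dual = np.concatenate([fg, bg])

print("single-return ranges:", single[:, 0])         # random mix of 5 and 9 -> jagged edge
print("dual-return ranges:", np.unique(dual[:, 0]))  # always [5. 9.] -> complete, even edge
```

In single-return mode the edge is a random mixture of foreground and background points, whereas in dual-return mode every firing contributes its foreground echo, which is exactly the uniform outward ring of Fig. 3.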
1.3 Edge-correlated point cloud extraction based on dual echoes
When dual echoes occur, the echo from the foreground edge can produce two kinds of edge points: normal points that lie on the foreground and dilation points that do not, as shown in Fig. 4. Since both kinds are products of dual echoes and are both associated with the edge, this paper defines them as edge-correlated points, and their set as the edge-correlated point cloud.
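The extraction follows from this definition. A minimal sketch, assuming the dual-return data arrives as per-firing pairs of returns (as the VLP-16 dual-return format provides [23]); the pairing layout, function name, and threshold are assumptions rather than the paper's exact algorithm:

```python
import numpy as np

def edge_correlated_points(first_ret: np.ndarray,
                           last_ret: np.ndarray,
                           min_gap: float = 0.3) -> np.ndarray:
    """Extract edge-correlated points from dual-return data.

    first_ret, last_ret: (N, 3) xyz of the two returns of the same N firings.
    A firing whose two returns are clearly separated in range straddled a
    foreground edge; its nearer return is the foreground-edge echo, i.e. an
    edge-correlated point (a normal point or a dilation point).
    """
    r_first = np.linalg.norm(first_ret, axis=1)
    r_last = np.linalg.norm(last_ret, axis=1)
    is_dual = np.abs(r_first - r_last) > min_gap          # genuine dual-echo firings
    near = np.where((r_first < r_last)[:, None], first_ret, last_ret)
    return near[is_dual]
```

The range-gap threshold min_gap is hypothetical and would be set above the sensor's ranging noise, so that single-echo firings (where both reported returns coincide) are not flagged.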
2 Extrinsic calibration method based on the edge-correlated point cloud
2.1 Principle of extrinsic calibration
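For background, the standard relation underlying LiDAR-camera extrinsic calibration (a textbook model, not quoted from the paper) maps a LiDAR point $P_L$ into the camera frame and projects it through the pinhole model:

$$s\begin{bmatrix}u\\v\\1\end{bmatrix} = K\,(R\,P_L + t)$$

where $K$ is the camera intrinsic matrix, $(u,v)$ the pixel coordinates, $s$ a scale factor, and $(R, t)$ the extrinsic rotation and translation to be calibrated.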
2.2 Extracting the calibration board's edge-correlated point cloud
2.3 Extracting corner features
2.3.1 Corner extraction based on the edge-correlated point cloud
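The abstract describes this step as an optimization that keeps the recovered corners compatible with the actual board size. A minimal sketch of one such optimization, assuming the board's edge-correlated points have already been projected into 2D coordinates on the fitted board plane; the rectangle-fitting cost, the use of scipy.optimize, and all names are assumptions, not the paper's exact algorithm:

```python
import numpy as np
from scipy.optimize import minimize

def fit_board_corners(edge_pts_2d: np.ndarray, w: float, h: float) -> np.ndarray:
    """Fit a w x h rectangle (the known board size) to 2D edge points on the
    board plane and return its 4 corners. Pose is (cx, cy, theta)."""
    half = np.array([[w, h], [w, -h], [-w, -h], [-w, h]]) / 2.0

    def corners(pose):
        c, s = np.cos(pose[2]), np.sin(pose[2])
        R = np.array([[c, -s], [s, c]])
        return half @ R.T + pose[:2]

    def cost(pose):
        # Sum of squared distances from each edge point to the nearest side
        # of the candidate rectangle.
        cs = corners(pose)
        d = np.full(len(edge_pts_2d), np.inf)
        for a, b in zip(cs, np.roll(cs, -1, axis=0)):
            ab = b - a
            t = np.clip((edge_pts_2d - a) @ ab / (ab @ ab), 0.0, 1.0)
            proj = a + t[:, None] * ab
            d = np.minimum(d, np.linalg.norm(edge_pts_2d - proj, axis=1))
        return np.sum(d ** 2)

    init = np.array([*edge_pts_2d.mean(axis=0), 0.0])
    pose = minimize(cost, init, method="Nelder-Mead").x
    return corners(pose)
```

Because the rectangle's width and height are fixed to the physical board dimensions, the recovered corners are by construction compatible with the actual board size.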
2.3.2 Extracting 2D corners from the image
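On the image side, the reference list cites Canny [26], LSD [28], and Harris [29]; a plausible minimal sketch of 2D corner extraction uses OpenCV's Harris-based detector with sub-pixel refinement (the file name and all parameter values are assumptions):

```python
import cv2

img = cv2.imread("board.png")                       # placeholder image path
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

# Harris-based detection [29] of the four board corners.
corners = cv2.goodFeaturesToTrack(gray, maxCorners=4, qualityLevel=0.01,
                                  minDistance=50, useHarrisDetector=True, k=0.04)

# Sub-pixel refinement, standard practice for calibration accuracy.
criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 30, 0.01)
corners = cv2.cornerSubPix(gray, corners, (5, 5), (-1, -1), criteria)
```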
2.4 Solving the extrinsic parameters
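The abstract states that the extrinsics are solved from the matched 3D-2D corner pairs by the perspective-n-point method [32]. As a minimal sketch, OpenCV's generic PnP solver stands in for the paper's solver (the solver choice here is an assumption):

```python
import cv2
import numpy as np

def solve_extrinsics(pts_lidar: np.ndarray, pts_img: np.ndarray,
                     K: np.ndarray, dist: np.ndarray):
    """PnP from matched 3D board corners (LiDAR frame) and 2D image corners.

    Returns R (3x3) and t (3x1) mapping LiDAR coordinates into the camera
    frame. K and dist come from a prior camera intrinsic calibration.
    """
    ok, rvec, tvec = cv2.solvePnP(
        pts_lidar.astype(np.float64), pts_img.astype(np.float64),
        K, dist, flags=cv2.SOLVEPNP_ITERATIVE)
    assert ok, "PnP failed"
    R, _ = cv2.Rodrigues(rvec)   # rotation vector -> rotation matrix
    return R, tvec
```

The reprojection error reported in the abstract (1.602 px) would then be the mean pixel distance between the projected 3D corners and their matched image corners.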
3 Experiments and analysis
4 Conclusion
This paper proposed a LiDAR-camera extrinsic calibration method that exploits dual-echo information: dual echoes are brought into the extrinsic calibration task and used to extract the edge-correlated point cloud, and a corner extraction method designed around the properties of this point cloud yields board corners compatible with the actual board size. The method requires no specially designed calibration board, its experimental accuracy surpasses that of comparable methods, and the point cloud projection and colorization results also demonstrate its accuracy. Room for improvement remains, for example incorporating the LiDAR's internal motion information and reflectance intensity into the calibration model.
References:
[1]Hu Dandan,Yu Peiran,Yue Fengfa.Multi-sensor mapping method for indoor degraded environment[J].Application Research of Computers,2021,38(6):1800-1808.(in Chinese)
[2]Yeong D J,Velasco-Hernandez G,Barry J,et al.Sensor and sensor fusion technology in autonomous vehicles:a review[J].Sensors,2021,21(6):2140.
[3]Qu Yuanhao,Yang Minghao,Zhang Jiaqing,et al.An outline of multi-sensor fusion methods for mobile agents indoor navigation[J].Sensors,2021,21(5):1605.
[4]Xu Xiaobin,Zhang Lei,Yang Jian,et al.A review of multi-sensor fusion SLAM systems based on 3D LiDAR[J].Remote Sensing,2022,14(12):2835.
[5]Maiese A,Manetti A C,Ciallella C,et al.The introduction of a new diagnostic tool in forensic pathology:LiDAR sensor for 3D autopsy documentation[J].Biosensors,2022,12(2):132.
[6]Paneque J,Valseca V,Martinez-De Dios J R,et al.Autonomous reactive LiDAR-based mapping for powerline inspection[C]//Proc of International Conference on Unmanned Aircraft Systems.Piscataway,NJ:IEEE Press,2022:962-971.
[7]Bai Xuyang,Hu Zeyu,Zhu Xinge,et al.TransFusion:robust LiDAR-camera fusion for 3D object detection with transformers[C]//Proc of IEEE/CVF Conference on Computer Vision and Pattern Recognition.Piscataway,NJ:IEEE Press,2022:1080-1089.
[8]Woods J O,Christian J A.LiDAR-based relative navigation with respect to non-cooperative objects[J].Acta Astronautica,2016,126:298-311.
[9]Sharma S.Comparative assessment of techniques for initial pose estimation using monocular vision[J].Acta Astronautica,2016,123:435-445.
[10]Zhong Huazan,Wang Hao,Wu Zhengrong,et al.A survey of LiDAR and camera fusion enhancement[J].Procedia Computer Science,2021,183:579-588.
[11]Wei Shuangfeng,Tang Nian,Huang Shuai,et al.Research on registration of terrain laser scanning point cloud and image matching point cloud based on building footprint[J].Application Research of Computers,2021,38(8):2515-2520.(in Chinese)
[12]Pandey G,McBride J R,Savarese S,et al.Automatic targetless extrinsic calibration of a 3D LiDAR and camera by maximizing mutual information[C]//Proc of the 26th AAAI Conference on Artificial Intelligence.Palo Alto,CA:AAAI Press,2012:2053-2059.
[13]Gong X,Lin Y,Liu J.3D LiDAR-camera extrinsic calibration using an arbitrary trihedron[J].Sensors,2013,13(2):1902-1918.
[14]Zhu Yuewen,Zheng Chunran,Yuan Chongjian,et al.CamVox:a low-cost and accurate LiDAR-assisted visual SLAM system[C]//Proc of IEEE International Conference on Robotics and Automation.Piscataway,NJ:IEEE Press,2021:5049-5055.
[15]Yuan Chongjian,Liu Xiyuan,Hong Xiaoping,et al.Pixel-level extrinsic self calibration of high resolution LiDAR and camera in targetless environments[J].IEEE Robotics and Automation Letters,2021,6(4):7517-7524.
[16]Qin Xingsheng,Li Xiaohuan,Tang Xin,et al.Extrinsic calibration method of LiDAR and camera based on key points of calibration board[J].Laser & Optoelectronics Progress,2022,59(4):400-407.(in Chinese)
[17]Zhou Lipu,Li Zimo,Kaess M.Automatic extrinsic calibration of a camera and a 3D LiDAR using line and plane correspondences[C]//Proc of IEEE/RSJ International Conference on Intelligent Robots and Systems.Piscataway,NJ:IEEE Press,2018:5562-5569.
[18]Huang J K,Grizzle J W.Improvements to target-based 3D LiDAR to camera calibration[J].IEEE Access,2020,8:134101-134110.
[19]Cui Jiahe,Niu Jianwei,Ouyang Zhenchao,et al.ACSC:automatic calibration for non-repetitive scanning solid-state LiDAR and camera systems[EB/OL].(2020-11-17)[2021-02-01].https://arxiv.org/abs/2011.08516.
[20]Xu Xiaobin,Zhang Lei,Yang Jian,et al.LiDAR-camera calibration method based on ranging statistical characteristics and improved RANSAC algorithm[J].Robotics and Autonomous Systems,2021,141:103776.
[21]Lee G M,Lee J H,Park S Y.Calibration of VLP-16 LiDAR and multi-view cameras using a ball for 360 degree 3D color map acquisition[C]//Proc of IEEE International Conference on Multisensor Fusion and Integration for Intelligent Systems.Piscataway,NJ:IEEE Press,2017:64-69.
[22]Beltrán J,Guindel C,De La Escalera A,et al.Automatic extrinsic calibration method for LiDAR and camera sensor setups[J].IEEE Trans on Intelligent Transportation Systems,2022,23(10):17677-17689.
[23]Velodyne LiDAR Inc.VLP-16 user manual[EB/OL].(2019).https://velodynelidar.com/wp-content/uploads/2019/12/63-9243-Rev-E-VLP-16-User-Manual.pdf.
[24]Kang J,Doh N L.Full-DOF calibration of a rotating 2D LiDAR with a simple plane measurement[J].IEEE Trans on Robotics,2016,32(5):1245-1263.
[25]Chen Fangyuan,Yu Chongsheng,Li Meng,et al.A self-rotating LiDAR and its rotation axis calibration method:CN112859043A[P].2021-03-24.(in Chinese)
[26]Canny J.A computational approach to edge detection[J].IEEE Trans on Pattern Analysis and Machine Intelligence,1986,8(6):679-698.
[27]Lim J S.Two-dimensional signal and image processing[M].Englewood Cliffs,NJ:Prentice Hall,1990.
[28]Von Gioi R G,Jakubowicz J,Morel J M,et al.LSD:a line segment detector[J].Image Processing on Line,2012,2:35-55.
[29]Harris C,Stephens M.A combined corner and edge detector[C]//Proc of the 4th Alvey Vision Conference.1988:147-151.
[30]Rosten E,Drummond T.Machine learning for high-speed corner detection[C]//Proc of European Conference on Computer Vision.Berlin:Springer,2006:430-443.
[31]Rosten E,Porter R,Drummond T.Faster and better:a machine learning approach to corner detection[J].IEEE Trans on Pattern Analysis and Machine Intelligence,2010,32(1):105-119.
[32]He Ying,Ma Rong,Li Suilao,et al.Variable projection algorithm for perspective-n-point problem using Wahba's problem[J].Acta Optica Sinica,2018,38(11):250-256.(in Chinese)