CLC number: TP391.41; U463.6    Document code: A
Research on Lane Detection Based on RC-DBSCAN
DENG Yuanwang,PU Hongtao,HUA Xinbin,SUN Biao
(College of Mechanical and Vehicle Engineering,Hunan University,Changsha 410082,China)
Abstract: In view of the poor robustness and real-time performance of lane detection under complex working conditions, this paper extracts lane feature points by fusing the results of edge detection and multi-color-space threshold segmentation. Combining the positional characteristics of lane lines in the bird's-eye view, a feature-point clustering algorithm, RC-DBSCAN (Reclustering based on Density-Based Spatial Clustering of Applications with Noise), is proposed. Lane line type and color are then identified based on whether cluster points are reclustered and on the average gray value of cluster points sampled in Lab space. Lane lines are fitted by the least-squares method, and the fitted lines are tracked by a Kalman filter based on a trusted region. Finally, experiments were carried out on video collected on real roads and on public data sets. The results show that the proposed algorithm is more robust than traditional clustering algorithms under complex road conditions and meets real-time requirements; on structured roads, lane type recognition also achieves high accuracy.
Key words:machine vision;lane detection;feature fusion;density clustering;lane type recognition;Kalman filter
Lane detection is one of the most important functions of the perception system in driver-assistance applications; improving its accuracy helps ensure the safe operation of intelligent vehicles and the safety of drivers [1].
At present, common lane detection algorithms fall into three categories: feature-based, model-based, and deep-learning-based. Feature-based methods rely mainly on edge, texture, and color features [2-3]. Wang et al. proposed an edge detection algorithm based on lane width and edge-point statistics, which effectively suppresses noise [4]. Chen et al. performed edge detection with the Sobel operator and converted images to HSV space to extract color-based lane features [5]. Reference [6] fitted lane lines by combining LSD line detection in the near field with hyperbolic model matching in the far field, achieving good results. Wang et al. used the DBSCAN density clustering algorithm with dynamically determined neighborhood parameters to extract lane lines and fitted them with a parabolic model [7]. Ajaykumar et al. clustered line segments obtained from the probabilistic Hough transform with K-means and used the silhouette coefficient to determine the optimal number of clusters; owing to the limitations of K-means, however, the clustering quality is easily degraded [8]. He et al. proposed a lane detection algorithm based on a point-cloud convolutional neural network, which greatly improves accuracy under complex conditions such as illumination changes [9]. Neven et al. cast lane detection as an instance segmentation problem and used the LaneNet network to obtain a pixel-level segmentation of each lane line, improving detection accuracy [10].
In lane tracking, common algorithms can be divided into tracking based on model parameters and tracking based on a region of interest. Lee et al. used the lane position from the previous frame to dynamically determine a region of interest and tracked the lane within it, achieving good real-time performance [11]. Wu et al. tracked the endpoint coordinates of straight lines with a Kalman filter, thereby tracking the lane lines [12].
To address the difficulty of simultaneously achieving robustness, accuracy, and real-time performance in the work above, and to extract lane information more accurately and comprehensively while meeting real-time requirements, this paper proposes a lane detection, tracking, and type recognition algorithm based on RC-DBSCAN.
1  Algorithm flow
In the image preprocessing stage, a lane region of interest (ROI) is extracted via an inverse perspective transformation based on point correspondences. The Sobel edge detection result is fused with the binarization results obtained by the maximum between-class variance method (OTSU) in the HSL and Lab color spaces, yielding the lane edge feature points. The RC-DBSCAN algorithm then clusters the feature points. Road-surface interference is excluded using the peak positions of the image histogram and the cluster centroids, and the lane lines are fitted by the least-squares method. At the same time, the lane type is determined from whether a cluster undergoes reclustering and from the color values of its points in the Lab color space. Finally, the lane lines are tracked with a Kalman filter, and a trusted region is defined to validate and refine the tracking result. The overall algorithm flow is shown in Fig. 1.
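As a rough illustration of the fusion step above, the sketch below combines a Sobel edge mask and a color-threshold mask by logical OR and collects the surviving pixels as feature points for the clustering stage. This is a minimal NumPy sketch, not the paper's implementation; the function name and the (x, y) output layout are assumptions.

```python
import numpy as np

def fuse_feature_masks(edge_mask, color_mask):
    """Fuse binary masks from edge detection and color thresholding.

    A pixel is kept as a lane feature point if either detector fires;
    the union compensates for each detector's blind spots (edges fade
    on worn paint, color thresholds fail in shadow).
    """
    fused = np.logical_or(edge_mask > 0, color_mask > 0)
    # Feature points as (x, y) pixel coordinates for clustering.
    ys, xs = np.nonzero(fused)
    return fused.astype(np.uint8), np.column_stack([xs, ys])
```

The OR-fusion is deliberately permissive: spurious points admitted here are expected to be rejected later as DBSCAN noise rather than filtered at this stage.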
2  Image preprocessing
2.1  Initial image processing
The image captured by the camera can be divided into three regions: the sky background, the lane region, and the background outside the lane. To suppress background interference, the image is converted to grayscale from the values of the R, G, and B channels; the gray value Gray is computed as
Gray = 0.299×R + 0.587×G + 0.114×B    (1)
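Applied to a whole image, Eq. (1) is a single weighted sum over the color channels; a minimal NumPy sketch, assuming an RGB channel order:

```python
import numpy as np

def to_gray(img_rgb):
    """Weighted grayscale per Eq. (1): Gray = 0.299 R + 0.587 G + 0.114 B.

    img_rgb: H x W x 3 array with channels in R, G, B order.
    Returns an H x W float array of gray values.
    """
    weights = np.array([0.299, 0.587, 0.114])
    return img_rgb.astype(np.float64) @ weights
```

Note that OpenCV loads images in BGR order, so the weight vector would need to be reversed there.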
Based on the extent of the ego lane, a region of interest is defined; this paper takes the area around the ego-lane markings in roughly the lower 2/5 of the image as the ROI. An inverse perspective transformation based on point correspondences [13] is then applied to obtain a bird's-eye view of the lane. Fig. 2(a) shows an original lane image captured by the camera, and Fig. 2(b) the inverse perspective transformation of the ROI.
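The point-correspondence inverse perspective transform amounts to solving a 3×3 homography from four road-image/bird's-eye point pairs and warping the image with it (in OpenCV, cv2.getPerspectiveTransform followed by cv2.warpPerspective). The sketch below spells out the direct-linear-transform solve with NumPy; the sample coordinates are illustrative, not the paper's calibration.

```python
import numpy as np

def homography_from_points(src, dst):
    """Solve the 3x3 homography H mapping src -> dst from four point
    pairs (direct linear transform, with H[2,2] fixed to 1).
    src, dst: sequences of four (x, y) pairs.
    """
    A, b = [], []
    for (x, y), (u, v) in zip(src, dst):
        # u = (h1 x + h2 y + h3) / (h7 x + h8 y + 1), likewise for v.
        A.append([x, y, 1, 0, 0, 0, -u * x, -u * y])
        A.append([0, 0, 0, x, y, 1, -v * x, -v * y])
        b.extend([u, v])
    h = np.linalg.solve(np.array(A, float), np.array(b, float))
    return np.append(h, 1.0).reshape(3, 3)

def warp_point(H, x, y):
    """Apply H to one pixel coordinate (homogeneous divide)."""
    p = H @ np.array([x, y, 1.0])
    return p[0] / p[2], p[1] / p[2]
```

Warping the ROI trapezoid onto a rectangle in this way makes the lane lines roughly parallel and vertical, which is what the later clustering stage relies on.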
2.2  Lane edge extraction based on the Sobel operator
The Sobel operator convolves the image with templates in the x (horizontal) and y (vertical) directions; by processing the neighborhood of each traversed pixel, edge features are extracted, as shown in Fig. 3.
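A minimal sketch of this step: convolve with the x and y Sobel templates, take the gradient magnitude, and threshold. The threshold value here is illustrative, not the paper's parameter, and a real implementation would use cv2.Sobel rather than this explicit loop.

```python
import numpy as np

def sobel_edges(gray, thresh=50.0):
    """Binary edge mask from Sobel gradient magnitude.

    gray: H x W grayscale array. Border pixels are left as non-edges.
    """
    kx = np.array([[-1, 0, 1],
                   [-2, 0, 2],
                   [-1, 0, 1]], dtype=float)  # horizontal template
    ky = kx.T                                  # vertical template
    g = gray.astype(float)
    h, w = g.shape
    mag = np.zeros_like(g)
    for i in range(1, h - 1):
        for j in range(1, w - 1):
            win = g[i - 1:i + 2, j - 1:j + 2]
            gx = np.sum(win * kx)
            gy = np.sum(win * ky)
            mag[i, j] = np.hypot(gx, gy)
    return (mag > thresh).astype(np.uint8)
```

In the bird's-eye view the lane markings are near-vertical, so the x-direction response dominates; the magnitude is kept here so curved segments are not lost.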
2.3  Feature extraction and fusion based on the HSL and Lab color spaces
3  Lane extraction based on RC-DBSCAN
3.1  The RC-DBSCAN algorithm
3.2  Detection comparison between RC-DBSCAN and DBSCAN
3.3  Extraction and fitting of lane clusters
4  Lane type recognition on structured roads
5  Lane tracking based on Kalman filtering
6  Experiments and analysis
6.1  Lane detection
6.2  Lane type recognition
7  Conclusion
Lane extraction under complex driving conditions suffers from limited robustness and real-time performance. Building on edge and color-space feature extraction, this paper proposed the RC-DBSCAN clustering algorithm and a lane type recognition algorithm, combined them with Kalman filtering, and conducted real-vehicle experiments under complex conditions including curves, road-surface interference, and tunnels. The results show that RC-DBSCAN is more robust and has better real-time performance than traditional clustering algorithms: lane detection accuracy reaches 95% under complex conditions, and a 1 920 × 1 080 frame takes about 79 ms on average to process. On structured roads, lane type recognition accuracy reaches 98%.
References
[1]  HU L,OU J,HUANG J,et al. A review of research on traffic conflicts based on intelligent vehicles[J]. IEEE Access,2020,8:24471—24483.
[2]  YOO J H,LEE S W,PARK S K,et al. A robust lane detection method based on vanishing point estimation using the relevance of line segments[J]. IEEE Transactions on Intelligent Transportation Systems,2017,18(12):3254—3266.
[3]  PIAO J,SHIN H. Robust hypothesis generation method using binary blob analysis for multi-lane detection[J]. IET Image Processing,2017,11(12):1210—1218.
[4]  WANG J E,CHEN W W,WANG M L,et al. A three-lane detection algorithm for vehicle assistant driving system[J]. Automotive Engineering,2014,36(11):1378—1385. (In Chinese)
[5]  CHEN C,WANG J,CHANG H,et al. Lane detection of multi-visual-features fusion based on DS theory[C]//Proceedings of the 30th Chinese Control Conference. Yantai,China:IEEE,2011:3047—3052.
[6]  QU F. Research on technologies of vision-based structured road and obstacle detection[D]. Changchun:Jilin University,2019:24—33. (In Chinese)
[7]  WANG J T,HONG W,GONG L. Lane detection algorithm based on density clustering and RANSAC[C]//2018 Chinese Control and Decision Conference (CCDC). Shenyang,China:IEEE,2018:919—924.
[8]  AJAYKUMAR R,GUPTA A,MERCHANT P S N. Automated lane detection by K-means clustering:a machine learning approach[J]. Electronic Imaging,2016,2016(14):1—6.
[9]  HE B,AI R,YAN Y,et al. Lane marking detection based on convolution neural network from point clouds[C]//2016 IEEE 19th International Conference on Intelligent Transportation Systems (ITSC). Rio de Janeiro,Brazil:IEEE,2016:2475—2480.
[10]  NEVEN D,DE BRABANDERE B,GEORGOULIS S,et al. Towards end-to-end lane detection:an instance segmentation approach[C]//2018 IEEE Intelligent Vehicles Symposium (IV). Changshu,China:IEEE,2018:286—291.
[11]  LEE C,MOON J H. Robust lane detection and tracking for real-time applications[J]. IEEE Transactions on Intelligent Transportation Systems,2018,19(12):4043—4048.
[12]  WU P C,CHANG C Y,LIN C H. Lane-mark extraction for automobiles under complex conditions[J]. Pattern Recognition,2014,47(8):2756—2767.
[13]  ALY M. Real time detection of lane markers in urban streets[C]//2008 IEEE Intelligent Vehicles Symposium. Eindhoven,Netherlands:IEEE,2008:7—12.
[14]  LIU D,WANG Y,CHEN T,et al. Application of color filter adjustment and K-means clustering method in lane detection for self-driving cars[C]//2019 Third IEEE International Conference on Robotic Computing (IRC). Naples,Italy:IEEE,2019:153—158.
[15]  QIN X Z,LU R Y,CHEN L M,et al. Research on multi-scene lane line detection and deviation warning method[J]. Mechanical Science and Technology for Aerospace Engineering,2020,39(9):1439—1449. (In Chinese)
[16]  KHAN K,REHMAN S U,AZIZ K,et al. DBSCAN:Past,present and future[C]//The Fifth International Conference on the Applications of Digital Information and Web Technologies (ICADIWT 2014). Bangalore,India:IEEE,2014:232—238.
[17]  LI W,QU F,WANG Y,et al. A robust lane detection method based on hyperbolic model[J]. Soft Computing,2019,23(19):9161—9174.
[18]  MUTHALAGU R,BOLIMERA A,KALAICHELVI V. Lane detection technique based on perspective transformation and histogram analysis for self-driving cars[J]. Computers & Electrical Engineering,2020,85:106653.
[19]  LIU Y. Research on key algorithms for vehicle lane departure warning system based on machine vision[D]. Changsha:Hunan University,2013:47—51. (In Chinese)
[20]  CHEN T,ZHANG H D,CHEN D,et al. Lane detection based on high priority pixels and tracking by Kalman filter[J]. Automotive Engineering,2016,38(2):200—205. (In Chinese)
[21]  LEE D K,SHIN J S,JUNG J H,et al. Real-time lane detection and tracking system using simple filter and Kalman filter[C]//2017 Ninth International Conference on Ubiquitous and Future Networks (ICUFN). Milan,Italy:IEEE,2017:275—277.
[22]  ZHU H Y,YANG F,GAO X Q,et al. A fast lane detection algorithm based on cascade Hough transform[J]. Computer Technology and Development,2021,31(1):88—93. (In Chinese)