Abstract
High-definition (HD) maps have become a vital cornerstone for navigating autonomous vehicles through complex traffic scenarios, making their construction crucial. Traditional methods that rely on expensive mapping vehicles equipped with high-end sensors are unsuitable for mass map construction because of their high cost. Hence, this paper proposes a new method for creating a high-definition road semantics map from multi-vehicle sensor data. The proposed method uses crowdsourced point-based visual SLAM to align and merge the local maps produced by multiple vehicles. It also allows users to replace the extraction step with a more sophisticated neural network, achieving more accurate detection than the traditional binarization method. The resulting map consists of road marking points suitable for autonomous vehicle navigation and path-planning tasks. Finally, the method is evaluated on the real-world KAIST urban dataset and the Shougang dataset, demonstrating the level of detail and accuracy of the proposed map, with a mapping error of 0.369 m under ideal conditions.
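The abstract contrasts learned road-marking extraction with a traditional binarization baseline. As a rough illustration of what such an intensity-based baseline looks like, the sketch below implements a simplified integral-image adaptive threshold in the spirit of Bradley and Roth, keeping pixels that are brighter than their local neighborhood by a relative margin. This is not the authors' exact pipeline; the function name, window size, and margin `t` are illustrative assumptions.

```python
import numpy as np

def adaptive_binarize(gray, window=15, t=0.15):
    """Mark a pixel as foreground (e.g. a bright road marking) if it
    exceeds the mean intensity of its local window by a margin of t.
    `gray` is a 2-D float array of intensities.
    """
    h, w = gray.shape

    # Integral image with a zero border, so any window sum is four lookups.
    ii = np.zeros((h + 1, w + 1))
    ii[1:, 1:] = np.cumsum(np.cumsum(gray, axis=0), axis=1)

    r = window // 2
    mask = np.zeros((h, w), dtype=bool)
    for y in range(h):
        y0, y1 = max(0, y - r), min(h, y + r + 1)
        for x in range(w):
            x0, x1 = max(0, x - r), min(w, x + r + 1)
            area = (y1 - y0) * (x1 - x0)
            # Window sum via the integral image (inclusion-exclusion).
            s = ii[y1, x1] - ii[y0, x1] - ii[y1, x0] + ii[y0, x0]
            # pixel > local_mean * (1 + t), written without division.
            mask[y, x] = gray[y, x] * area > s * (1.0 + t)
    return mask
```

For example, on a synthetic image of a bright vertical stripe over dark asphalt, the stripe pixels are flagged while uniform background is not, since uniform regions never exceed their own local mean by the margin. The paper's point is that a learned segmentation network can replace exactly this step to handle shadows, wear, and clutter that defeat such fixed intensity rules.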
![](http://media.springernature.com/m312/springer-static/image/art%3A10.1007%2Fs42154-021-00173-x/MediaObjects/42154_2021_173_Fig1_HTML.png)
![](http://media.springernature.com/m312/springer-static/image/art%3A10.1007%2Fs42154-021-00173-x/MediaObjects/42154_2021_173_Fig2_HTML.png)
![](http://media.springernature.com/m312/springer-static/image/art%3A10.1007%2Fs42154-021-00173-x/MediaObjects/42154_2021_173_Fig3_HTML.png)
![](http://media.springernature.com/m312/springer-static/image/art%3A10.1007%2Fs42154-021-00173-x/MediaObjects/42154_2021_173_Fig4_HTML.png)
![](http://media.springernature.com/m312/springer-static/image/art%3A10.1007%2Fs42154-021-00173-x/MediaObjects/42154_2021_173_Fig5_HTML.png)
![](http://media.springernature.com/m312/springer-static/image/art%3A10.1007%2Fs42154-021-00173-x/MediaObjects/42154_2021_173_Fig6_HTML.png)
![](http://media.springernature.com/m312/springer-static/image/art%3A10.1007%2Fs42154-021-00173-x/MediaObjects/42154_2021_173_Fig7_HTML.png)
![](http://media.springernature.com/m312/springer-static/image/art%3A10.1007%2Fs42154-021-00173-x/MediaObjects/42154_2021_173_Fig8_HTML.png)
![](http://media.springernature.com/m312/springer-static/image/art%3A10.1007%2Fs42154-021-00173-x/MediaObjects/42154_2021_173_Fig9_HTML.png)
![](http://media.springernature.com/m312/springer-static/image/art%3A10.1007%2Fs42154-021-00173-x/MediaObjects/42154_2021_173_Fig10_HTML.png)
![](http://media.springernature.com/m312/springer-static/image/art%3A10.1007%2Fs42154-021-00173-x/MediaObjects/42154_2021_173_Fig11_HTML.png)
![](http://media.springernature.com/m312/springer-static/image/art%3A10.1007%2Fs42154-021-00173-x/MediaObjects/42154_2021_173_Fig12_HTML.png)
![](http://media.springernature.com/m312/springer-static/image/art%3A10.1007%2Fs42154-021-00173-x/MediaObjects/42154_2021_173_Fig13_HTML.png)
Acknowledgements
This work was supported in part by the National Natural Science Foundation of China (U1864203, 61773234 and 52102464), in part by the China Postdoctoral Science Foundation (2019M660622), and in part by the International Science and Technology Cooperation Program of China (2019YFE0100200).
Ethics declarations
Conflict of interest
The authors declare that they have no conflict of interest.
Cite this article
Wijaya, B., Jiang, K., Yang, M. et al. Crowdsourced Road Semantics Map** Based on Pixel-Wise Confidence Level. Automot. Innov. 5, 43–56 (2022). https://doi.org/10.1007/s42154-021-00173-x