Crowdsourced Road Semantics Mapping Based on Pixel-Wise Confidence Level

Published in: Automotive Innovation

Abstract

High-definition maps have become a vital cornerstone of autonomous vehicle navigation in complex traffic scenarios, making their construction crucial. Traditional methods rely on expensive mapping vehicles equipped with high-end sensors, whose cost makes them unsuitable for mass map construction. Hence, this paper proposes a new method to create a high-definition road semantics map from multi-vehicle sensor data. The proposed method implements crowdsourced point-based visual SLAM to align and combine the local maps derived by multiple vehicles. It also allows users to adapt the extraction process with a more sophisticated neural network, achieving more accurate detection than the traditional binarization method. The resulting map consists of road-marking points suitable for autonomous vehicle navigation and path-planning tasks. Finally, the method is evaluated on the real-world KAIST urban dataset and the Shougang dataset to demonstrate the level of detail and accuracy of the proposed map, with a mapping error of 0.369 m under ideal conditions.
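The abstract does not spell out how the pixel-wise confidence levels are combined across vehicles. A common way to realize such a scheme is per-cell log-odds fusion of aligned confidence maps from multiple passes; the sketch below is a generic illustration under that assumption, not the paper's actual method. The function names `fuse_confidence_maps` and `marking_points` are hypothetical, and the maps are assumed independent and already registered into a common grid by the SLAM poses.

```python
import numpy as np

def fuse_confidence_maps(conf_maps, prior=0.5, eps=1e-6):
    """Fuse per-pass pixel-wise confidence maps via log-odds accumulation.

    conf_maps: list of 2-D arrays with values in (0, 1), one per vehicle
               pass, already aligned into a common grid (e.g. by SLAM poses).
    prior:     prior probability that a cell is a road marking.
    Returns the fused per-cell probability of a road marking.
    """
    log_odds = np.zeros_like(np.asarray(conf_maps[0], dtype=float))
    prior_lo = np.log(prior / (1.0 - prior))
    for conf in conf_maps:
        # Clip to avoid infinite log-odds at exactly 0 or 1.
        p = np.clip(np.asarray(conf, dtype=float), eps, 1.0 - eps)
        log_odds += np.log(p / (1.0 - p)) - prior_lo
    # Convert accumulated log-odds back to a probability.
    return 1.0 / (1.0 + np.exp(-log_odds))

def marking_points(fused, threshold=0.9):
    """Keep grid cells whose fused confidence exceeds the threshold."""
    return np.argwhere(fused > threshold)
```

Under this rule, two passes that each report 0.8 confidence for the same cell fuse to 16/17 (about 0.94), so agreement across vehicles strengthens a detection that a single noisy pass would leave uncertain.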

References

  1. Revilloud, M., Doucet, E.: Lane markings-based relocalization on highway. In: 2019 IEEE Intelligent Transportation Systems Conference, Auckland, 27–30 Oct 2019, pp. 4061–4067 (2019). https://doi.org/10.1109/ITSC.2019.8917254

  2. Seif, H.G., Hu, X.: Autonomous driving in the iCity: HD maps as a key challenge of the automotive industry. Engineering 2(2), 159–162 (2016). https://doi.org/10.1016/J.ENG.2016.02.010

  3. Yang, W., Ai, T.: A method for extracting road boundary information from crowdsourcing vehicle GPS trajectories. Sensors (Basel) 18(4), 1261 (2018). https://doi.org/10.3390/s18041261

  4. Liebner, M., Jain, D., Schauseil, J., Pannen, D., Hackeloer, A.: Crowdsourced HD map patches based on road model inference and graph-based SLAM. In: IEEE Intelligent Vehicles Symposium, Paris, 9–12 June 2019. https://doi.org/10.1109/IVS.2019.8813860

  5. Kim, C., Cho, S., Sunwoo, M., Jo, K.: Crowd-sourced mapping of new feature layer for high-definition map. Sensors (Switzerland) 18(12), 1–17 (2018). https://doi.org/10.3390/s18124172

  6. Fuentes-Pacheco, J., Ruiz-Ascencio, J., Rendón-Mancha, J.M.: Visual simultaneous localization and mapping: a survey. Artif. Intell. Rev. 43(1), 55–81 (2015). https://doi.org/10.1007/s10462-012-9365-8

  7. Civera, J., Galvez-Lopez, D., Riazuelo, L., Tardos, J.D., Montiel, J.M.M.: Towards semantic SLAM using a monocular camera. In: IEEE/RSJ International Conference on Intelligent Robots and Systems, San Francisco, 25–30 Sept 2011. https://doi.org/10.1109/iros.2011.6094648

  8. Jeong, J., Cho, Y., Kim, A.: Road-SLAM : road marking based SLAM with lane-level accuracy. IEEE Intelligent Vehicles Symposium, Los Angeles, 11–14 June 2017. https://doi.org/10.1109/IVS.2017.7995958

  9. Mur-Artal, R., Tardos, J.D.: ORB-SLAM2: an open-source SLAM system for monocular, stereo, and RGB-D cameras. IEEE Trans. Robot. 33(5), 1255–1262 (2017). https://doi.org/10.1109/TRO.2017.2705103

  10. Mur-Artal, R., Tardós, J.D.: ORB-SLAM: tracking and mapping recognizable features. In: Workshop on Multi-View Geometry in Robotics (MVIGRO), RSS 2014 (2014)

  11. Mur-Artal, R., Montiel, J.M., Tardos, J.D.: ORB-SLAM: a versatile and accurate monocular SLAM system. IEEE Trans. Robot. 31(5), 1147–1163 (2015). https://doi.org/10.1109/TRO.2015.2463671

  12. Schönberger, J.L., Frahm, J.M.: Structure-from-motion revisited. In: 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Las Vegas, 27–30 June 2016. https://doi.org/10.1109/CVPR.2016.445

  13. Qin, T., Li, P., Shen, S.: VINS-Mono: a robust and versatile monocular visual-inertial state estimator. IEEE Trans. Robot. 34(4), 1004–1020 (2017)

  14. Qin, T., Shen, S.: Online temporal calibration for monocular visual-inertial systems. In: IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS) Madrid, 1–5 Oct 2018. https://doi.org/10.1109/IROS.2018.8593603

  15. McDonald, J., Kaess, M., Cadena, C., Neira, J., Leonard, J.J.: Real-time 6-DOF multi-session visual SLAM over large-scale environments. Robot. Autonom. Syst. 61(10), 1144–1158 (2013). https://doi.org/10.1016/j.robot.2012.08.008

  16. Schneider, T., Dymczyk, M., Fehr, M., Egger, K., Lynen, S., Gilitschenski, I., Siegwart, R.: Maplab: an open framework for research in visual-inertial mapping and localization. IEEE Robot. Autom. Lett. 3(3), 1418–1425 (2018). https://doi.org/10.1109/LRA.2018.2800113

  17. Cieslewski, T., Choudhary, S., Scaramuzza, D.: Data-efficient decentralized visual SLAM. In: Proceedings—IEEE International Conference on Robotics and Automation Brisbane, 21–25 May 2018. https://doi.org/10.1109/ICRA.2018.8461155

  18. Herb, M., Weiherer, T., Navab, N., Tombari, F.: Crowd-sourced semantic edge mapping for autonomous vehicles. In: IEEE International Conference on Intelligent Robots and Systems, Montreal, 21–25 May 2019. https://doi.org/10.1109/IROS40897.2019.8968020

  19. Das, A., Ijsselmuiden, J., Dubbelman, G.: Pose-graph based crowdsourced mapping framework. In: 2020 IEEE 3rd Connected and Automated Vehicles Symposium, Victoria, 18 Nov–16 Dec 2020. https://doi.org/10.1109/CAVS51000.2020.9334622

  20. Stoven-Dubois, A., Dziri, A., Leroy, B., Chapuis, R.: Graph optimization methods for large-scale crowdsourced mapping. In: IEEE International Conference on Information Fusion, Rustenberg, 6–9 July 2020. https://doi.org/10.23919/fusion45008.2020.9190292

  21. Vineet, V., Miksik, O., Lidegaard, M., Nießner, M., Golodetz, S., Prisacariu, V.A., Kähler, O., Murray, D.W., Izadi, S., Pérez, P., Torr, P.H.: Incremental dense semantic stereo fusion for large-scale semantic scene reconstruction. In: IEEE International Conference on Robotics and Automation, Seattle, 26–30 May 2015. https://doi.org/10.1109/ICRA.2015.7138983

  22. Yao, L., Chen, Q., Qin, C., Wu, H., Zhang, S.: Automatic extraction of road markings from mobile laser-point cloud using intensity data. Int. Arch. Photogrammet. Remote Sens. Spatial Inf. Sci. ISPRS Arch. 42(3), 2113–2119 (2018). https://doi.org/10.5194/isprs-archives-XLII-3-2113-2018

  23. Soheilian, B., Paparoditis, N., Boldo, D.: 3D road marking reconstruction from street-level calibrated stereo pairs. ISPRS J. Photogrammet. Remote Sens. 65(4), 347–359 (2010). https://doi.org/10.1016/j.isprsjprs.2010.03.003

  24. Pannen, D., Liebner, M., Hempel, W., Burgard, W.: How to keep HD maps for automated driving up to date. In: IEEE International Conference on Robotics and Automation, Paris, 31 May–31 Aug 2020 (2020)

  25. Schreiber, M., Hellmund, A.M., Stiller, C.: Multi-drive feature association for automated map generation using low-cost sensor data. In: IEEE Intelligent Vehicles Symposium, Seoul, 28 June–1 July 2015. https://doi.org/10.1109/IVS.2015.7225837

  26. Massow, K., Kwella, B., Pfeifer, N., Häusler, F., Pontow, J., Radusch, I., Hipp, J., Dölitzscher, F., Haueis, M.: Deriving HD maps for highly automated driving from vehicular probe data. In: IEEE Conference on Intelligent Transportation Systems, Rio de Janeiro, 1–4 Nov 2016. https://doi.org/10.1109/ITSC.2016.7795794

  27. Dabeer, O., Ding, W., Gowaiker, R., Grzechnik, S.K., Lakshman, M.J., Lee, S., Reitmayr, G., Sharma, A., Somasundaram, K., Sukhavasi, R.T., Wu, X.: An end-to-end system for crowdsourced 3D maps for autonomous vehicles: the mapping component. In: IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Vancouver, 24–28 Sept 2017. https://doi.org/10.1109/IROS.2017.8202218

  28. Van Gansbeke, W., De Brabandere, B., Neven, D., Proesmans, M., Van Gool, L.: End-to-end lane detection through differentiable least-squares fitting. In: Proceedings—2019 International Conference on Computer Vision Workshop, Seoul, 27–28 Oct 2019. https://doi.org/10.1109/ICCVW.2019.00119

  29. Bradley, D., Roth, G.: Adaptive thresholding using the integral image. J. Graph. Tools 12(2), 13–21 (2007). https://doi.org/10.1080/2151237x.2007.10129236

  30. Wu, C.: Towards linear-time incremental structure from motion. In: IEEE International Conference on 3D Imaging, Modeling, Processing, Visualization and Transmission (3DIMPVT), Stanford, 25–28 Oct 2013. https://doi.org/10.1109/3DV.2013.25

  31. Jeong, J., Cho, Y., Shin, Y.S., Roh, H., Kim, A.: Complex urban LiDAR data set. In: 2018 IEEE International Conference on Robotics and Automation, Brisbane, 21–25 May 2018. https://doi.org/10.1109/ICRA.2018.8460834

  32. Wen, T., Xiao, Z., Wijaya, B., Jiang, K., Yang, M., Yang, D.: High precision vehicle localization based on tightly-coupled visual odometry and vector HD map. In: IEEE Intelligent Vehicles Symposium, Las Vegas, 23–26 June 2020. https://doi.org/10.1109/IV47402.2020.9304659

Download references

Acknowledgements

This work was supported in part by the National Natural Science Foundation of China (U1864203, 61773234, and 52102464), in part by the China Postdoctoral Science Foundation (2019M660622), and in part by the International Science and Technology Cooperation Program of China (2019YFE0100200).

Author information

Corresponding authors

Correspondence to Mengmeng Yang or Diange Yang.

Ethics declarations

Conflict of interest

The authors declare that they have no conflict of interest.

About this article

Cite this article

Wijaya, B., Jiang, K., Yang, M. et al. Crowdsourced Road Semantics Mapping Based on Pixel-Wise Confidence Level. Automot. Innov. 5, 43–56 (2022). https://doi.org/10.1007/s42154-021-00173-x

Keywords

Navigation