Abstract
Autonomous driving simulations require highly realistic images. Our preliminary study found that translating CARLA Simulator images toward real-world appearance with DCLGAN improved the performance of a lane recognition model to levels comparable to real-world driving; the vehicle's ability to return to the lane center after deviating also improved significantly. However, there is currently no agreed-upon metric for quantitatively evaluating the realism of simulation images. Building on the idea behind FID (Fréchet Inception Distance), which measures the distance between feature distributions extracted by a pre-trained model, this paper proposes a metric that measures the similarity of simulated road images using the attention maps produced during the self-attention distillation process of ENet-SAD. Finally, we verified the suitability of the proposed metric by applying it to images from a CARLA map that reproduces a real-world autonomous driving test road.
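The metric sketched in the abstract compares distributions derived from lane-attention maps rather than raw pixels. As an illustration of the general approach (not the authors' implementation), the following minimal NumPy sketch treats an attention map as a 2D array of activations in [0, 1], builds a normalized histogram for each map, and scores similarity with a Pearson correlation, mirroring OpenCV's `HISTCMP_CORREL` mode from the cited histogram-comparison tutorial; the toy `real`/`sim` arrays are hypothetical stand-ins for ENet-SAD attention maps of a real road and its simulated counterpart.

```python
import numpy as np

def attention_histogram(attn_map, bins=64):
    """Flatten a 2D attention map and return its normalized histogram."""
    hist, _ = np.histogram(attn_map.ravel(), bins=bins, range=(0.0, 1.0))
    hist = hist.astype(np.float64)
    return hist / hist.sum()

def histogram_correlation(h1, h2):
    """Pearson correlation of two histograms (analogue of OpenCV HISTCMP_CORREL)."""
    d1, d2 = h1 - h1.mean(), h2 - h2.mean()
    denom = np.sqrt((d1 ** 2).sum() * (d2 ** 2).sum())
    return float((d1 * d2).sum() / denom) if denom > 0 else 1.0

# Toy stand-ins: a "real-road" attention map and a noisy "simulated" version of it.
rng = np.random.default_rng(0)
real = rng.beta(2, 5, size=(64, 64))
sim = np.clip(real + rng.normal(0.0, 0.05, real.shape), 0.0, 1.0)

# Scores near 1.0 indicate similar attention distributions, i.e. a small Sim2Real gap.
score = histogram_correlation(attention_histogram(real), attention_histogram(sim))
```

In the paper's setting the two inputs would be attention maps extracted by ENet-SAD from a real test-road image and from the corresponding CARLA rendering, and a higher similarity would indicate a smaller Sim2Real gap.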
Seongjeong Park and J. Pahk contributed equally to this work
References
Jeon, H., et al.: CARLA simulator-based evaluation framework development of lane detection accuracy performance under sensor blockage caused by heavy rain for autonomous vehicle. IEEE Robot. Autom. Lett. 7(4), 9977–9984 (2022). https://doi.org/10.1109/LRA.2022.3192632
Dosovitskiy, A., Ros, G., Codevilla, F., Lopez, A., Koltun, V.: CARLA: an open urban driving simulator. In: 1st Annual Conference on Robot Learning (CoRL) (2017)
Pahk, J., Shim, J., Baek, M., Lim, Y., Choi, G.: Effects of Sim2Real image translation via DCLGAN on lane keeping assist system in CARLA simulator. IEEE Access (2023). https://doi.org/10.1109/ACCESS.2023.3262991
Hou, Y., Ma, Z., Liu, C., Loy, C.C.: Learning lightweight lane detection CNNs by self-attention distillation. In: 2019 IEEE/CVF International Conference on Computer Vision (ICCV) (2019)
Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: GANs trained by a two time-scale update rule converge to a local Nash equilibrium. In: 31st International Conference on Neural Information Processing Systems (NIPS) (2017)
Selvaraju, R.R., Cogswell, M., Das, A., Vedantam, R., Parikh, D., Batra, D.: Grad-CAM: visual explanations from deep networks via gradient-based localization. In: 2017 IEEE International Conference on Computer Vision (ICCV), pp. 618–626. Venice, Italy (2017). https://doi.org/10.1109/ICCV.2017.74
OpenCV: Histogram comparison. OpenCV documentation. https://docs.opencv.org/3.4/d8/dc8/tutorial_histogram_comparison.html. Accessed 5 Apr 2023
Acknowledgements
This work was supported by the Korea Agency for Infrastructure Technology Advancement (KAIA) grant funded by the Ministry of Land, Infrastructure and Transport (21AMDP-C162419-01) and also supported by the DGIST R&D Program of the Ministry of Science and ICT of Korea (23-IT-03).
Copyright information
© 2024 The Author(s), under exclusive license to Springer Nature Switzerland AG
Cite this paper
Park, S., Pahk, J., Jahn, L.L.F., Lim, Y., An, J., Choi, G. (2024). A Study on Quantifying Sim2Real Image Gap in Autonomous Driving Simulations Using Lane Segmentation Attention Map Similarity. In: Lee, SG., An, J., Chong, N.Y., Strand, M., Kim, J.H. (eds) Intelligent Autonomous Systems 18. IAS 2023. Lecture Notes in Networks and Systems, vol 795. Springer, Cham. https://doi.org/10.1007/978-3-031-44851-5_16
Publisher Name: Springer, Cham
Print ISBN: 978-3-031-44850-8
Online ISBN: 978-3-031-44851-5
eBook Packages: Intelligent Technologies and Robotics (R0)