Intelligent Safety Decision-Making for Autonomous Vehicle in Highway Environment

  • Conference paper
Intelligent Robotics and Applications (ICIRA 2021)

Part of the book series: Lecture Notes in Computer Science (LNAI, volume 13016)

Abstract

A safe driving policy is the key to realizing adaptive cruise control for autonomous vehicles in highway environments. In this paper, reinforcement learning is applied to decision-making for autonomous driving. To address the difficulty that existing reinforcement learning methods have in handling the randomness and uncertainty of the driving environment, a model-free method for analyzing Lyapunov stability and H∞ performance is incorporated into the Actor-Critic algorithm, improving the stability and robustness of the learned policy. The safety of each candidate action is judged against a safety threshold, which improves the safety of behavioral decisions. We also design a set of reward functions tailored to the safety and efficiency requirements of driving decisions in the highway environment. The results show that the method provides safe driving strategies for driverless vehicles both under normal road conditions and in environments with unexpected situations, enabling the vehicles to drive safely.
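The safety-threshold filtering described in the abstract can be sketched as follows. This is a minimal illustration, not the paper's actual formulation: the action names, the scalar "safety score" per action, and the fallback rule are all assumptions introduced here for clarity. The idea is that a safety critic scores each discrete action, actions below the threshold are masked out, and the actor's policy samples only among the remaining safe actions.

```python
import random

def select_safe_action(actions, policy_probs, safety_score, threshold):
    """Filter candidate actions by a safety score, then sample from the actor.

    actions:      list of discrete driving actions (illustrative names)
    policy_probs: dict action -> probability from the actor network
    safety_score: dict action -> estimated safety value from a safety critic
    threshold:    minimum safety score an action must reach to be considered
    All names and values here are hypothetical, not the paper's notation.
    """
    safe = [a for a in actions if safety_score[a] >= threshold]
    if not safe:
        # No action clears the threshold: fall back to the single safest one.
        return max(actions, key=lambda a: safety_score[a])
    # Sample among safe actions, weighted by the actor's probabilities.
    weights = [policy_probs[a] for a in safe]
    return random.choices(safe, weights=weights, k=1)[0]

actions = ["keep_lane", "change_left", "accelerate", "brake"]
probs = {"keep_lane": 0.4, "change_left": 0.3, "accelerate": 0.2, "brake": 0.1}
safety = {"keep_lane": 0.9, "change_left": 0.2, "accelerate": 0.1, "brake": 0.95}

action = select_safe_action(actions, probs, safety, threshold=0.5)
# Only "keep_lane" and "brake" clear the 0.5 threshold in this toy example.
```

In a full agent the safety scores would come from a learned critic rather than a fixed table; the fallback to the single safest action is one simple way to keep the agent acting even when nothing passes the threshold.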



Author information


Corresponding author

Correspondence to Zhongli Wang.


Copyright information

© 2021 Springer Nature Switzerland AG

About this paper

Cite this paper

Jiang, Z., Wang, Z., Cui, X., Zheng, C. (2021). Intelligent Safety Decision-Making for Autonomous Vehicle in Highway Environment. In: Liu, X.-J., Nie, Z., Yu, J., Xie, F., Song, R. (eds) Intelligent Robotics and Applications. ICIRA 2021. Lecture Notes in Computer Science, vol 13016. Springer, Cham. https://doi.org/10.1007/978-3-030-89092-6_64

  • DOI: https://doi.org/10.1007/978-3-030-89092-6_64

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-030-89091-9

  • Online ISBN: 978-3-030-89092-6

  • eBook Packages: Computer Science, Computer Science (R0)
