Accelerating Classification of Symbolic Road Markings (SRMs) in Autonomous Cars Through Computer Vision-Based Machine Learning

Chapter in: Internet of Unmanned Things (IoUT) and Mission-based Networking

Part of the book series: Internet of Things (ITTCC)


Abstract

Road markings are an essential part of safe driving: they serve as landmarks that guide drivers. Developing a robust road-marking interpretation system is challenging because of changing light conditions, varying weather, shadows, and faded signs and text. This chapter investigates the use of deep learning methods, in particular convolutional neural networks (CNNs), to classify symbolic road markings. Techniques previously reported in the literature are predominantly based on feature extraction and template matching, which restricts their use in real time. For autonomous vehicles, however, road markings must be interpreted in real time so that timely decisions can be made. This chapter therefore investigates and presents CNN-based image preprocessing methods for detecting road markings. Several CNN architectures with multiple convolutional, max-pooling, and fully connected layers were investigated. The chapter contributes a model with low computational requirements, which is essential for autonomous vehicles, and further explores established image preprocessing methods such as grayscaling, the top-hat transform, and Otsu's method. The performance of the proposed road-marking detector is benchmarked on a public dataset of labeled road-marking images.
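The preprocessing chain named in the abstract (grayscaling, the top-hat transform, and Otsu's method) can be sketched in pure NumPy. This is a minimal illustration, not the chapter's implementation: the luma weights and kernel size below are common defaults, chosen here as assumptions.

```python
import numpy as np
from numpy.lib.stride_tricks import sliding_window_view

def to_grayscale(rgb):
    """Luma grayscale conversion (ITU-R BT.601 weights)."""
    return (rgb @ np.array([0.299, 0.587, 0.114])).astype(np.uint8)

def _local(img, size, reduce_fn):
    """Min/max filter over a square window of the given size, edge-padded."""
    pad = size // 2
    padded = np.pad(img, pad, mode="edge")
    return reduce_fn(sliding_window_view(padded, (size, size)), axis=(-2, -1))

def top_hat(gray, size=15):
    """White top-hat: image minus its morphological opening (erosion, then
    dilation). Keeps bright structures, such as painted road markings,
    narrower than the kernel, and suppresses the slowly varying background."""
    opened = _local(_local(gray, size, np.min), size, np.max)
    diff = gray.astype(np.int16) - opened.astype(np.int16)
    return np.clip(diff, 0, 255).astype(np.uint8)

def otsu_threshold(gray):
    """Otsu's method: choose the threshold maximising between-class variance."""
    prob = np.bincount(gray.ravel(), minlength=256) / gray.size
    omega = np.cumsum(prob)                 # class-0 (background) probability
    mu = np.cumsum(prob * np.arange(256))   # class-0 cumulative mean
    denom = omega * (1.0 - omega)
    denom[denom == 0] = np.nan              # ignore thresholds with an empty class
    sigma_b = (mu[-1] * omega - mu) ** 2 / denom
    return int(np.nanargmax(sigma_b))
```

Applied in sequence (grayscale, then top-hat, then Otsu binarisation), this yields a binary mask in which bright painted symbols stand out from the road surface, a typical input for a downstream CNN classifier.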
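The layer types the abstract lists (convolutional, max-pooling, fully connected) can be illustrated with a toy NumPy forward pass. The filter count, input size, and ten-class output below are illustrative assumptions, not the chapter's actual architecture.

```python
import numpy as np
from numpy.lib.stride_tricks import sliding_window_view

def conv2d(x, w, b):
    """Valid convolution of a single-channel image x (H, W) with filters w (F, k, k)."""
    win = sliding_window_view(x, w.shape[1:])       # (H-k+1, W-k+1, k, k)
    return np.einsum("ijkl,fkl->fij", win, w) + b[:, None, None]

def relu(x):
    return np.maximum(x, 0.0)

def max_pool2(x):
    """2x2 max pooling over feature maps x (F, H, W); H and W must be even."""
    f, h, w = x.shape
    return x.reshape(f, h // 2, 2, w // 2, 2).max(axis=(2, 4))

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def forward(x, params):
    """conv -> ReLU -> max pool -> flatten -> fully connected -> softmax."""
    h = max_pool2(relu(conv2d(x, params["w1"], params["b1"])))
    return softmax(h.ravel() @ params["w2"] + params["b2"])

# Illustrative shapes: 32x32 grayscale input, 8 filters of 3x3, 10 classes.
# Conv output 30x30, pooled to 15x15, so the flattened vector has 8*15*15 entries.
rng = np.random.default_rng(0)
params = {
    "w1": rng.normal(0, 0.1, (8, 3, 3)),
    "b1": np.zeros(8),
    "w2": rng.normal(0, 0.1, (8 * 15 * 15, 10)),
    "b2": np.zeros(10),
}
probs = forward(rng.random((32, 32)), params)   # class probabilities, sum to 1
```

A practical classifier would stack several such conv/pool stages and learn the weights by backpropagation; the sketch only shows how the named layer types compose.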


References

  1. R. Girshick, J. Donahue, T. Darrell, J. Malik, Rich feature hierarchies for accurate object detection and semantic segmentation, in 2014 IEEE Conference on Computer Vision and Pattern Recognition, (2014), pp. 580–587. https://doi.org/10.1109/CVPR.2014.81

  2. R. Girshick, Fast R-CNN, in 2015 IEEE International Conference on Computer Vision (ICCV), (2015), pp. 1440–1448. https://doi.org/10.1109/ICCV.2015.169

  3. S. Ren, K. He, R. Girshick, J. Sun, Faster R-CNN: Towards real-time object detection with region proposal networks. IEEE Trans. Pattern Anal. Mach. Intell. 39(6), 1137–1149 (2017)


  4. T.-Y. Lin, P. Dollár, R. Girshick, K. He, B. Hariharan, S. Belongie, Feature pyramid networks for object detection, in 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), (2017), pp. 936–944. https://doi.org/10.1109/CVPR.2017.106

  5. K. He, G. Gkioxari, P. Dollár, R. Girshick, Mask R-CNN, in 2017 IEEE International Conference on Computer Vision (ICCV), (2017), pp. 2980–2988. https://doi.org/10.1109/ICCV.2017.322

  6. W. Liu et al., SSD: Single shot MultiBox detector, in Computer Vision – ECCV 2016, ed. by B. Leibe, J. Matas, N. Sebe, M. Welling. Lecture Notes in Computer Science, vol. 9905 (Springer, Cham, 2016). https://doi.org/10.1007/978-3-319-46448-0_2

  7. J. Redmon, A. Farhadi, YOLOv3: An incremental improvement. arXiv:1804.02767 (2018), [online] Available: https://arxiv.org/abs/1804.02767

  8. Cambridge-Driving Labeled Video Database (CamVid), 2018, [online] Available: http://mi.eng.cam.ac.uk/research/projects/VideoRec/CamVid/

  9. Daimler Urban Segmentation Dataset, 2019, [online] Available: http://www.6d-vision.com/scene-labeling

  10. The Málaga Stereo and Laser Urban Data Set—MRPT, 2018, [online] Available: https://www.mrpt.org/MalagaUrbanDataset

  11. A. Geiger, P. Lenz, C. Stiller, R. Urtasun, Vision meets robotics: The KITTI dataset. Int. J. Robot. Res. 32(11) (2013)


  12. T.-Y. Lin, P. Goyal, R. Girshick, K. He, P. Dollár, Focal loss for dense object detection. Proc. IEEE Int. Conf. Comput. Vis., 2999–3007 (2017)


  13. J. Greenhalgh, M. Mirmehdi, Automatic detection and recognition of symbols and text on the road surface, in Pattern Recognition: Applications and Methods, ICPRAM 2015. Lecture Notes in Computer Science, ed. by A. Fred, M. De Marsico, M. Figueiredo, vol. 9493, (Springer, Cham, 2015). https://doi.org/10.1007/978-3-319-27677-9_8


  14. T.M. Hoang, S.H. Nam, K.R. Park, Enhanced detection and recognition of road markings based on adaptive region of interest and deep learning. IEEE Access 7, 109817–109832 (2019). https://doi.org/10.1109/ACCESS.2019.2933598


  15. R. Grompone von Gioi, J. Jakubowicz, J. Morel, G. Randall, LSD: A fast line segment detector with a false detection control. IEEE Trans. Pattern Anal. Mach. Intell. 32(4), 722–732 (2010). https://doi.org/10.1109/TPAMI.2008.300


  16. X. Lu, J. Yao, K. Li, L. Li, CannyLines: A parameter-free line segment detector, in 2015 IEEE International Conference on Image Processing (ICIP), Quebec City, QC, (2015), pp. 507–511. https://doi.org/10.1109/ICIP.2015.7350850


  17. T. Ahmad, D. Ilstrup, E. Emami, G. Bebis, Symbolic road marking recognition using convolutional neural networks, in 2017 IEEE Intelligent Vehicles Symposium (IV), Los Angeles, CA, (2017), pp. 1428–1433. https://doi.org/10.1109/IVS.2017.7995910

  18. Y. LeCun, L. Bottou, Y. Bengio, P. Haffner, Gradient-based learning applied to document recognition. Proc. IEEE 86(11), 2278–2324 (1998)


  19. Z. Ouyang, J. Niu, Y. Liu, M. Guizani, Deep CNN-based real-time traffic light detector for self-driving vehicles. IEEE Trans. Mob. Comput. 19(2), 300–313 (2020). https://doi.org/10.1109/TMC.2019.2892451

  20. T. Wu, A. Ranganathan, A practical system for road marking detection and recognition, in 2012 IEEE Intelligent Vehicles Symposium, Alcala de Henares, (2012), pp. 25–30. https://doi.org/10.1109/IVS.2012.6232144


  21. D. Suarez-Mash, A. Ghani, C.H. See, S. Keates, H. Yu, Using deep neural networks to classify symbolic road markings for autonomous vehicles. EAI Endorsed Trans. Ind. Netw. Intell. Syst. 9(31), e2 (2022). https://doi.org/10.4108/eetinis.v9i31.985


  22. A. Ghani, R. Hodeify, C.H. See, S. Keates, D.-J. Lee, A. Bouridane, Computer vision-based Kidney’s (HK-2) damaged cells classification with reconfigurable hardware accelerator (FPGA). Electronics 11, 4234 (2022). https://doi.org/10.3390/electronics11244234



Acknowledgments

The author would like to thank the M.Sc. thesis student(s) for contributing to the experimental simulations conducted in the IoT lab within the School of Computing, Electronics and Maths, Coventry University, UK.

Disclaimer

This is an original work conducted and supervised by Dr. Arfan Ghani at Coventry University, UK. This book chapter contains at least 20–30% unpublished work. Readers and researchers of this chapter are referred to the author’s previously published work [21, 22], where some of the methods and techniques are further elaborated.

Author information


Correspondence to Arfan Ghani.


Copyright information

© 2023 The Author(s), under exclusive license to Springer Nature Switzerland AG

About this chapter


Cite this chapter

Ghani, A., Iqbal, R. (2023). Accelerating Classification of Symbolic Road Markings (SRMs) in Autonomous Cars Through Computer Vision-Based Machine Learning. In: Kerrache, C.A., Calafate, C., Lakas, A., Lahby, M. (eds) Internet of Unmanned Things (IoUT) and Mission-based Networking. Internet of Things. Springer, Cham. https://doi.org/10.1007/978-3-031-33494-8_6


  • DOI: https://doi.org/10.1007/978-3-031-33494-8_6


  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-031-33493-1

  • Online ISBN: 978-3-031-33494-8

  • eBook Packages: Engineering; Engineering (R0)
