Video-Based Traffic Flow Analysis for Turning Volume Estimation at Signalized Intersections

  • Conference paper
  • In: Intelligent Information and Database Systems (ACIIDS 2020)

Part of the book series: Lecture Notes in Computer Science (LNAI, volume 12034)

Abstract

Traffic flow analysis in complex areas (e.g., intersections and roundabouts) plays an important part in the development of intelligent transportation systems. Among several methods for analyzing traffic flow, image and video processing has emerged as a promising approach for extracting vehicle movements in urban areas. In this regard, this study develops a traffic flow analysis method that extracts traffic information from video surveillance (CCTV) footage for turning volume estimation at complex intersections, using advanced computer vision techniques. Specifically, state-of-the-art methods such as YOLO and DeepSORT are employed for the detection, tracking, and counting of vehicles in order to estimate road traffic density. For the experiment, we collected CCTV data from an urban area over one day to evaluate our method. The evaluation shows promising results in terms of detecting, tracking, and counting vehicles from monocular videos.
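As an illustration of the counting step described in the abstract, the sketch below shows how turning movements could be tallied from tracker output: each vehicle trajectory (e.g., YOLOv3 detections linked into tracks by DeepSORT) is mapped to the approach zone where it first appears and the zone where it last appears, and that origin-destination pair is counted. The zone layout, the Track structure, and all coordinates are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch of turning-movement counting from vehicle trajectories.
# Detection/tracking (e.g., YOLOv3 + DeepSORT) is assumed to have produced
# per-vehicle centroid tracks already; everything below is illustrative.
from collections import Counter
from dataclasses import dataclass
from typing import Dict, List, Optional, Tuple

Point = Tuple[float, float]
Rect = Tuple[float, float, float, float]  # (x_min, y_min, x_max, y_max), pixels


@dataclass
class Track:
    track_id: int
    centroids: List[Point]  # per-frame bounding-box centres from the tracker


def zone_of(point: Point, zones: Dict[str, Rect]) -> Optional[str]:
    """Return the name of the approach zone containing the point, if any."""
    x, y = point
    for name, (x0, y0, x1, y1) in zones.items():
        if x0 <= x <= x1 and y0 <= y <= y1:
            return name
    return None


def first_zone(points, zones: Dict[str, Rect]) -> Optional[str]:
    """First approach zone a sequence of centroids passes through."""
    for p in points:
        z = zone_of(p, zones)
        if z is not None:
            return z
    return None


def count_turning_movements(tracks: List[Track], zones: Dict[str, Rect]) -> Counter:
    """Count origin -> destination movements, e.g. ('north', 'east') for one turn."""
    counts: Counter = Counter()
    for track in tracks:
        origin = first_zone(track.centroids, zones)
        dest = first_zone(reversed(track.centroids), zones)
        if origin and dest and origin != dest:
            counts[(origin, dest)] += 1
    return counts


if __name__ == "__main__":
    # Hypothetical approach zones for a four-leg intersection seen by one camera.
    zones = {
        "north": (400, 0, 800, 150),
        "south": (400, 570, 800, 720),
        "east": (1100, 200, 1280, 500),
        "west": (0, 200, 180, 500),
    }
    # Two synthetic trajectories standing in for tracker output.
    tracks = [
        Track(1, [(600, 50), (640, 300), (1150, 350)]),  # north -> east
        Track(2, [(100, 300), (640, 350), (600, 700)]),  # west -> south
    ]
    print(count_turning_movements(tracks, zones))
```

In a full pipeline, such counts would be accumulated per time interval or signal cycle to produce the turning volumes the paper estimates.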

Notes

  1. https://keras.io/
  2. http://www.skymaps.co.kr/
  3. https://github.com/qqwweee/keras-yolo3

Acknowledgment

This work was partly supported by Institute for Information & communications Technology Promotion (IITP) grant funded by the Korea government (MSIT) (No. 2018-0-00494, Development of deep learning-based urban traffic congestion prediction and signal control solution system) and Korea Institute of Science and Technology Information (KISTI) grant funded by the Korea government (MSIT) (K-19-L02-C07-S01).

Author information

Corresponding author: Hongsuk Yi.

Copyright information

© 2020 Springer Nature Switzerland AG

About this paper

Cite this paper

Bui, K.H.N., Yi, H., Jung, H., Cho, J. (2020). Video-Based Traffic Flow Analysis for Turning Volume Estimation at Signalized Intersections. In: Nguyen, N., Jearanaitanakij, K., Selamat, A., Trawiński, B., Chittayasothorn, S. (eds) Intelligent Information and Database Systems. ACIIDS 2020. Lecture Notes in Computer Science (LNAI), vol 12034. Springer, Cham. https://doi.org/10.1007/978-3-030-42058-1_13

  • DOI: https://doi.org/10.1007/978-3-030-42058-1_13

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-030-42057-4

  • Online ISBN: 978-3-030-42058-1

  • eBook Packages: Computer Science, Computer Science (R0)
