A Survey on Pedestrian Detection: Towards Integrating Vulnerable Road Users into Sensor Networks

  • Conference paper
  • Published in: Real-time and Autonomous Systems 2022 (Real-Time 2022)
  • Part of the book series: Lecture Notes in Networks and Systems (LNNS, volume 674)


Abstract

This paper provides an overview of fundamental and recent 2D and 3D pedestrian detection methods. It is part of an ongoing investigation into integrating vulnerable road users into sensor networks. In addition to sensor-specific object detection methods based on LIDAR sensors, RADAR sensors, thermal imaging cameras, RGBD cameras, and RGB cameras, a selection of sensor fusion methods is presented. The methods presented have been developed to increase traffic safety for vulnerable road users, a group that includes pedestrians and cyclists.



Author information

Corresponding author: Maximilian De Muirier


Copyright information

© 2023 The Author(s), under exclusive license to Springer Nature Switzerland AG

About this paper


Cite this paper

De Muirier, M., Pareigis, S., Tiedemann, T. (2023). A Survey on Pedestrian Detection: Towards Integrating Vulnerable Road Users into Sensor Networks. In: Unger, H., Schaible, M. (eds) Real-time and Autonomous Systems 2022. Real-Time 2022. Lecture Notes in Networks and Systems, vol 674. Springer, Cham. https://doi.org/10.1007/978-3-031-32700-1_10
