Abstract
This paper provides an overview of fundamental and recent 2D and 3D pedestrian detection methods. It is part of an ongoing investigation into integrating vulnerable road users into sensor networks. In addition to sensor-specific object detection methods based on LIDAR sensors, RADAR sensors, thermal imaging cameras, RGB-D cameras, and RGB cameras, a selection of sensor fusion methods is presented. The methods presented were developed to increase traffic safety for vulnerable road users, a group that includes pedestrians and cyclists.
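As an illustration of the camera-based 2D detection methods the survey covers, the following sketch runs a pretrained Faster R-CNN detector from torchvision on a single RGB image and keeps only "person" detections. This is a minimal sketch, not code from the paper: the model choice, the input file name, and the confidence threshold are illustrative assumptions.

```python
# Minimal sketch: 2D pedestrian detection on an RGB image with a pretrained
# Faster R-CNN from torchvision. Model, file name, and threshold are
# illustrative assumptions, not taken from the paper.
import torch
import torchvision
from torchvision.transforms.functional import to_tensor
from PIL import Image

# Load a detector pretrained on COCO (label id 1 is "person").
model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT")
model.eval()

image = Image.open("street_scene.jpg").convert("RGB")  # hypothetical input

with torch.no_grad():
    prediction = model([to_tensor(image)])[0]

PERSON = 1  # COCO label id for "person"
for box, label, score in zip(prediction["boxes"],
                             prediction["labels"],
                             prediction["scores"]):
    if label.item() == PERSON and score.item() >= 0.5:
        x1, y1, x2, y2 = box.tolist()
        print(f"pedestrian at ({x1:.0f}, {y1:.0f}, {x2:.0f}, {y2:.0f}), "
              f"score {score.item():.2f}")
```

A 3D detector operating on LIDAR point clouds would follow the same predict-then-filter pattern, only with point cloud input instead of an image tensor.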
Copyright information
© 2023 The Author(s), under exclusive license to Springer Nature Switzerland AG
About this paper
Cite this paper
De Muirier, M., Pareigis, S., Tiedemann, T. (2023). A Survey on Pedestrian Detection: Towards Integrating Vulnerable Road Users into Sensor Networks. In: Unger, H., Schaible, M. (eds) Real-time and Autonomous Systems 2022. Real-Time 2022. Lecture Notes in Networks and Systems, vol 674. Springer, Cham. https://doi.org/10.1007/978-3-031-32700-1_10
DOI: https://doi.org/10.1007/978-3-031-32700-1_10
Publisher Name: Springer, Cham
Print ISBN: 978-3-031-32699-8
Online ISBN: 978-3-031-32700-1
eBook Packages: Intelligent Technologies and Robotics