Radatron: Accurate Detection Using Multi-resolution Cascaded MIMO Radar

Conference paper, Computer Vision – ECCV 2022 (ECCV 2022).

Part of the book series: Lecture Notes in Computer Science (LNCS, volume 13699)

Abstract

Millimeter wave (mmWave) radars are becoming a more popular sensing modality in self-driving cars due to their favorable characteristics in adverse weather. Yet, they currently lack sufficient spatial resolution for semantic scene understanding. In this paper, we present Radatron, a system capable of accurate object detection using mmWave radar as a stand-alone sensor. To enable Radatron, we introduce a first-of-its-kind, high-resolution automotive radar dataset collected with a cascaded MIMO (Multiple Input Multiple Output) radar. Our radar achieves 5 cm range resolution and 1.2° angular resolution, 10× finer than other publicly available datasets. We also develop a novel hybrid radar processing and deep learning approach to achieve high vehicle detection accuracy. We train and extensively evaluate Radatron to show it achieves 92.6% AP50 and 56.3% AP75 accuracy in 2D bounding box detection, an 8% and 15.9% improvement over prior art, respectively. Code and dataset are available at https://jguan.page/Radatron/.
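As a sanity check on the specifications quoted in the abstract, the standard FMCW formulas relate range resolution to sweep bandwidth and angular resolution to aperture size. The sketch below is illustrative only: the ~3 GHz bandwidth and the 86-element half-wavelength virtual array are assumptions typical of 77 GHz cascaded automotive radar evaluation hardware, not figures stated in the abstract.

```python
# Back-of-the-envelope check of the radar specs quoted in the abstract.
# Assumptions (not from the paper): ~3 GHz FMCW sweep bandwidth and a
# uniform virtual array of 86 half-wavelength elements.
import math

C = 3e8  # speed of light, m/s

def range_resolution(bandwidth_hz: float) -> float:
    """FMCW range resolution: c / (2 * B), in meters."""
    return C / (2 * bandwidth_hz)

def angular_resolution_deg(num_elements: int, spacing_wavelengths: float = 0.5) -> float:
    """Approximate broadside beamwidth of a uniform linear array:
    lambda / (N * d) radians, with element spacing d in wavelengths."""
    return math.degrees(1.0 / (num_elements * spacing_wavelengths))

print(range_resolution(3e9))        # 0.05 m, i.e. the 5 cm quoted
print(angular_resolution_deg(86))   # ~1.33 degrees, close to the 1.2 quoted
```

Under these assumed parameters, the formulas land at 5 cm range resolution and roughly 1.3° angular resolution, consistent with the numbers in the abstract.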

S. Madani, J. Guan, and W. Ahmed contributed equally.



Notes

  1. We describe the virtual antenna array emulation in the supplementary material.


Author information

Correspondence to Sohrab Madani.

Electronic supplementary material

Supplementary material 1 (PDF, 10371 KB)


Copyright information

© 2022 The Author(s), under exclusive license to Springer Nature Switzerland AG

About this paper


Cite this paper

Madani, S., Guan, J., Ahmed, W., Gupta, S., Hassanieh, H. (2022). Radatron: Accurate Detection Using Multi-resolution Cascaded MIMO Radar. In: Avidan, S., Brostow, G., Cissé, M., Farinella, G.M., Hassner, T. (eds) Computer Vision – ECCV 2022. ECCV 2022. Lecture Notes in Computer Science, vol 13699. Springer, Cham. https://doi.org/10.1007/978-3-031-19842-7_10

  • DOI: https://doi.org/10.1007/978-3-031-19842-7_10

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-031-19841-0

  • Online ISBN: 978-3-031-19842-7

  • eBook Packages: Computer Science (R0)
