NeFSAC: Neurally Filtered Minimal Samples

Conference paper
Computer Vision – ECCV 2022 (ECCV 2022)

Abstract

Since RANSAC was introduced, a great deal of research has been devoted to improving both its accuracy and run-time. Still, only a few methods aim at recognizing invalid minimal samples early, before the often expensive model estimation and quality calculation are done. To this end, we propose NeFSAC, an efficient algorithm for the neural filtering of motion-inconsistent and poorly conditioned minimal samples. We train NeFSAC to predict the probability that a minimal sample leads to an accurate relative pose, based only on the pixel coordinates of the image correspondences. Our neural filtering model learns the typical motion patterns of samples that lead to unstable poses, as well as regularities in the possible motions, so as to favour well-conditioned and likely-correct samples. The novel lightweight architecture implements the main invariants of minimal samples for pose estimation, and a novel training scheme addresses the problem of extreme class imbalance. NeFSAC can be plugged into any existing RANSAC-based pipeline. We integrate it into USAC and show that it consistently provides strong speed-ups even under extreme train-test domain gaps – for example, a model trained for the autonomous driving scenario works on PhotoTourism too. We tested NeFSAC on more than 100k image pairs from three publicly available real-world datasets and found that it yields a one-order-of-magnitude speed-up while often finding more accurate results than USAC alone. The source code is available at https://github.com/cavalli1234/NeFSAC.
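
To make the integration point concrete, the sketch below shows where such a pre-filter sits in a plain RANSAC loop. This is an illustrative outline only, not the authors' implementation: filter_net stands in for the learned scoring network, and solver, score_fn, and the rejection threshold keep_prob are hypothetical placeholders.

    import numpy as np

    def ransac_with_sample_filter(matches, solver, score_fn, filter_net,
                                  sample_size=5, iters=1000, keep_prob=0.5):
        """RANSAC loop with a learned minimal-sample pre-filter (sketch).

        matches: (N, 4) array of correspondences (x1, y1, x2, y2) in pixels.
        filter_net maps a minimal sample to the predicted probability that
        it yields an accurate model; low-scoring samples are rejected before
        the comparatively expensive solver and quality calculation are run.
        """
        rng = np.random.default_rng(0)
        best_model, best_score = None, -np.inf
        for _ in range(iters):
            idx = rng.choice(len(matches), size=sample_size, replace=False)
            sample = matches[idx]
            if filter_net(sample) < keep_prob:    # cheap neural rejection test
                continue                          # skip model estimation entirely
            for model in solver(sample):          # minimal solvers may return several models
                score = score_fn(model, matches)  # e.g. inlier count
                if score > best_score:
                    best_model, best_score = model, score
        return best_model

In a USAC-style pipeline, the same test would be applied inside the sampler, so that a rejected sample costs only one forward pass of the lightweight network rather than a full solver call and consensus scoring.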



Acknowledgments

This work was supported by the ETH Zurich Postdoctoral Fellowship and the Google Focused Research Award.

Author information


Correspondence to Luca Cavalli.


Electronic supplementary material

Below is the link to the electronic supplementary material.

Supplementary material 1 (PDF 152 KB)


Copyright information

© 2022 The Author(s), under exclusive license to Springer Nature Switzerland AG

About this paper


Cite this paper

Cavalli, L., Pollefeys, M., Barath, D. (2022). NeFSAC: Neurally Filtered Minimal Samples. In: Avidan, S., Brostow, G., Cissé, M., Farinella, G.M., Hassner, T. (eds) Computer Vision – ECCV 2022. ECCV 2022. Lecture Notes in Computer Science, vol 13692. Springer, Cham. https://doi.org/10.1007/978-3-031-19824-3_21


  • DOI: https://doi.org/10.1007/978-3-031-19824-3_21


  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-031-19823-6

  • Online ISBN: 978-3-031-19824-3

  • eBook Packages: Computer Science, Computer Science (R0)
