Abstract
Since RANSAC, a great deal of research has been devoted to improving both its accuracy and run-time. Still, only a few methods aim at recognizing invalid minimal samples early, before the often expensive model estimation and quality calculation are done. To this end, we propose NeFSAC, an efficient algorithm for the neural filtering of motion-inconsistent and poorly conditioned minimal samples. We train NeFSAC to predict the probability that a minimal sample leads to an accurate relative pose, based only on the pixel coordinates of the image correspondences. Our neural filtering model learns the typical motion patterns of samples that lead to unstable poses, as well as regularities in the possible motions, to favour well-conditioned and likely-correct samples. The novel lightweight architecture implements the main invariants of minimal samples for pose estimation, and a novel training scheme addresses the problem of extreme class imbalance. NeFSAC can be plugged into any existing RANSAC-based pipeline. We integrate it into USAC and show that it consistently provides strong speed-ups even under extreme train-test domain gaps; for example, the model trained for the autonomous driving scenario also works on PhotoTourism. We tested NeFSAC on more than 100k image pairs from three publicly available real-world datasets and found that it yields an order-of-magnitude speed-up while often producing more accurate results than USAC alone. The source code is available at https://github.com/cavalli1234/NeFSAC.
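The core idea of the abstract, rejecting a minimal sample before any model is estimated or scored, can be sketched as a cheap pre-filtering hook inside a generic RANSAC loop. The sketch below is illustrative only: `accept_sample` stands in for a learned filter such as the NeFSAC network, and the toy line-fitting setup (`estimate_line`, `count_inliers`, `well_conditioned`) is a hypothetical stand-in for the paper's relative-pose solver, not its actual implementation.

```python
import random

def ransac_with_sample_filter(correspondences, sample_size, estimate_model,
                              score_model, accept_sample, max_iters=1000):
    """Generic RANSAC loop with an early sample-rejection hook.

    `accept_sample` sees only the minimal sample itself (e.g. pixel
    coordinates) and returns True if the sample is worth the expensive
    model estimation and scoring steps that follow.
    """
    best_model, best_score = None, float("-inf")
    for _ in range(max_iters):
        sample = random.sample(correspondences, sample_size)
        # Cheap pre-filter: skip degenerate or motion-inconsistent
        # samples before any model is estimated.
        if not accept_sample(sample):
            continue
        model = estimate_model(sample)
        if model is None:
            continue
        score = score_model(model, correspondences)
        if score > best_score:
            best_model, best_score = model, score
    return best_model, best_score

# Toy usage: fit a line y = a*x + b from 2-point minimal samples.
def estimate_line(sample):
    (x1, y1), (x2, y2) = sample
    if x1 == x2:
        return None  # degenerate sample
    a = (y2 - y1) / (x2 - x1)
    return (a, y1 - a * x1)

def count_inliers(model, pts, thr=0.1):
    a, b = model
    return sum(1 for x, y in pts if abs(a * x + b - y) < thr)

# Reject poorly conditioned samples: two points too close in x give
# an unstable slope estimate, mirroring NeFSAC's goal of favouring
# well-conditioned samples.
def well_conditioned(sample):
    return abs(sample[0][0] - sample[1][0]) > 0.5
```

In the real pipeline the filter would be a trained network evaluated in batches on candidate samples, but the control flow, rejecting samples before the solver runs, is the same.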
References
Barath, D., Chin, T.J., Chum, O., Mishkin, D., Ranftl, R., Matas, J.: RANSAC in 2020 tutorial. In: CVPR (2020). http://cmp.felk.cvut.cz/cvpr2020-ransac-tutorial/
Barath, D., Matas, J.: Graph-cut RANSAC. In: CVPR, pp. 6733–6741 (2018)
Barath, D., Noskova, J., Ivashechkin, M., Matas, J.: MAGSAC++, a fast, reliable and accurate robust estimator. In: CVPR, pp. 1304–1312 (2020)
Barath, D., Noskova, J., Matas, J.: MAGSAC: marginalizing sample consensus. In: CVPR, pp. 10197–10205 (2019). https://github.com/danini/magsac
Barath, D., Noskova, J., Matas, J.: Marginalizing sample consensus. IEEE Trans. Pattern Anal. Mach. Intell. 44(11), 8420–8432 (2021)
Bian, J., Lin, W.Y., Matsushita, Y., Yeung, S.K., Nguyen, T.D., Cheng, M.M.: GMS: grid-based motion statistics for fast, ultra-robust feature correspondence. In: CVPR, pp. 4181–4190 (2017)
Blanco-Claraco, J.L., Moreno-Duenas, F.A., González-Jiménez, J.: The Málaga urban dataset: high-rate stereo and LiDAR in a realistic urban scenario. Int. J. Robot. Res. 33(2), 207–214 (2014)
Brachmann, E., Rother, C.: Neural-guided RANSAC: learning where to sample model hypotheses. In: CVPR, pp. 4322–4331 (2019)
Brachmann, E., et al.: DSAC - differentiable RANSAC for camera localization. In: CVPR, pp. 6684–6692 (2017)
Cavalli, L., Larsson, V., Oswald, M.R., Sattler, T., Pollefeys, M.: Handcrafted outlier detection revisited. In: Vedaldi, A., Bischof, H., Brox, T., Frahm, J.-M. (eds.) ECCV 2020. LNCS, vol. 12364, pp. 770–787. Springer, Cham (2020). https://doi.org/10.1007/978-3-030-58529-7_45
Chum, O., Matas, J.: Randomized RANSAC with Td,d test. In: BMVC, vol. 2, pp. 448–457 (2002)
Chum, O., Matas, J.: Matching with PROSAC-progressive sample consensus. In: CVPR, vol. 1, pp. 220–226. IEEE (2005)
Chum, O., Matas, J.: Optimal randomized RANSAC. IEEE Trans. Pattern Anal. Mach. Intell. 30(8), 1472–1482 (2008)
Chum, O., Matas, J., Kittler, J.: Locally optimized RANSAC. In: Michaelis, B., Krell, G. (eds.) DAGM 2003. LNCS, vol. 2781, pp. 236–243. Springer, Heidelberg (2003). https://doi.org/10.1007/978-3-540-45243-0_31
Chum, O., Werner, T., Matas, J.: Two-view geometry estimation unaffected by a dominant plane. In: CVPR, vol. 1, pp. 772–779. IEEE (2005)
Ding, Y., Barath, D., Kukelova, Z.: Minimal solutions for panoramic stitching given gravity prior. In: ICCV, pp. 5579–5588 (2021)
Fischler, M.A., Bolles, R.C.: Random sample consensus: a paradigm for model fitting with applications to image analysis and automated cartography. Commun. ACM 24(6), 381–395 (1981)
Frahm, J.M., Pollefeys, M.: RANSAC for (quasi-)degenerate data (QDEGSAC). In: CVPR, vol. 1, pp. 453–460. IEEE (2006)
Geiger, A., Lenz, P., Urtasun, R.: Are we ready for autonomous driving? The KITTI vision benchmark suite. In: CVPR, pp. 3354–3361. IEEE (2012)
Hartley, R., Zisserman, A.: Multiple View Geometry in Computer Vision. Cambridge University Press, Cambridge (2003)
Ivashechkin, M., Barath, D., Matas, J.: VSAC: efficient and accurate estimator for H and F. In: ICCV, pp. 15243–15252 (2021)
Lebeda, K., Matas, J., Chum, O.: Fixing the locally optimized RANSAC. In: BMVC. Citeseer (2012)
Matas, J., Chum, O.: Randomized RANSAC with sequential probability ratio test. In: ICCV, vol. 2, pp. 1727–1732. IEEE (2005)
Moisan, L., Moulon, P., Monasse, P.: Automatic homographic registration of a pair of images, with a contrario elimination of outliers. Image Process. On Line 2, 56–73 (2012)
Moo Yi, K., Trulls, E., Ono, Y., Lepetit, V., Salzmann, M., Fua, P.: Learning to find good correspondences. In: CVPR, pp. 2666–2674 (2018)
Ni, K., Jin, H., Dellaert, F.: GroupSAC: efficient consensus in the presence of groupings. In: ICCV, pp. 2193–2200. IEEE (2009)
Qi, C.R., Su, H., Mo, K., Guibas, L.J.: PointNet: deep learning on point sets for 3D classification and segmentation. In: CVPR, pp. 652–660 (2017)
Raguram, R., Chum, O., Pollefeys, M., Matas, J., Frahm, J.M.: USAC: a universal framework for random sample consensus. IEEE Trans. Pattern Anal. Mach. Intell. 35(8), 2022–2038 (2013)
Ranftl, R., Koltun, V.: Deep fundamental matrix estimation. In: ECCV, pp. 284–299 (2018)
Sarlin, P.E., DeTone, D., Malisiewicz, T., Rabinovich, A.: SuperGlue: learning feature matching with graph neural networks. In: CVPR, pp. 4938–4947 (2020)
Snavely, N., Seitz, S.M., Szeliski, R.: Photo tourism: exploring photo collections in 3D. In: ACM SIGGRAPH 2006 Papers, pp. 835–846 (2006)
Stewart, C.V.: MINPRAN: a new robust estimator for computer vision. IEEE Trans. Pattern Anal. Mach. Intell. 17(10), 925–938 (1995)
Tong, W., Matas, J., Barath, D.: Deep MAGSAC++. arXiv preprint arXiv:2111.14093 (2021)
Torr, P.H.S.: Bayesian model estimation and selection for epipolar geometry and generic manifold fitting. Int. J. Comput. Vis. 50, 35–61 (2002). https://doi.org/10.1023/A:1020224303087
Torr, P.H.S., Zisserman, A.: MLESAC: a new robust estimator with application to estimating image geometry. Comput. Vis. Image Underst. 78(1), 138–156 (2000)
Torr, P.H., Nasuto, S.J., Bishop, J.M.: NAPSAC: high noise, high dimensional robust estimation - it's in the bag. In: BMVC, vol. 2, p. 3 (2002)
Werner, T., Pajdla, T.: Cheirality in epipolar geometry. In: ICCV, vol. 1, pp. 548–553. IEEE (2001)
Zhang, J., et al.: Learning two-view correspondences and geometry using order-aware network. In: CVPR, pp. 5845–5854 (2019)
Acknowledgments
This work was supported by the ETH Zurich Postdoctoral Fellowship and the Google Focused Research Award.
Copyright information
© 2022 The Author(s), under exclusive license to Springer Nature Switzerland AG
Cite this paper
Cavalli, L., Pollefeys, M., Barath, D. (2022). NeFSAC: Neurally Filtered Minimal Samples. In: Avidan, S., Brostow, G., Cissé, M., Farinella, G.M., Hassner, T. (eds) Computer Vision – ECCV 2022. ECCV 2022. Lecture Notes in Computer Science, vol 13692. Springer, Cham. https://doi.org/10.1007/978-3-031-19824-3_21
Publisher Name: Springer, Cham
Print ISBN: 978-3-031-19823-6
Online ISBN: 978-3-031-19824-3