Metrics for Saliency Model Validation

Chapter in: From Human Attention to Computational Attention

Part of the book series: Springer Series in Cognitive and Neural Systems (SSCNS, volume 10)

Abstract

Different scores have been used in the literature to validate saliency models. While reviews of databases and of the saliency models themselves exist, reviews of validation metrics are harder to come by. This chapter explains the standard measures used to evaluate salient object detection models and eye-tracking-based saliency models. Although some metrics operate on eye scanpaths, here we deal with approaches that compare 2D maps. The metrics are described and compared, showing that they are largely complementary.
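As a rough illustration of what "approaches involving 2D maps" means in practice, the sketch below (not taken from the chapter itself) implements two metrics that are standard in this literature: the Normalized Scanpath Saliency (NSS), which averages standardized saliency values at fixated pixels, and the linear correlation coefficient (CC) between a model saliency map and a human fixation density map. The function names and array conventions are illustrative assumptions, not the chapter's own code.

```python
import numpy as np

def nss(saliency_map: np.ndarray, fixation_map: np.ndarray) -> float:
    """Normalized Scanpath Saliency: standardize the saliency map
    (zero mean, unit variance), then average its values at the
    fixated locations given by a binary fixation map."""
    s = (saliency_map - saliency_map.mean()) / saliency_map.std()
    return float(s[fixation_map.astype(bool)].mean())

def cc(saliency_map: np.ndarray, density_map: np.ndarray) -> float:
    """Pearson linear correlation coefficient between a model
    saliency map and a continuous fixation density map."""
    a = saliency_map - saliency_map.mean()
    b = density_map - density_map.mean()
    return float((a * b).sum() / np.sqrt((a ** 2).sum() * (b ** 2).sum()))
```

Both metrics compare a model map against ground truth derived from eye-tracking data, but they weight it differently: NSS only looks at discrete fixation points, while CC treats the (usually Gaussian-blurred) fixation density as a full 2D signal, which is one reason such metrics end up complementary.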



Author information

Correspondence to Nicolas Riche.

Copyright information

© 2016 Springer Science+Business Media New York

About this chapter

Cite this chapter

Riche, N. (2016). Metrics for Saliency Model Validation. In: Mancas, M., Ferrera, V., Riche, N., Taylor, J. (eds) From Human Attention to Computational Attention. Springer Series in Cognitive and Neural Systems, vol 10. Springer, New York, NY. https://doi.org/10.1007/978-1-4939-3435-5_12
