Abstract
Different scores have been used in the literature to validate saliency models. While reviews of databases and of saliency models exist, reviews of the metrics themselves are harder to come by. In this chapter, we explain the standard measures used to evaluate salient object detection and eye-tracking models. While some metrics operate on eye scanpaths, here we deal with approaches involving 2D maps. The metrics are described and compared, showing that they are largely complementary.
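As a minimal illustration of one widely used 2D-map metric, a sketch of Normalized Scanpath Saliency (NSS), which averages the standardized saliency values at human fixation locations, could look as follows. The function name, array conventions, and test arrays here are illustrative assumptions, not the chapter's notation:

```python
import numpy as np

def nss(saliency_map: np.ndarray, fixation_map: np.ndarray) -> float:
    """Normalized Scanpath Saliency (sketch).

    saliency_map: 2D array of model saliency values.
    fixation_map: 2D binary array, nonzero where humans fixated.
    Returns the mean of the z-scored saliency at fixated pixels.
    """
    # Standardize the saliency map to zero mean, unit variance.
    s = (saliency_map - saliency_map.mean()) / saliency_map.std()
    # Average the standardized values at the fixation locations.
    return float(s[fixation_map.astype(bool)].mean())

# Toy example: a single fixation landing on the map's maximum
# yields a positive NSS (here exactly sqrt(3) for this 2x2 map).
sal = np.array([[0.0, 0.0],
                [0.0, 1.0]])
fix = np.array([[0, 0],
                [0, 1]])
print(round(nss(sal, fix), 4))
```

A chance-level model scores near 0, and higher values indicate better agreement with human fixations; other map-based scores discussed in the chapter (e.g., correlation-based or ROC-based measures) follow the same pattern of comparing a model map against a ground-truth map.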
Copyright information
© 2016 Springer Science+Business Media New York
About this chapter
Cite this chapter
Riche, N. (2016). Metrics for Saliency Model Validation. In: Mancas, M., Ferrera, V., Riche, N., Taylor, J. (eds) From Human Attention to Computational Attention. Springer Series in Cognitive and Neural Systems, vol 10. Springer, New York, NY. https://doi.org/10.1007/978-1-4939-3435-5_12
Publisher Name: Springer, New York, NY
Print ISBN: 978-1-4939-3433-1
Online ISBN: 978-1-4939-3435-5
eBook Packages: Biomedical and Life Sciences