Rethinking the Defocus Blur Detection Problem and a Real-Time Deep DBD Model

  • Conference paper

Computer Vision – ECCV 2020 (ECCV 2020)

Part of the book series: Lecture Notes in Computer Science (LNIP, volume 12355)

Abstract

Defocus blur detection (DBD) is a classical low-level vision task. Recent work has focused on designing complex convolutional neural networks (CNNs) that make full use of both low-level features and high-level semantic information. The heavy networks used in these methods lead to low processing speed, making them difficult to apply in real-time settings. In this work, we propose new perspectives on the DBD problem and a convenient approach to building a real-time, cost-effective DBD model. First, we observe that semantic information does not always correlate with, and can sometimes mislead, blur detection. Starting from the essential characteristics of the DBD problem, we propose a data augmentation method that suppresses semantic information and forces the model to learn blur-related features rather than semantic ones. We further propose a novel self-supervised training objective to improve the consistency and stability of model training. Second, by rethinking the relationship between defocus blur detection and salient object detection, we identify two previously ignored but common scenarios and design a hard-example mining strategy around them to strengthen the DBD model. Using the proposed techniques, our model, which uses a slightly modified U-Net as its backbone, improves processing speed by more than 3 times while performing competitively against state-of-the-art methods. An ablation study verifies the effectiveness of each component of the proposed approach.
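
To make the two main ideas concrete, the sketch below illustrates, under assumptions of our own, one plausible form of semantics-suppressing augmentation together with a consistency objective: a sharp image is re-blurred inside a randomly placed region (so the resulting blur mask is independent of scene content), and the model is encouraged to produce matching predictions on the original and augmented images wherever the defocus was left unchanged. This is an illustrative PyTorch-style sketch of the general ideas stated in the abstract, not the authors' implementation; the function names and hyper-parameters (kernel size, region size) are hypothetical.

```python
import random

import torch
import torchvision.transforms.functional as TF


def synthetic_blur_augment(image: torch.Tensor, sigma: float = 3.0):
    """Blur a randomly placed rectangle of a sharp (C, H, W) image and return
    (augmented_image, blur_mask). The region is chosen independently of scene
    content, so the resulting label carries no semantic signal."""
    _, h, w = image.shape
    bh = random.randint(h // 4, h // 2)
    bw = random.randint(w // 4, w // 2)
    top = random.randint(0, h - bh)
    left = random.randint(0, w - bw)

    blurred = TF.gaussian_blur(image, kernel_size=21, sigma=sigma)
    mask = torch.zeros(1, h, w)
    mask[:, top:top + bh, left:left + bw] = 1.0

    augmented = image * (1.0 - mask) + blurred * mask
    return augmented, mask


def consistency_loss(model, image, augmented, mask):
    """Self-supervised consistency term: predictions on the original and the
    re-blurred image should agree outside the synthetically blurred region,
    because the defocus there is unchanged."""
    pred_orig = model(image.unsqueeze(0))   # assumed (1, 1, H, W) blur map
    pred_aug = model(augmented.unsqueeze(0))
    unchanged = (1.0 - mask).unsqueeze(0)   # ignore the re-blurred region
    return torch.mean(((pred_orig - pred_aug) * unchanged) ** 2)
```

In training, a term of this kind would typically be added to the supervised DBD loss computed against the synthetic mask (e.g., a pixel-wise binary cross-entropy between the prediction on the augmented image and the mask).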

This work was partly supported by the National Key Research and Development Program of China (2018AAA0100704), NSFC (61972250, U19B2035), and the SJTU Global Strategic Partnership Fund (2020 SJTU-CORNELL).

Author information

Correspondence to Junchi Yan.

Copyright information

© 2020 Springer Nature Switzerland AG

About this paper

Cite this paper

Zhang, N., Yan, J. (2020). Rethinking the Defocus Blur Detection Problem and a Real-Time Deep DBD Model. In: Vedaldi, A., Bischof, H., Brox, T., Frahm, J.M. (eds.) Computer Vision – ECCV 2020. ECCV 2020. Lecture Notes in Computer Science, vol. 12355. Springer, Cham. https://doi.org/10.1007/978-3-030-58607-2_36

  • DOI: https://doi.org/10.1007/978-3-030-58607-2_36

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-030-58606-5

  • Online ISBN: 978-3-030-58607-2

  • eBook Packages: Computer Science (R0)
