Wide Activation Fourier Channel Attention Network for Super-Resolution

  • Conference paper
Digital Multimedia Communications (IFTC 2023)

Part of the book series: Communications in Computer and Information Science (CCIS, volume 2066)

Abstract

Attention mechanisms, especially channel attention, have been widely used across computer vision tasks. More recently, researchers have begun to apply channel attention to single image super-resolution (SISR). However, these mechanisms, borrowed from other computer vision tasks, may not be well suited to SISR, which primarily focuses on recovering high-frequency information; as a result, existing approaches may not adequately reconstruct high-frequency details. To address this limitation, we propose a novel channel attention block, the Fourier channel attention (FCA) block. It leverages the Fourier transform to extract high-frequency information and then compresses the spatial information, thereby emphasizing the high-frequency components within the image. To further improve performance, we propose a wide activation Fourier channel attention super-resolution network (WFCASR), which enhances the residual block by incorporating the wide activation mechanism and FCA. By integrating the FCA block and the wide activation mechanism into our network, high-frequency information can be effectively reconstructed, and the accuracy of SISR is thereby improved. Experimental results demonstrate that our FCA channel attention mechanism achieves better performance.
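
The following PyTorch sketch illustrates the two ideas described in the abstract: a Fourier channel attention (FCA) block that derives per-channel weights from the magnitude of the 2-D Fourier spectrum, and a wide-activation residual block that expands channels before the activation and then applies FCA. This is a minimal illustration under our own assumptions; the layer names, reduction ratio, and expansion factor are illustrative and are not taken from the paper.

import torch
import torch.nn as nn


class FourierChannelAttention(nn.Module):
    """Re-weights channels using statistics of the Fourier magnitude spectrum,
    so channels carrying more high-frequency energy are emphasized.
    (Sketch only; the exact descriptor used in the paper may differ.)"""

    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # 2-D FFT over the spatial dimensions; the magnitude spectrum reflects
        # how much frequency content each channel carries.
        mag = torch.abs(torch.fft.fft2(x, norm="ortho"))
        # Compress spatial information into one descriptor per channel.
        desc = mag.mean(dim=(-2, -1))                        # (B, C)
        weight = self.fc(desc).unsqueeze(-1).unsqueeze(-1)   # (B, C, 1, 1)
        return x * weight


class WideActivationFCABlock(nn.Module):
    """Residual block with wide activation: channels are expanded before the
    ReLU and reduced afterwards, followed by FCA re-weighting."""

    def __init__(self, channels: int, expansion: int = 4):
        super().__init__()
        wide = channels * expansion
        self.body = nn.Sequential(
            nn.Conv2d(channels, wide, 3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(wide, channels, 3, padding=1),
        )
        self.fca = FourierChannelAttention(channels)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return x + self.fca(self.body(x))

For example, WideActivationFCABlock(64) maps a (1, 64, 32, 32) feature tensor to a tensor of the same shape, with channels re-weighted according to their frequency-domain energy before the residual addition.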



Author information

Corresponding author

Correspondence to Liang Chen.

Copyright information

© 2024 The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd.

About this paper

Cite this paper

Wu, X., Tan, M., Chen, L., Wu, Y. (2024). Wide Activation Fourier Channel Attention Network for Super-Resolution. In: Zhai, G., Zhou, J., Ye, L., Yang, H., An, P., Yang, X. (eds) Digital Multimedia Communications. IFTC 2023. Communications in Computer and Information Science, vol 2066. Springer, Singapore. https://doi.org/10.1007/978-981-97-3623-2_5

  • DOI: https://doi.org/10.1007/978-981-97-3623-2_5

  • Publisher Name: Springer, Singapore

  • Print ISBN: 978-981-97-3622-5

  • Online ISBN: 978-981-97-3623-2

  • eBook Packages: Computer Science, Computer Science (R0)
