Attention-driven residual-dense network for no-reference image quality assessment

  • Original Paper
  • Signal, Image and Video Processing

Abstract

With the rapid development of deep learning, convolutional neural networks have been applied to no-reference image quality assessment (NR-IQA). Most existing methods, however, focus on designing complex networks, which not only increases the number of parameters and makes training more difficult but also fails to make full use of the rich global and local information in images. To address this problem, this paper proposes an effective NR-IQA method, an attention-driven residual dense network, which can evaluate image quality quickly and accurately. Specifically, convolution kernels of three different sizes first extract features from the image in parallel, so that feature information is expressed at multiple scales. Several cascaded residual dense channel attention blocks then extract higher-level feature information, capturing the most effective features. In addition, a novel channel attention mechanism is embedded in both the multi-scale feature extraction block and the residual dense block to emphasize informative channels by learning the correlations between channels. A series of experiments on public synthetic databases shows that the proposed method outperforms state-of-the-art NR-IQA methods.
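
To make the pipeline in the abstract concrete, the following is a minimal PyTorch sketch of the three components it names: parallel multi-scale convolutions, a channel attention mechanism, and cascaded residual dense blocks with attention. The squeeze-and-excitation-style attention, the layer widths, growth rate, reduction ratio, and block count are all illustrative assumptions rather than the paper's actual configuration; in particular, the paper's novel attention design is not reproduced here.

    import torch
    import torch.nn as nn

    class ChannelAttention(nn.Module):
        """SE-style channel attention (assumed form; the paper's
        attention mechanism may differ)."""
        def __init__(self, channels: int, reduction: int = 8):
            super().__init__()
            self.pool = nn.AdaptiveAvgPool2d(1)
            self.fc = nn.Sequential(
                nn.Conv2d(channels, channels // reduction, kernel_size=1),
                nn.ReLU(inplace=True),
                nn.Conv2d(channels // reduction, channels, kernel_size=1),
                nn.Sigmoid(),
            )

        def forward(self, x):
            # Reweight each channel by a learned global descriptor.
            return x * self.fc(self.pool(x))

    class MultiScaleBlock(nn.Module):
        """Parallel 3x3 / 5x5 / 7x7 convolutions, concatenated and
        fused, followed by channel attention."""
        def __init__(self, in_ch: int = 3, out_ch: int = 64):
            super().__init__()
            self.branches = nn.ModuleList(
                nn.Conv2d(in_ch, out_ch, k, padding=k // 2) for k in (3, 5, 7)
            )
            self.fuse = nn.Conv2d(3 * out_ch, out_ch, kernel_size=1)
            self.att = ChannelAttention(out_ch)

        def forward(self, x):
            feats = torch.cat([b(x) for b in self.branches], dim=1)
            return self.att(self.fuse(feats))

    class ResidualDenseCABlock(nn.Module):
        """Residual dense block with channel attention: densely
        connected convs, 1x1 fusion, attention, residual skip."""
        def __init__(self, channels: int = 64, growth: int = 32, layers: int = 4):
            super().__init__()
            self.convs = nn.ModuleList()
            ch = channels
            for _ in range(layers):
                self.convs.append(nn.Sequential(
                    nn.Conv2d(ch, growth, 3, padding=1), nn.ReLU(inplace=True)))
                ch += growth  # each layer sees all earlier feature maps
            self.fuse = nn.Conv2d(ch, channels, kernel_size=1)
            self.att = ChannelAttention(channels)

        def forward(self, x):
            feats = [x]
            for conv in self.convs:
                feats.append(conv(torch.cat(feats, dim=1)))
            return x + self.att(self.fuse(torch.cat(feats, dim=1)))

    # Usage sketch: multi-scale front end, a few cascaded blocks,
    # then a pooled regression head producing one quality score.
    net = nn.Sequential(
        MultiScaleBlock(3, 64),
        *[ResidualDenseCABlock(64) for _ in range(3)],
        nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(64, 1),
    )
    score = net(torch.randn(2, 3, 224, 224))  # -> shape (2, 1)

Stacking a few such blocks before a pooled regression head mirrors the overall structure the abstract describes; the real network's depth and regression head may differ.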

Data Availability

The data sets supporting the results of this article are included within the article.

Funding

This work was supported by the National Natural Science Foundation of China under Grant 61976027, the Foundation of the Educational Department of Liaoning Province under Grant JYTZD2023175, and the Liaoning Revitalization Talents Program under Grant XLYC2008002.

Author information

Contributions

ZY wrote the main manuscript text and was responsible for the experimental design and the main program. WCZ was responsible for compiling and analyzing the experimental data. LX and SYN assisted in writing the related research part of the manuscript and in writing the data preprocessing program. All authors reviewed the manuscript.

Corresponding author

Correspondence to Changzhong Wang.

Ethics declarations

Conflict of interest

The authors declared no potential conflicts of interest with respect to the research, authorship, and publication of this article.

Ethical approval

Not applicable.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.

About this article

Cite this article

Zhang, Y., Wang, C., Lv, X. et al. Attention-driven residual-dense network for no-reference image quality assessment. SIViP 18 (Suppl 1), 537–551 (2024). https://doi.org/10.1007/s11760-024-03172-7
