RISPNet: A Network for Reversed Image Signal Processing

  • Conference paper

Computer Vision – ECCV 2022 Workshops (ECCV 2022)

Part of the book series: Lecture Notes in Computer Science (LNCS, volume 13802)

Abstract

RAW data is valuable because of properties that benefit downstream tasks such as denoising and HDR reconstruction. In most cameras, however, RAW data is rendered into RGB images by the in-camera image signal processor (ISP) and is usually not saved. Moreover, the non-linear operations applied by the ISP make subsequent image processing on RGB images more difficult. To overcome this problem and recover RAW data, we propose a new reversed ISP network (RISPNet) for efficient RGB-to-RAW image conversion. Our main contribution is a novel encoder-decoder network with a third-order attention module. Because attention facilitates the complex trade-off between restoring spatial details and preserving high-level contextual information, the accuracy of recovering high-dynamic-range RAW data is greatly improved and the whole recovery process becomes more manageable. Benefiting from this attention design, RISPNet shows a remarkable ability to recover RAW images. In the AIM 2022 Reversed ISP Challenge, RISPNet achieved third place on both the Track P20 and Track S7 test sets.
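To make the described setup concrete, the sketch below outlines a minimal encoder-decoder that maps an sRGB image to a 4-channel packed RAW (RGGB) output, in the spirit of the abstract. It is an illustrative assumption only: the paper's third-order attention module is not detailed in this excerpt, so a simple channel-attention block (`ChannelAttention`) stands in as a placeholder, and the network `ReversedISPSketch` is not the authors' RISPNet.

```python
# Minimal sketch of an encoder-decoder for RGB-to-RAW conversion.
# NOT the authors' RISPNet: the third-order attention module is not specified
# in this excerpt, so a squeeze-and-excitation style channel attention block
# is used here as a placeholder.
import torch
import torch.nn as nn


class ChannelAttention(nn.Module):
    """Placeholder attention block (assumption; not the paper's third-order module)."""
    def __init__(self, channels: int, reduction: int = 4):
        super().__init__()
        self.gate = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(channels, channels // reduction, 1),
            nn.GELU(),
            nn.Conv2d(channels // reduction, channels, 1),
            nn.Sigmoid(),
        )

    def forward(self, x):
        return x * self.gate(x)  # re-weight channels


class ConvBlock(nn.Module):
    def __init__(self, in_ch: int, out_ch: int):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(in_ch, out_ch, 3, padding=1),
            nn.GELU(),
            nn.Conv2d(out_ch, out_ch, 3, padding=1),
            nn.GELU(),
            ChannelAttention(out_ch),
        )

    def forward(self, x):
        return self.body(x)


class ReversedISPSketch(nn.Module):
    """Tiny U-Net-style network: 3-channel sRGB in, 4-channel packed RAW (RGGB)
    at half resolution out."""
    def __init__(self, base: int = 32):
        super().__init__()
        self.enc1 = ConvBlock(3, base)
        self.enc2 = ConvBlock(base, base * 2)
        self.down = nn.MaxPool2d(2)
        self.up = nn.ConvTranspose2d(base * 2, base, 2, stride=2)
        self.dec1 = ConvBlock(base * 2, base)
        # Output 4 channels (RGGB) at half the input resolution, a common packed-RAW layout.
        self.head = nn.Conv2d(base, 4, 3, stride=2, padding=1)

    def forward(self, rgb):
        e1 = self.enc1(rgb)                                   # H x W
        e2 = self.enc2(self.down(e1))                         # H/2 x W/2
        d1 = self.dec1(torch.cat([self.up(e2), e1], dim=1))   # back to H x W
        return self.head(d1)                                  # 4 x H/2 x W/2


if __name__ == "__main__":
    net = ReversedISPSketch()
    srgb = torch.rand(1, 3, 128, 128)   # dummy sRGB input in [0, 1]
    raw = net(srgb)
    print(raw.shape)                    # torch.Size([1, 4, 64, 64])
```

Running the script prints `torch.Size([1, 4, 64, 64])`, i.e. a half-resolution RGGB mosaic for a 128x128 sRGB input; the actual RISPNet is substantially deeper and uses the third-order attention described in the paper.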

X. Dong and Y. Zhu contributed equally.

Author information

Correspondence to Chenghua Li.

Electronic supplementary material

Below is the link to the electronic supplementary material.

Supplementary material 1 (pdf 283 KB)

Copyright information

© 2023 The Author(s), under exclusive license to Springer Nature Switzerland AG

About this paper

Cite this paper

Dong, X., Zhu, Y., Li, C., Wang, P., Cheng, J. (2023). RISPNet: A Network for Reversed Image Signal Processing. In: Karlinsky, L., Michaeli, T., Nishino, K. (eds) Computer Vision – ECCV 2022 Workshops. ECCV 2022. Lecture Notes in Computer Science, vol 13802. Springer, Cham. https://doi.org/10.1007/978-3-031-25063-7_27

  • DOI: https://doi.org/10.1007/978-3-031-25063-7_27

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-031-25062-0

  • Online ISBN: 978-3-031-25063-7

  • eBook Packages: Computer Science; Computer Science (R0)
