Abstract
RAW data is valuable because its properties benefit downstream tasks such as denoising and HDR reconstruction. In most cameras, however, RAW data is rendered to RGB images by the in-camera image signal processor (ISP) and is typically not saved. Moreover, the non-linear operations applied by the ISP make it harder to perform other image processing tasks on the resulting RGB images. To overcome this problem and recover the RAW data, we propose a new reversed ISP network (RISPNet) for efficient RGB-to-RAW conversion. Our main contribution is a novel encoder-decoder network with a third-order attention module. Because attention facilitates the complex trade-off between restoring spatial details and preserving high-level contextual information, the accuracy of recovering high-dynamic-range RAW data is greatly improved and the whole recovery process becomes more tractable. Benefiting from the design of the attention module, RISPNet shows a remarkable ability to recover RAW images: in the AIM 2022 Reversed ISP Challenge, it achieved third place on both the Track P20 and Track S7 test sets.
X. Dong and Y. Zhu—Equal Contribution.
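For context on the data format involved: reversed-ISP methods such as the one described above map an RGB image back to a Bayer RAW mosaic, which is commonly handled as a four-channel, half-resolution "packed" image (one channel per RGGB site). The paper itself does not specify its I/O layout here, so the helper names below are hypothetical; this is only a minimal numpy sketch of the standard packing and its inverse, assuming an RGGB pattern:

```python
import numpy as np

def pack_bayer(raw):
    """Pack an (H, W) RGGB Bayer mosaic into an (H/2, W/2, 4) tensor."""
    return np.stack(
        [raw[0::2, 0::2],   # R  sites
         raw[0::2, 1::2],   # G1 sites
         raw[1::2, 0::2],   # G2 sites
         raw[1::2, 1::2]],  # B  sites
        axis=-1,
    )

def unpack_bayer(packed):
    """Inverse of pack_bayer: scatter 4 channels back onto the mosaic."""
    h, w, _ = packed.shape
    raw = np.empty((2 * h, 2 * w), dtype=packed.dtype)
    raw[0::2, 0::2] = packed[..., 0]
    raw[0::2, 1::2] = packed[..., 1]
    raw[1::2, 0::2] = packed[..., 2]
    raw[1::2, 1::2] = packed[..., 3]
    return raw

# Round-trip check on a toy 4x4 mosaic.
raw = np.arange(16.0).reshape(4, 4)
assert np.array_equal(unpack_bayer(pack_bayer(raw)), raw)
```

Packing in this way aligns the RAW target with the RGB input spatially and lets a standard encoder-decoder predict all four Bayer channels at once; a network like RISPNet would consume the RGB image and regress this packed representation.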
Copyright information
© 2023 The Author(s), under exclusive license to Springer Nature Switzerland AG
About this paper
Cite this paper
Dong, X., Zhu, Y., Li, C., Wang, P., Cheng, J. (2023). RISPNet: A Network for Reversed Image Signal Processing. In: Karlinsky, L., Michaeli, T., Nishino, K. (eds) Computer Vision – ECCV 2022 Workshops. ECCV 2022. Lecture Notes in Computer Science, vol 13802. Springer, Cham. https://doi.org/10.1007/978-3-031-25063-7_27
Publisher Name: Springer, Cham
Print ISBN: 978-3-031-25062-0
Online ISBN: 978-3-031-25063-7
eBook Packages: Computer Science, Computer Science (R0)