CaSE-NeRF: Camera Settings Editing of Neural Radiance Fields

  • Conference paper
  • Advances in Computer Graphics (CGI 2023)

Abstract

Neural Radiance Fields (NeRF) have shown excellent quality in three-dimensional (3D) reconstruction by synthesizing novel views from multi-view images. However, previous NeRF-based methods do not allow user-controlled editing of camera settings in the scene. While existing works have proposed methods to modify the radiance field, these modifications are limited to the camera settings present in the training set. Hence, we present Camera Settings Editing of Neural Radiance Fields (CaSE-NeRF) to recover a radiance field from a set of views captured with different camera settings. Our approach lets users edit the camera settings of a scene in a controlled manner and synthesize novel views of the edited scene without re-training the network. The key to our method lies in modeling each camera parameter separately and rendering various 3D defocus effects based on thin-lens imaging principles. Following the image processing pipeline of real cameras, we model this pipeline implicitly and learn gains that are continuous in the latent space and independent of the image. The control of color temperature and exposure is plug-and-play and can be easily integrated into NeRF-based frameworks. As a result, our method allows free, manual post-capture control of both the viewpoint and the camera settings of 3D scenes. Extensive experiments on two real-scene datasets demonstrate that our approach reconstructs a normal NeRF with consistent 3D geometry and appearance. Our code and data are available at https://github.com/CPREgroup/CaSE-NeRF.
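
To make the abstract's two mechanisms concrete, here is a minimal sketch of the standard thin-lens circle-of-confusion computation that underlies this kind of 3D defocus rendering. It illustrates the general thin-lens principle only, not the authors' exact formulation; the function and parameter names are hypothetical.

    import torch

    def circle_of_confusion(depth, focus_dist, focal_len, aperture):
        # depth:      tensor of per-sample scene depths along each ray
        # focus_dist: depth of the focal plane
        # focal_len:  lens focal length
        # aperture:   lens aperture diameter (all in the same units)
        # Thin-lens formula for the blur-disc (circle of confusion) diameter:
        #     c = A * |depth - focus_dist| / depth * f / (focus_dist - f)
        return (aperture * torch.abs(depth - focus_dist) / depth
                * focal_len / (focus_dist - focal_len))

Similarly, the plug-and-play exposure and color-temperature control amounts, at a high level, to per-channel gains applied to the rendered linear radiance. The sketch below is again a conceptual assumption (hypothetical names, with the gains taken as already decoded from a continuous latent code), not the paper's implementation.

    def apply_camera_gains(radiance, exposure, wb_gains):
        # radiance: (..., 3) linear RGB rendered by the radiance field
        # exposure: scalar exposure gain, e.g. decoded from a latent code
        # wb_gains: (3,) per-channel color-temperature gains
        return radiance * exposure * wb_gains

    # Example: warm the white balance and brighten the image slightly.
    rgb = torch.rand(8, 8, 3)                      # stand-in rendered view
    edited = apply_camera_gains(rgb, exposure=1.5,
                                wb_gains=torch.tensor([1.2, 1.0, 0.8]))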

Acknowledgements

This work is supported by the Key Research and Development Program of Ningbo (No. 2023Z225).

Author information

Correspondence to Yuqi Li.

Copyright information

© 2024 The Author(s), under exclusive license to Springer Nature Switzerland AG

About this paper

Cite this paper

Sun, C., Li, Y., Li, J., Wang, C., Dai, X. (2024). CaSE-NeRF: Camera Settings Editing of Neural Radiance Fields. In: Sheng, B., Bi, L., Kim, J., Magnenat-Thalmann, N., Thalmann, D. (eds) Advances in Computer Graphics. CGI 2023. Lecture Notes in Computer Science, vol 14496. Springer, Cham. https://doi.org/10.1007/978-3-031-50072-5_8

  • DOI: https://doi.org/10.1007/978-3-031-50072-5_8

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-031-50071-8

  • Online ISBN: 978-3-031-50072-5

  • eBook Packages: Computer Science, Computer Science (R0)
