Light Field Reconstruction with Arbitrary Angular Resolution Using a Deep Coarse-To-Fine Framework

  • Conference paper
Digital TV and Wireless Multimedia Communication (IFTC 2020)

Part of the book series: Communications in Computer and Information Science (CCIS, volume 1390)

Abstract

Densely-sampled light fields (LFs) are valuable for numerous applications such as 3D scene reconstruction and virtual reality, but their acquisition is costly. Most current view synthesis approaches require the input LFs to be sampled in a specific or regular pattern, which makes acquisition difficult in practice. In this article, a new coarse-to-fine deep learning framework is presented to reconstruct densely-sampled LFs with arbitrary angular resolution. Concretely, a coarse reconstruction based on meta-learning is performed on each epipolar plane image (EPI) to achieve an arbitrary upsampling ratio, followed by refinement with 3D convolutional neural networks (CNNs) on stacked EPIs. Both modules are differentiable, so the network is end-to-end trainable. In addition, these two steps are applied to 3D volumes extracted from the LF data, first horizontally and then vertically, forming a pseudo-4DCNN that can effectively synthesize 4D LFs from a sparse set of input views. The key advantage is the ability to synthesize LFs with arbitrary angular resolution using a single model. The presented approach compares favorably against state-of-the-art methods on various challenging scenes.
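As a rough illustration of the horizontal-then-vertical decomposition described in the abstract, the NumPy sketch below performs only the coarse angular resampling step on a 4D LF tensor of shape (U, V, H, W). Plain linear interpolation stands in for the paper's meta-learned EPI upsampler (it likewise supports arbitrary, non-integer ratios), and the 3D-CNN refinement stage is omitted entirely; the names `lerp_along_axis` and `pseudo_4d_upsample` are hypothetical, not from the paper.

```python
import numpy as np

def lerp_along_axis(vol, n_out, axis):
    """Linearly resample `vol` to `n_out` samples along `axis`.

    Stand-in for the meta-learned coarse EPI upsampler: the target
    positions are continuous, so any upsampling ratio is supported.
    """
    n_in = vol.shape[axis]
    pos = np.linspace(0.0, n_in - 1.0, n_out)      # target sample positions
    i0 = np.floor(pos).astype(int)                 # left neighbour index
    i1 = np.minimum(i0 + 1, n_in - 1)              # right neighbour index
    # reshape the blend weights so they broadcast along `axis`
    frac = (pos - i0).reshape([-1 if d == axis else 1 for d in range(vol.ndim)])
    return (1.0 - frac) * np.take(vol, i0, axis=axis) \
         + frac * np.take(vol, i1, axis=axis)

def pseudo_4d_upsample(lf, out_u, out_v):
    """Coarse angular upsampling of a 4D LF (U, V, H, W):
    horizontal pass over the u axis first, then vertical over v."""
    lf = lerp_along_axis(lf, out_u, axis=0)  # horizontal EPI volumes
    lf = lerp_along_axis(lf, out_v, axis=1)  # vertical EPI volumes
    return lf

# tiny demo: 3x3 input views of an 8x8 image -> 7x5 synthesized views
sparse = np.random.rand(3, 3, 8, 8)
dense = pseudo_4d_upsample(sparse, 7, 5)
print(dense.shape)  # → (7, 5, 8, 8)
```

Note that the corner input views survive the resampling unchanged, which mirrors the reconstruction setting where the sparse input views are a subset of the dense output.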

This work is supported by the National Natural Science Foundation of China under Grant Nos. 62001432 and 61971383, the CETC funding, and the Fundamental Research Funds for the Central Universities under Grant No. YLSZ180226.



Author information

Correspondence to Li Fang.


Copyright information

© 2021 Springer Nature Singapore Pte Ltd.

About this paper


Cite this paper

Li, R., Fang, L., Ye, L., Zhong, W., Zhang, Q. (2021). Light Field Reconstruction with Arbitrary Angular Resolution Using a Deep Coarse-To-Fine Framework. In: Zhai, G., Zhou, J., Yang, H., An, P., Yang, X. (eds) Digital TV and Wireless Multimedia Communication. IFTC 2020. Communications in Computer and Information Science, vol 1390. Springer, Singapore. https://doi.org/10.1007/978-981-16-1194-0_34

  • DOI: https://doi.org/10.1007/978-981-16-1194-0_34

  • Publisher Name: Springer, Singapore

  • Print ISBN: 978-981-16-1193-3

  • Online ISBN: 978-981-16-1194-0

  • eBook Packages: Computer Science, Computer Science (R0)
