Abstract
Densely-sampled light fields (LFs) are desirable for numerous applications such as 3D scene reconstruction and virtual reality, but their acquisition is costly. Most current view synthesis approaches require the input LFs to be sampled in a special or regular pattern, which makes practical acquisition difficult. In this article, a new coarse-to-fine deep learning framework is presented to reconstruct densely-sampled LFs with arbitrary angular resolution. Concretely, a coarse reconstruction based on meta-learning is performed on each epipolar plane image (EPI) to achieve an arbitrary upsampling ratio, followed by a refinement with 3D convolutional neural networks (CNNs) on stacked EPIs. Both modules are differentiable, so the network is end-to-end trainable. In addition, these two steps are applied to 3D volumes extracted from the LF data first horizontally and then vertically, forming a pseudo-4DCNN that can effectively synthesize a 4D LF from a group of sparse input views. The key advantage is the ability to efficiently synthesize LFs at arbitrary angular resolution with a single model. The presented approach outperforms various state-of-the-art methods on a variety of challenging scenes.
This work is supported by the National Natural Science Foundation of China under Grant Nos. 62001432 and 61971383, the CETC funding, and the Fundamental Research Funds for the Central Universities under Grant No. YLSZ180226.
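The two-pass pipeline described in the abstract can be sketched in miniature. In the toy code below, the meta-learned EPI upsampler is replaced by plain linear interpolation along the angular axis (the real module predicts per-position filter weights for an arbitrary ratio), and the 3D CNN refinement on stacked EPIs is omitted; the function and variable names are illustrative, not from the paper. What the sketch does show faithfully is the pseudo-4DCNN structure: a horizontal pass that densifies each row of views EPI-by-EPI, followed by a vertical pass over the resulting columns, turning a sparse 4D LF into a dense one at an arbitrary angular ratio.

```python
import numpy as np

def upsample_epi(epi, factor):
    """Stand-in for the meta-learned EPI upsampler: linear interpolation
    along the angular axis (axis 0) at an arbitrary ratio `factor`."""
    n, w = epi.shape
    m = int(round(n * factor))               # target angular resolution
    src = np.linspace(0, n - 1, m)           # fractional source positions
    lo = np.floor(src).astype(int)
    hi = np.minimum(lo + 1, n - 1)
    t = (src - lo)[:, None]                  # interpolation weights
    return (1 - t) * epi[lo] + t * epi[hi]

def coarse_to_fine_pass(volume, factor):
    """volume: (angular, height, width). Each fixed-row slice
    volume[:, y, :] is an EPI; upsample every EPI and restack.
    (The 3D CNN refinement of the stacked EPIs is omitted here.)"""
    return np.stack([upsample_epi(volume[:, y, :], factor)
                     for y in range(volume.shape[1])], axis=1)

def pseudo_4dcnn(lf, factor):
    """lf: (U, V, H, W) sparse light field of grayscale views.
    Horizontal pass over each row of views, then a vertical pass
    over each column of the densified result."""
    U = lf.shape[0]
    horiz = np.stack([coarse_to_fine_pass(lf[u], factor)
                      for u in range(U)])            # (U, V', H, W)
    vert = np.stack([coarse_to_fine_pass(horiz[:, v], factor)
                     for v in range(horiz.shape[1])],
                    axis=1)                          # (U', V', H, W)
    return vert

# Example: densify a 3x3 grid of 8x8 views to 7x7 at ratio 7/3.
dense = pseudo_4dcnn(np.random.rand(3, 3, 8, 8), 7 / 3)
print(dense.shape)  # (7, 7, 8, 8)
```

Note the design point this mirrors: because both passes operate on 2D EPIs stacked into 3D volumes, the 4D reconstruction never requires true 4D convolutions, which is what makes the "pseudo-4DCNN" efficient.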
© 2021 Springer Nature Singapore Pte Ltd.
Li, R., Fang, L., Ye, L., Zhong, W., Zhang, Q. (2021). Light Field Reconstruction with Arbitrary Angular Resolution Using a Deep Coarse-To-Fine Framework. In: Zhai, G., Zhou, J., Yang, H., An, P., Yang, X. (eds) Digital TV and Wireless Multimedia Communication. IFTC 2020. Communications in Computer and Information Science, vol 1390. Springer, Singapore. https://doi.org/10.1007/978-981-16-1194-0_34
Print ISBN: 978-981-16-1193-3
Online ISBN: 978-981-16-1194-0