Neural radiance fields-based multi-view endoscopic scene reconstruction for surgical simulation

  • Original Article
  • Published in: International Journal of Computer Assisted Radiology and Surgery

Abstract

Purpose

In virtual surgery, 3D models reconstructed from CT images lack realistic appearance, which can mislead residents during training. It is therefore crucial to reconstruct realistic endoscopic scenes from multi-view images captured by an endoscope.

Methods

We propose Endoscope-NeRF, a network for implicit radiance field reconstruction of endoscopic scenes under a non-fixed light source, and synthesize novel views using volume rendering. Endoscope-NeRF combines multiple MLP networks with a ray transformer to represent the scene as an implicit field function that maps a continuous 5D input (3D position and 2D viewing direction) to color and volume density. The final synthesized image is obtained by aggregating, via volume rendering, all sampling points along each ray cast from the target camera. Our method also accounts for the effect of the distance between the light source and each sampling point on the scene radiance.
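
To make the aggregation step concrete, here is a minimal PyTorch sketch of NeRF-style volume rendering with an added light-distance term. The `radiance_field` callable stands in for the paper's MLP + ray-transformer network; the inverse-square falloff model, the uniform sampling scheme, and all tensor shapes are our assumptions for illustration, not the authors' released code.

```python
import torch

def render_ray(radiance_field, ray_o, ray_d, light_pos,
               n_samples=64, near=0.1, far=2.0):
    """Composite color/density samples along one camera ray."""
    # Uniform depth samples between the near and far planes.
    t = torch.linspace(near, far, n_samples)               # (N,)
    pts = ray_o + t[:, None] * ray_d                       # (N, 3) positions
    dirs = ray_d.expand(n_samples, 3)                      # (N, 3) view directions

    # Query the implicit field: per-point RGB and volume density.
    rgb, sigma = radiance_field(pts, dirs)                 # (N, 3), (N,)

    # Assumed light-distance term: inverse-square falloff from the
    # (non-fixed) light source to each sample point.
    d2 = ((pts - light_pos) ** 2).sum(-1).clamp(min=1e-6)  # (N,)
    rgb = rgb / d2[:, None]

    # Standard NeRF-style compositing: alpha from density, then the
    # accumulated transmittance along the ray.
    delta = (far - near) / (n_samples - 1)
    alpha = 1.0 - torch.exp(-sigma * delta)                # (N,)
    trans = torch.cumprod(
        torch.cat([torch.ones(1), 1.0 - alpha + 1e-10])[:-1], dim=0)
    weights = alpha * trans                                # (N,)
    return (weights[:, None] * rgb).sum(dim=0)             # (3,) final color
```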

Results

Our network is validated on porcine lung, liver, kidney, and heart scenes captured by our device. The results show that novel views of endoscopic scenes synthesized by our method outperform those of existing methods (NeRF and IBRNet) in terms of PSNR, SSIM, and LPIPS.
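
The reported metrics can be reproduced with standard tooling. Below is a hedged sketch of how PSNR, SSIM, and LPIPS might be computed for one synthesized view against its held-out ground truth, using scikit-image and the `lpips` package; the function name, image value range, and AlexNet backbone are our assumptions, not the paper's exact evaluation protocol.

```python
import numpy as np
import torch
import lpips
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

def evaluate_view(pred: np.ndarray, gt: np.ndarray) -> dict:
    """pred, gt: HxWx3 float images in [0, 1]."""
    psnr = peak_signal_noise_ratio(gt, pred, data_range=1.0)
    ssim = structural_similarity(gt, pred, channel_axis=-1, data_range=1.0)

    # LPIPS expects NCHW tensors scaled to [-1, 1].
    loss_fn = lpips.LPIPS(net='alex')
    to_t = lambda x: torch.from_numpy(x).permute(2, 0, 1)[None].float() * 2 - 1
    lp = loss_fn(to_t(pred), to_t(gt)).item()
    return {'PSNR': psnr, 'SSIM': ssim, 'LPIPS': lp}
```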

Conclusion

Our network effectively learns a radiance field function with generalization ability. Fine-tuning the pre-trained model on a new endoscopic scene further optimizes that scene's neural radiance field, providing more realistic, high-resolution rendered images for surgical simulation.
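
The per-scene fine-tuning described here amounts to an ordinary optimization loop. The following illustrative PyTorch version loads generalizable pre-trained weights and minimizes a photometric loss on rays sampled from the new scene's captured views; `finetune`, its `sample_rays_fn` callback, the checkpoint path, and the hyperparameters are hypothetical stand-ins (the optimizer follows ref. 29), and `render_ray` refers to the sketch above.

```python
import torch

def finetune(model, sample_rays_fn, n_steps=10_000, lr=5e-4):
    """model: pre-trained Endoscope-NeRF-style network (weights already
    loaded, e.g. via model.load_state_dict(torch.load('pretrained.pth'))).
    sample_rays_fn: returns one batch of (ray origins, ray directions,
    target RGB colors, light position) from the new scene's views."""
    optimizer = torch.optim.Adam(model.parameters(), lr=lr)  # Adam, per ref. 29
    for _ in range(n_steps):
        rays_o, rays_d, target_rgb, light_pos = sample_rays_fn()
        # Render each ray with the volume-rendering sketch shown earlier.
        pred_rgb = torch.stack([render_ray(model, o, d, light_pos)
                                for o, d in zip(rays_o, rays_d)])
        loss = ((pred_rgb - target_rgb) ** 2).mean()  # photometric MSE
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
    return model
```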

References

  1. Jensen K, Bjerrum F, Hansen HJ, Petersen RH, Pedersen JH, Konge L (2017) Using virtual reality simulation to assess competence in video-assisted thoracoscopic surgery (vats) lobectomy. Surg Endosc 31(6):2520–2528

  2. Chan S, Shum H-Y, Ng K-T (2007) Image-based rendering and synthesis. IEEE Signal Process Mag 24(6):22–33

  3. Mildenhall B, Srinivasan PP, Tancik M, Barron JT, Ramamoorthi R, Ng R (2021) Nerf: representing scenes as neural radiance fields for view synthesis. Commun ACM 65(1):99–106

  4. Zhang K, Riegler G, Snavely N, Koltun V (2020) Nerf++: analyzing and improving neural radiance fields. arXiv:2010.07492

  5. Chen A, Xu Z, Zhao F, Zhang X, Xiang F, Yu J, Su H (2021) Mvsnerf: fast generalizable radiance field reconstruction from multi-view stereo. In: Proceedings of the IEEE/CVF international conference on computer vision, pp 14124–14133

  6. Barron JT, Mildenhall B, Verbin D, Srinivasan PP, Hedman P (2022) Mip-nerf 360: unbounded anti-aliased neural radiance fields. In: Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pp 5470–5479

  7. Müller T, Evans A, Schied C, Keller A (2022) Instant neural graphics primitives with a multiresolution hash encoding. ACM Trans Gr 41(4):1–15

  8. Wang Q, Wang Z, Genova K, Srinivasan PP, Zhou H, Barron JT, Martin-Brualla R, Snavely N, Funkhouser T (2021) Ibrnet: learning multi-view image-based rendering. In: Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pp 4690–4699

  9. Garbin SJ, Kowalski M, Johnson M, Shotton J, Valentin J (2021) Fastnerf: high-fidelity neural rendering at 200fps. In: Proceedings of the IEEE/CVF international conference on computer vision, pp 14346–14355

  10. Yu A, Ye V, Tancik M, Kanazawa A (2021) pixelnerf: neural radiance fields from one or few images. In: Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pp 4578–4587

  11. Xu Q, Xu Z, Philip J, Bi S, Shu Z, Sunkavalli K, Neumann U (2022) Point-nerf: point-based neural radiance fields. In: Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pp 5438–5448

  12. Drebin RA, Carpenter L, Hanrahan P (1988) Volume rendering. ACM Siggraph Comput Gr 22(4):65–74

  13. Levoy M, Hanrahan P (1996) Light field rendering. In: Proceedings of the 23rd annual conference on computer graphics and interactive techniques, pp 31–42

  14. Penner E, Zhang L (2017) Soft 3d reconstruction for view synthesis. ACM Trans Gr 36(6):1–11

  15. Chlubna T, Milet T, Zemčík P (2021) Real-time per-pixel focusing method for light field rendering. Comput Vis Med 7(3):319–333

  16. Yao Y, Luo Z, Li S, Fang T, Quan L (2018) Mvsnet: depth inference for unstructured multi-view stereo. In: Proceedings of the European conference on computer vision (ECCV), pp 767–783

  17. Zhang K, Luan F, Li Z, Snavely N (2022) Iron: inverse rendering by optimizing neural sdfs and materials from photometric images. In: Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pp 5565–5574

  18. Xiangli Y, Xu L, Pan X, Zhao N, Rao A, Theobalt C, Dai B, Lin D (2022) Bungeenerf: progressive neural radiance field for extreme multi-scale scene rendering. In: European conference on computer vision. Springer, pp 106–122

  19. Kajiya JT (1986) The rendering equation. In: Proceedings of the 13th annual conference on computer graphics and interactive techniques, pp 143–150

  20. Moreno I, Viveros-Méndez P (2021) Modeling the irradiation pattern of leds at short distances. Opt Express 29(5):6845–6853

  21. Zhu J, Zhao S, Xu Y, Meng X, Wang L, Yan L-Q (2022) Recent advances in glinty appearance rendering. Comput Vis Med, pp 1–18

  22. Mildenhall B, Srinivasan PP, Ortiz-Cayon R, Kalantari NK, Ramamoorthi R, Ng R, Kar A (2019) Local light field fusion: practical view synthesis with prescriptive sampling guidelines. ACM Trans Gr 38(4):1–14

  23. Schonberger JL, Frahm J-M (2016) Structure-from-motion revisited. In: Proceedings of the IEEE conference on computer vision and pattern recognition, pp 4104–4113

  24. Dabov K, Foi A, Katkovnik V, Egiazarian K (2007) Image denoising by sparse 3-d transform-domain collaborative filtering. IEEE Trans Image Process 16(8):2080–2095

  25. Oktay O, Schlemper J, Folgoc LL, Lee M, Heinrich M, Misawa K, Mori K, McDonagh S, Hammerla NY, Kainz B, Glocker B, Rueckert D (2018) Attention u-net: learning where to look for the pancreas. In: Medical imaging with deep learning

  26. He K, Zhang X, Ren S, Sun J (2016) Deep residual learning for image recognition. In: Proceedings of the IEEE conference on computer vision and pattern recognition, pp 770–778

  27. Hirschberg J, Manning CD (2015) Advances in natural language processing. Science 349(6245):261–266

  28. Vaswani A, Shazeer N, Parmar N, Uszkoreit J, Jones L, Gomez AN, Kaiser Ł, Polosukhin I (2017) Attention is all you need. Adv Neural Inf Process Syst 30

  29. Kingma DP, Ba J (2014) Adam: a method for stochastic optimization. CoRR arXiv:1412.6980

Funding

This work was supported by the National Natural Science Foundation of China (62365017, 62062069, 62062070, 62005235), Yunnan Outstanding Youth Fund (202301AW070001) and the Yunnan Provincial Department of Education Science Research Fund Project (2021Y494).

Author information

Corresponding author

Correspondence to Yonghang Tai.

Ethics declarations

Conflict of interest

The authors declare that they have no conflict of interest.

Ethical approval

This article does not contain any studies with human participants or animals performed by any of the authors.

Informed consent

This article does not contain patient data.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Supplementary Information

Below is the link to the electronic supplementary material.

Supplementary file 1 (mp4 21020 KB)

Rights and permissions

Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.

About this article

Cite this article

Qin, Z., Qian, K., Liang, S. et al. Neural radiance fields-based multi-view endoscopic scene reconstruction for surgical simulation. Int J CARS 19, 951–960 (2024). https://doi.org/10.1007/s11548-024-03080-8

