Neural Strands: Learning Hair Geometry and Appearance from Multi-view Images

  • Conference paper
Computer Vision – ECCV 2022 (ECCV 2022)

Part of the book series: Lecture Notes in Computer Science (LNCS, volume 13693)

Abstract

We present Neural Strands, a novel learning framework for modeling accurate hair geometry and appearance from multi-view image inputs. The learned hair model can be rendered in real time from any viewpoint with high-fidelity view-dependent effects. Unlike volumetric counterparts, our model affords intuitive control over shape and style. To enable these properties, we propose a novel hair representation based on a neural scalp texture that encodes the geometry and appearance of individual strands at each texel location. Furthermore, we introduce a novel neural rendering framework based on rasterization of the learned hair strands. Our neural rendering is strand-accurate and anti-aliased, making the rendering view-consistent and photorealistic. By combining this appearance model with a multi-view geometric prior, we enable, for the first time, joint learning of appearance and explicit hair geometry from a multi-view setup. We demonstrate the efficacy of our approach in terms of fidelity and efficiency for various hairstyles.

R. A. Rosu—Work done during an internship at Reality Labs Research, Pittsburgh, PA, USA.
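To make the representation described in the abstract concrete, the following is a minimal PyTorch sketch of a neural scalp texture: a learnable latent code stored at each texel of a scalp UV map, which a small MLP decodes into an explicit strand polyline anchored at the corresponding root position. All names, dimensions, and architectural choices here (NeuralScalpTexture, code_dim, points_per_strand, the two-layer decoder) are illustrative assumptions, not the authors' implementation.

    # Hypothetical sketch of a neural scalp texture driving strand decoding.
    # Dimensions, names, and architecture are illustrative assumptions;
    # they do not reproduce the paper's implementation.
    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class NeuralScalpTexture(nn.Module):
        def __init__(self, resolution=64, code_dim=32, points_per_strand=16):
            super().__init__()
            # Learnable latent code per texel of the scalp UV map.
            self.texture = nn.Parameter(
                0.01 * torch.randn(1, code_dim, resolution, resolution))
            # Small MLP decoding a latent code into per-strand point offsets.
            self.decoder = nn.Sequential(
                nn.Linear(code_dim, 128), nn.ReLU(),
                nn.Linear(128, 128), nn.ReLU(),
                nn.Linear(128, points_per_strand * 3),
            )
            self.points_per_strand = points_per_strand

        def forward(self, uv, roots):
            # uv: (N, 2) scalp UV coords in [-1, 1]; roots: (N, 3) root positions.
            grid = uv.view(1, -1, 1, 2)                                    # (1, N, 1, 2)
            codes = F.grid_sample(self.texture, grid, align_corners=True)  # (1, C, N, 1)
            codes = codes.squeeze(0).squeeze(-1).t()                       # (N, C)
            # Decode point offsets and anchor each strand at its root.
            offsets = self.decoder(codes).view(-1, self.points_per_strand, 3)
            return roots.unsqueeze(1) + offsets                            # (N, P, 3)

    # Usage: decode 100 strands at random scalp locations.
    model = NeuralScalpTexture()
    uv = torch.rand(100, 2) * 2 - 1        # random scalp UV coordinates
    roots = torch.randn(100, 3)            # corresponding 3D root positions
    strands = model(uv, roots)             # (100, 16, 3) polylines

Because the texture codes and the decoder are ordinary differentiable modules, gradients from an image-space loss produced by a differentiable strand rasterizer can flow back into both, which is the property that makes joint learning of explicit geometry and appearance possible in a setup like the one described.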



Author information


Corresponding author

Correspondence to Giljoo Nam.


Electronic supplementary material

Below is the link to the electronic supplementary material.

Supplementary material 2 (pdf 5146 KB)


Copyright information

© 2022 The Author(s), under exclusive license to Springer Nature Switzerland AG

About this paper


Cite this paper

Rosu, R.A., Saito, S., Wang, Z., Wu, C., Behnke, S., Nam, G. (2022). Neural Strands: Learning Hair Geometry and Appearance from Multi-view Images. In: Avidan, S., Brostow, G., Cissé, M., Farinella, G.M., Hassner, T. (eds) Computer Vision – ECCV 2022. ECCV 2022. Lecture Notes in Computer Science, vol 13693. Springer, Cham. https://doi.org/10.1007/978-3-031-19827-4_5


  • DOI: https://doi.org/10.1007/978-3-031-19827-4_5

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-031-19826-7

  • Online ISBN: 978-3-031-19827-4

  • eBook Packages: Computer Science, Computer Science (R0)
