Binocular Structured Light Stereo Matching Approach for Dense Facial Disparity Map

  • Conference paper
AI 2011: Advances in Artificial Intelligence (AI 2011)

Part of the book series: Lecture Notes in Computer Science (LNAI, volume 7106)

Abstract

Binocular stereo vision is of particular interest for face recognition, where accurate stereo matching is the key to obtaining the dense disparity map used to exploit the 3D shape of an object. This paper proposes a binocular structured light stereo matching approach to address the challenge of matching objects with large disparity and low texture, such as facial images. A global coordinate system relates the binocular cameras and the projector, which casts a structured light pattern to add texture to the facial scene. The binocular epipolar constraint and a semi-global stereo matching algorithm are then applied. Experiments show that accuracy improves over purely binocular vision when computing dense facial disparity maps.
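The pipeline the abstract outlines — rectified images so the epipolar constraint reduces matching to a single scanline, followed by semi-global cost aggregation (Hirschmüller's SGM) — can be sketched in miniature. The following is an illustrative single-path, single-scanline sketch, not the authors' implementation; the intensity values and penalties `p1`/`p2` are made up for demonstration, and a full SGM sums aggregated costs over 8 or 16 path directions.

```python
# Minimal semi-global matching (SGM) sketch on one rectified scanline.
# Rectification means correspondences lie on the same image row (the
# epipolar constraint), so matching reduces to a 1-D disparity search.

def sgm_scanline(left, right, max_disp, p1=1, p2=4):
    """Aggregate matching costs along one path (left-to-right) and return
    the winner-take-all disparity for each pixel of `left`."""
    w = len(left)
    # Pixel-wise matching cost: absolute intensity difference
    # (out-of-range disparities get a large sentinel cost).
    cost = [[abs(left[x] - right[x - d]) if x - d >= 0 else 255
             for d in range(max_disp)] for x in range(w)]
    # Cost aggregation with smoothness penalties: p1 for a disparity
    # change of +/-1 between neighbours, p2 for larger jumps.
    aggr = [cost[0][:]]
    for x in range(1, w):
        prev = aggr[-1]
        best_prev = min(prev)
        row = []
        for d in range(max_disp):
            same = prev[d]
            step = min(prev[d - 1] if d > 0 else float('inf'),
                       prev[d + 1] if d < max_disp - 1 else float('inf')) + p1
            jump = best_prev + p2
            # Subtracting best_prev keeps accumulated costs bounded.
            row.append(cost[x][d] + min(same, step, jump) - best_prev)
        aggr.append(row)
    # Winner-take-all disparity selection per pixel.
    return [min(range(max_disp), key=lambda d: a[d]) for a in aggr]


# Toy scanline pair: `left` is `right` shifted by a disparity of 2,
# with a bright "structured light" stripe providing the texture.
right = [10, 10, 50, 50, 10, 10, 10, 10]
left = [10, 10, 10, 10, 50, 50, 10, 10]
disp = sgm_scanline(left, right, max_disp=4)
```

Note how the bright stripe plays the role of the projected pattern: on the flat (low-texture) regions the data cost is ambiguous and the smoothness penalties dominate, while around the stripe the disparity locks onto the true shift of 2 — the same reason the paper adds projected texture before matching faces.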




Copyright information

© 2011 Springer-Verlag Berlin Heidelberg

About this paper

Cite this paper

Ma, S., Shen, Y., Qian, J., Chen, H., Hao, Z., Yang, L. (2011). Binocular Structured Light Stereo Matching Approach for Dense Facial Disparity Map. In: Wang, D., Reynolds, M. (eds) AI 2011: Advances in Artificial Intelligence. AI 2011. Lecture Notes in Computer Science(), vol 7106. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-642-25832-9_56

  • DOI: https://doi.org/10.1007/978-3-642-25832-9_56

  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-642-25831-2

  • Online ISBN: 978-3-642-25832-9

  • eBook Packages: Computer Science, Computer Science (R0)
