SMigraPH: a perceptually retained method for passive haptics-based migration of MR indoor scenes

Original article · The Visual Computer (2023)

Abstract

To enhance users’ immersion in mixed reality (MR) cross-scene environments, it is imperative to make geometric modifications to arbitrary multi-scale virtual scenes, including adjustments to layout and size, based on the appearance of diverse real-world spaces. Numerous studies have addressed the layout arrangement of purely virtual scenes; however, they often neglect the incongruity between virtual and real environments. Our objective is to mitigate this incongruity in MR, establish a rational layout and size for any virtual scene within an enclosed indoor environment, and leverage tangible real objects to achieve multi-class passive haptic feedback. To this end, we propose SMigraPH, a perceptually retained indoor scene migration method with passive haptics in MR. First, we propose a scene abstraction technique that constructs mathematical representations of both virtual and real scenes, capturing geometric information and topological relationships, and provides a mapping strategy from the virtual to the real domain. Next, we develop an optimization framework, v2rSA, that integrates rationality, relationship-preservation, haptic-reuse, and scale-fitting constraints to iteratively generate final layouts for virtual scenes. Finally, we render the scenarios on optical see-through MR head-mounted displays (HMDs), enabling users to engage in realistic scene exploration and interaction with haptic feedback. Experiments and a user study demonstrate significant improvements in surface registration accuracy, haptic interaction efficiency, and fidelity over the state-of-the-art indoor scene layout arrangement method MakeItHome and the random placement approach RandomIn; the results of our approach closely resemble those achieved through manual placement using the Human method.
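
The abstract describes v2rSA only at a high level: an iterative search over candidate layouts that minimizes a weighted combination of the rationality, relationship-preservation, haptic-reuse, and scale-fitting terms. The sketch below is a hypothetical illustration of such a constraint-weighted search, not the authors' implementation; since the references include simulated annealing [9] and the Metropolis-Hastings algorithm [5], a Metropolis-style annealer is assumed, and every function, parameter, and weight name is invented for illustration.

    import math
    import random

    def anneal_layout(init_layout, cost_terms, weights,
                      t0=1.0, cooling=0.97, iters=2000):
        """Metropolis-style simulated annealing over candidate layouts.

        init_layout: list of (x, y, theta) poses for the virtual objects.
        cost_terms:  dict name -> callable(layout) returning a penalty,
                     stand-ins for the four constraints in the abstract.
        weights:     dict with the same keys, weighting each constraint.
        """
        def total(layout):
            return sum(w * cost_terms[k](layout) for k, w in weights.items())

        def perturb(layout):
            # Propose a neighbour: jitter the pose of one random object.
            cand = [list(pose) for pose in layout]
            i = random.randrange(len(cand))
            cand[i][0] += random.uniform(-0.2, 0.2)  # translate x (m)
            cand[i][1] += random.uniform(-0.2, 0.2)  # translate y (m)
            cand[i][2] += random.uniform(-0.3, 0.3)  # rotate (rad)
            return [tuple(pose) for pose in cand]

        layout, cost = init_layout, total(init_layout)
        best, best_cost, temp = layout, cost, t0
        for _ in range(iters):
            cand = perturb(layout)
            c = total(cand)
            # Always accept improvements; accept worse layouts with
            # Boltzmann probability so the search escapes local minima.
            if c < cost or random.random() < math.exp((cost - c) / max(temp, 1e-9)):
                layout, cost = cand, c
                if c < best_cost:
                    best, best_cost = cand, c
            temp *= cooling  # geometric cooling schedule
        return best

A real system would replace perturb and the cost callables with moves and penalties defined over the paper's scene abstraction: object footprints, topological relations between objects, and the set of real objects available for haptic reuse.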

Data availability

The data in this study will be made available upon reasonable request to the corresponding author.

Notes

  1. For instance, if a participant spends 120 s interacting with 20 out of 24 objects, then their final time consumption would amount to \(120/(20/24) = 144\,\mathrm{s}\).
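
     A minimal helper mirroring this normalization (hypothetical and for illustration only; neither the name nor the code is from the paper):

         def normalized_time(raw_seconds, interacted, total):
             """Normalize completion time by the fraction of objects touched:
             t_norm = t_raw / (interacted / total), computed as
             t_raw * total / interacted so the worked example stays exact."""
             return raw_seconds * total / interacted

         assert normalized_time(120, 20, 24) == 144.0  # the example above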

References

  1. Ali, W., Abdelkarim, S., Zidan, M., et al.: Yolo3d: End-to-end real-time 3d oriented object bounding box detection from lidar point cloud. In Proceedings of the European Conference on Computer Vision (ECCV) Workshops (2018)

  2. Azmandian, M., Hancock, M., Benko, H., et al.: Haptic retargeting: dynamic repurposing of passive haptics for enhanced virtual reality experiences. In Proceedings of the 2016 chi conference on human factors in computing systems, pp. 1968–1979 (2016)

  3. Bermejo, C., Hui, P.: A survey on haptic technologies for mobile augmented reality. ACM Comput. Surv. (CSUR) 54(9), 1–35 (2021)

  4. Butt, M.A., Maragos, P.: Optimum design of chamfer distance transforms. IEEE Trans. Image Process. 7(10), 1477–1484 (1998)

  5. Chib, S., Greenberg, E.: Understanding the metropolis-hastings algorithm. Am. Stat. 49(4), 327–335 (1995)

  6. Dai, A., Chang, A.X., Savva, M., et al.: Scannet: richly-annotated 3d reconstructions of indoor scenes. In Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 5828–5839 (2017)

  7. Dong, K., Gao, S., Xin, S., et al.: Probability driven approach for point cloud registration of indoor scene. Vis. Comput. 1–13 (2022)

  8. Dong, Z.C., Wu, W., Xu, Z., et al.: Tailored reality: perception-aware scene restructuring for adaptive VR navigation. ACM Trans. Graph. (TOG) 40(5), 1–15 (2021)

  9. Du, K.L., Swamy, M.: Simulated annealing. In Search and Optimization by Metaheuristics, pp. 29–36. Springer (2016)

  10. Fisher, M., Ritchie, D., Savva, M., et al.: Example-based synthesis of 3d object arrangements. ACM Trans. Graph. (TOG) 31(6), 1–11 (2012)

  11. Geyer, C.J.: Practical markov chain monte carlo. Stat Sci. 473–483 (1992)

  12. Gwak, J., Choy, C., Savarese, S.: Generative sparse detection networks for 3d single-shot object detection. In Computer Vision–ECCV 2020: 16th European Conference, Glasgow, UK, August 23–28, 2020, Proceedings, Part IV 16, pp. 297–313. Springer (2020)

  13. Insko, B.E.: Passive haptics significantly enhances virtual environments. PhD thesis, The University of North Carolina at Chapel Hill (2001)

  14. Jang, S., Kim, L.H., Tanner, K., et al.: Haptic edge display for mobile tactile interaction. In Proceedings of the 2016 CHI Conference on Human Factors in Computing Systems, pp. 3706–3716 (2016)

  15. Jin, S., Lee, S.H.: Lighting layout optimization for 3d indoor scenes. In Computer Graphics Forum, Wiley Online Library, pp. 733–743 (2019)

  16. Kermani, Z.S., Liao, Z., Tan, P., et al.: Learning 3d scene synthesis from annotated RGB-D images. Comput. Graph. Forum 35(5), 197–206 (2016)

  17. Kipf, T.N., Welling, M.: Semi-supervised classification with graph convolutional networks. arXiv preprint arXiv:1609.02907 (2016)

  18. Lari, Z., Habib, A., Kwak, E.: An adaptive approach for segmentation of 3d laser point cloud. In ISPRS Workshop Laser Scanning, pp. 29–31 (2011)

  19. Lee, W., Park, J.: Augmented foam: a tangible augmented reality for product design. In ISMAR (The 4th IEEE and ACM International Symposium on Mixed and Augmented Reality), pp. 106–109 (2005)

  20. Li, M., Patil, A.G., Xu, K., et al.: Grains: generative recursive autoencoders for indoor scenes. ACM Trans. Graph. (TOG) 38(2), 1–16 (2019)

  21. Liu, J., Li, Y., Goel, M.: A semantic-based approach to digital content placement for immersive environments. Vis. Comput. 1–15 (2022)

  22. Liu, Z., Zhang, Z., Cao, Y., et al.: Group-free 3d object detection via transformers. arXiv preprint arXiv:2104.00678 (2021)

  23. Matthews, B.J., Thomas, B.H., Von Itzstein, S., et al.: Remapped physical-virtual interfaces with bimanual haptic retargeting. In 2019 IEEE Conference on Virtual Reality and 3D User Interfaces (VR), IEEE, pp. 19–27 (2019)

  24. Merrell, P., Schkufza, E., Li, Z., et al.: Interactive furniture layout using interior design guidelines. ACM Trans. Graph. (TOG) 30(4), 1–10 (2011)

  25. Mescheder, L., Oechsle, M., Niemeyer, M., et al.: Occupancy networks: Learning 3d reconstruction in function space. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 4460–4470 (2019)

  26. Pearson, K.: The problem of the random walk. Nature 72(1865), 294–294 (1905)

  27. Qi, C.R., Liu, W., Wu, C., et al.: Frustum pointnets for 3d object detection from RGB-D data. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 918–927 (2018a)

  28. Qi, S., Zhu, Y., Huang, S., et al.: Human-centric indoor scene synthesis using stochastic grammar. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 5899–5908 (2018b)

  29. Rinne, H.: The Weibull Distribution: A Handbook. Chapman and Hall/CRC, Boca Raton (2008)

  30. Rusu, R.B., Cousins, S.: 3d is here: point cloud library (pcl). In 2011 IEEE International Conference on Robotics and Automation, IEEE, pp. 1–4 (2011)

  31. Salazar, S.V., Pacchierotti, C., de Tinguy, X., et al.: Altering the stiffness, friction, and shape perception of tangible objects in virtual reality using wearable haptics. IEEE Trans. Haptics 13(1), 167–174 (2020)

  32. Schnabel, R., Wahl, R., Klein, R.: Efficient ransac for point-cloud shape detection. In Computer Graphics Forum, Wiley Online Library, pp. 214–226 (2007)

  33. Song, Y., Shen, W., Peng, K.: A novel partial point cloud registration method based on graph attention network. Vis. Comput. 39(3), 1109–1120 (2023)

  34. Spelmezan, D., González, R.M., Subramanian, S.: Skinhaptics: ultrasound focused in the hand creates tactile sensations. In 2016 IEEE Haptics Symposium (HAPTICS), IEEE, pp. 98–105 (2016)

  35. Sun, Y., Miao, Y., Chen, J., et al.: Pgcnet: patch graph convolutional network for point cloud segmentation of indoor scenes. Vis. Comput. 36, 2407–2418 (2020)

  36. Talton, J.O., Lou, Y., Lesser, S., et al.: Metropolis procedural modeling. ACM Trans. Graph. (TOG) 30(2), 1–14 (2011)

  37. Tang, K., Chen, Y., Peng, W., et al.: Reppvconv: attentively fusing reparameterized voxel features for efficient 3d point cloud perception. Vis. Comput. 1–12 (2022)

  38. Ungureanu, D., Bogo, F., Galliani, S., et al.: Hololens 2 research mode as a tool for computer vision research. arXiv preprint arXiv:2008.11239 (2020)

  39. Wang, K., Savva, M., Chang, A.X., et al.: Deep convolutional priors for indoor scene synthesis. ACM Trans. Graph. (TOG) 37(4), 1–14 (2018)

  40. Wang, K., Lin, Y.A., Weissmann, B., et al.: Planit: planning and instantiating indoor scenes with relation graph and spatial prior networks. ACM Trans. Graph. (TOG) 38(4), 1–15 (2019)

  41. Wang, L., Zhao, Z., Yang, X., et al.: A constrained path redirection for passive haptics. In 2020 IEEE Conference on Virtual Reality and 3D User Interfaces Abstracts and Workshops (VRW), IEEE, pp. 650–651 (2020)

  42. Xu, K., Stewart, J., Fiume, E.: Constraint-based automatic placement for scene composition. In Graphics Interface, pp. 25–34 (2002)

  43. Yeh, Y.T., Yang, L., Watson, M., et al.: Synthesizing open worlds with constraints using locally annealed reversible jump mcmc. ACM Trans. Graph. (TOG) 31(4), 1–11 (2012)

  44. Yu, L.F., Yeung, S.K., Tang, C.K., et al.: Make it home: automatic optimization of furniture arrangement. ACM Trans. Graph. (TOG) 30(4), Article 86 (2011)

  45. Zenner, A., Krüger, A.: Shifty: a weight-shifting dynamic passive haptic proxy to enhance object perception in virtual reality. IEEE Trans. Visual Comput. Graph. 23(4), 1285–1294 (2017)

  46. Zhang, S.H., Zhang, S.K., Liang, Y., et al.: A survey of 3d indoor scene synthesis. J. Comput. Sci. Technol. 34(3), 594–608 (2019)

  47. Zhou, B., Lapedriza, A., Xiao, J., et al.: Learning deep features for scene recognition using places database. In Advances in Neural Information Processing Systems (2014)

Acknowledgements

We sincerely thank the reviewers for their constructive suggestions and comments. This work is supported by the National Natural Science Foundation of China through Project 61932003, by Beijing Science and Technology Plan Project Z221100007722004, and by National Key R&D Plan 2019YFC1521102.

Author information

Corresponding author

Correspondence to Lili Wang.

Ethics declarations

Conflict of interest

The authors have no relevant financial or non-financial competing interests to declare. The funders had no involvement in the study or in the preparation of the manuscript.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.

About this article

Cite this article

Ma, Q., Wang, L., Ke, W. et al. SMigraPH: a perceptually retained method for passive haptics-based migration of MR indoor scenes. Vis Comput (2023). https://doi.org/10.1007/s00371-023-03220-2

  • Accepted:

  • Published:

  • DOI: https://doi.org/10.1007/s00371-023-03220-2
