How Indirect and Direct Interaction Affect the Trustworthiness in Normal and Explainable Human-Robot Interaction

  • Conference paper

Smart Technologies for a Sustainable Future (STE 2024)

Part of the book series: Lecture Notes in Networks and Systems (LNNS, volume 1028)


Abstract

Human-robot interaction (HRI) attracts significant public attention due to the ubiquity of robots in factories, restaurants, and even homes. However, how willing users are to engage with a robot remains an open question because trustworthiness is difficult to establish. Trustworthiness becomes more complicated when robotic design must account for both indirect interaction, in which humans merely observe the robot, and direct interaction, in which humans and robots are close to each other and may or may not interact. Several studies have analyzed human trust in either the indirect or the direct aspect of robotic systems, but the lack of benchmarks comparing the two leaves a significant gap in designing and developing a more subtle robotic system for complex scenarios that involve different stakeholders, such as users and observers (known as indirect users). In this study, we propose a novel guideline for evaluating such robotic systems in human-robot interaction. In particular, we analyze the differences between indirect and direct interaction with respect to human trustworthiness in HRI. In addition, we investigate simulation methodologies, including virtual reality and video, to evaluate a human-robot interaction scenario in both normal and explainable robotic systems, the latter realized by integrating a visual feedback module. Our quantitative and qualitative experiments show no significant difference between indirect and direct interaction in the trustworthiness of HRI. Instead, the explainable feature is recognized as the key factor in improving the trustworthiness of a robotic system.



Author information

Correspondence to Leonardo Espinosa-Leal.


Copyright information

© 2024 The Author(s), under exclusive license to Springer Nature Switzerland AG

About this paper


Cite this paper

Pham, T.A., Espinosa-Leal, L. (2024). How Indirect and Direct Interaction Affect the Trustworthiness in Normal and Explainable Human-Robot Interaction. In: Auer, M.E., Langmann, R., May, D., Roos, K. (eds) Smart Technologies for a Sustainable Future. STE 2024. Lecture Notes in Networks and Systems, vol 1028. Springer, Cham. https://doi.org/10.1007/978-3-031-61905-2_40
