Abstract
To enhance an existing human-automation interaction (HAI) framework associated with a human performance modeling tool, an extensive meta-analysis was performed on the performance impacts of automation transparency. The main goal of this analysis was to gain a better quantitative understanding of how automation transparency affects dependent variables such as trust in automation, situation awareness (SA), response times, and accuracy. The combined evidence across multiple investigations revealed clear quantitative benefits of transparency in HAI, with the combined average effect sizes for response times, accuracy, SA, dependence, and trust ranging between 0.45 and 1.06 in performance-improving directions. Mental workload was not significantly impacted by automation transparency.
These key findings indicate a need to consider automation transparency when evaluating the possible effectiveness of HAI on human-automation team (HAT) performance. The results will feed improvements to the existing HAI modeling framework, including more detailed transparency benefits driven by different moderator variables. Two of these moderator effects are: 1) when minimum transparency is imposed (and compared against a control condition), its benefit to accuracy is significantly less than when the level of transparency is increased (such as by adding confidence data), and 2) accuracy improvements apply mostly to normal task performance, while response time improvements apply more to automation failure response tasks.
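The combined average effect sizes reported above are standardized mean differences in the tradition of Cohen (1988), cited below. As a minimal sketch of the arithmetic involved (the numbers and the weighting scheme here are illustrative assumptions, not values from the paper), Cohen's d for a single study and a sample-size-weighted pooled effect across studies can be computed as:

```python
import math

def cohens_d(mean_exp, mean_ctrl, sd_exp, sd_ctrl, n_exp, n_ctrl):
    """Standardized mean difference using the pooled standard deviation."""
    pooled_sd = math.sqrt(((n_exp - 1) * sd_exp**2 + (n_ctrl - 1) * sd_ctrl**2)
                          / (n_exp + n_ctrl - 2))
    return (mean_exp - mean_ctrl) / pooled_sd

def combined_effect(effects, ns):
    """Sample-size-weighted mean effect size across studies."""
    return sum(d * n for d, n in zip(effects, ns)) / sum(ns)

# Illustrative numbers only (not drawn from the meta-analysis):
# accuracy of 0.90 with transparency vs. 0.80 without, n = 30 per group.
d = cohens_d(0.90, 0.80, 0.10, 0.12, 30, 30)
pooled = combined_effect([0.5, 0.8, 1.1], [30, 45, 25])
```

Actual meta-analyses typically use inverse-variance (fixed- or random-effects) weighting rather than raw sample-size weighting; the sketch above only conveys the general shape of the computation.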
*The research was sponsored by the Army Research Laboratory and was accomplished under Cooperative Agreement Number W911NF-21-2-0280. The views and conclusions contained in this document are those of the authors and should not be interpreted as representing the official policies, either expressed or implied, of the Army Research Office or the U.S. Government. The U.S. Government is authorized to reproduce and distribute reprints for Government purposes notwithstanding any copyright notation herein.
References
Please note that references with an asterisk were included in the meta-analysis. Not all references included in the meta-analysis are explicitly called out in this paper.
* Antifakos, S., Kern, N., Schiele, B., Schwaninger, A.: Towards improving trust in context-aware systems by displaying system confidence. In: Proceedings of the 7th Conference on Human-Computer Interaction with Mobile Devices and Services, Austria, pp. 9–14 (2005)
* Bass, E.J., Baumgart, L.A., Shepley, K.K.: The effect of information analysis automation display content on human judgment performance in noisy environments. J. Cogn. Eng. Decis. Mak. 7, 49–65 (2013)
* Bean, N.H., Rice, S.C., Keller, M.D.: The effect of gestalt psychology on the system-wide trust strategy in automation. Proc. Hum. Factors Ergon. Soc. Annu. Meet. 55(1), 1417–1421 (2011)
* Beller, J., Heesen, M., Vollrath, M.: Improving the driver-automation interaction: an approach using automation uncertainty. Hum. Factors 55(6), 1130–1141 (2013)
Bhaskara, A., Skinner, M., Loft, S.: Agent transparency: a review of current theory and evidence. IEEE Trans. Hum.-Mach. Syst. 50(3), 215–224 (2020)
* Chen, T., Campbell, D., Gonzalez, L.F., Coppin, G.: Increasing autonomy transparency through capability communication in multiple heterogeneous UAV management. In: 2015 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Hamburg, Germany, pp. 2434–2439 (2015)
Chiou, E.K., Lee, J.D.: Trusting automation: designing for responsivity and resilience. Hum. Factors 65(1), 137–165 (2023)
Cohen, J.: Statistical Power Analysis for the Behavioral Sciences, 2nd edn. Erlbaum (1988)
* Cramer, H., et al.: The effects of transparency on trust in and acceptance of a content-based art recommender. User Model. User-Adapt. Interact. 18(5), 455–496 (2008)
* Detjen, H., Salini, M., Kronenberger, J., Geisler, S., Schneegass, S.: Towards transparent behavior of automated vehicles: design and evaluation of HUD concepts to support system predictability through motion intent communication. In: Proceedings of the 23rd International Conference on Mobile Human-Computer Interaction, vol. 19, pp. 1–12. Association for Computing Machinery, New York (2021)
* Dikmen, M., Li, Y., Ho, G., Farrell, P., Cao, S., Burns, C.: The burden of communication: effects of automation support and automation transparency on team performance. In: Proceedings of the 2020 IEEE International Conference on Systems, Man, and Cybernetics (SMC), Canada, pp. 2227–2231 (2020)
* Dorneich, M.C., et al.: Interaction of automation visibility and information quality in flight deck information automation. IEEE Trans. Hum.-Mach. Syst. 47, 915–926 (2017)
* Du, N., et al.: Look who’s talking now: implications of AV’s explanations on driver’s trust, AV preference, anxiety and mental workload. Transp. Res. Part C Emerg. Technol. 104, 428–442 (2019)
* Forster, Y., Hergeth, S., Naujoks, F., Krems, J.F., Keinath, A.: What and how to tell beforehand: the effect of user education on understanding, interaction and satisfaction with driving automation. Transp. Res. Part F: Traffic Psychol. Behav. 68, 316–335 (2020)
* Göritzlehner, R., Borst, C., Ellerbroek, J., Westin, C., van Paassen, M.M., Mulder, M.: Effects of transparency on the acceptance of automated resolution advisories. In: IEEE International Conference on Systems, Man, and Cybernetics (SMC), pp. 2965–2970 (2014)
* Guznov, S., et al.: Robot transparency and team orientation effects on human–robot teaming. Int. J. Hum.-Comput. Interact. 36(7), 650–660 (2020)
* He, D., Kanaan, D., Donmez, B.: In-vehicle displays to support driver anticipation of traffic conflicts in automated vehicles. Accid. Anal. Prev. 149, 105842 (2021)
* Helldin, T.: Transparency for future semi-automated systems: effects of transparency on operator performance, workload and trust (ISBN 978-91-7529-020-1). Master’s thesis. Örebro University, SE-70182 Örebro, Sweden (2014)
* Hussein, A., Elsawah, S., Abbass, H.: The reliability and transparency bases of trust in human-swarm interaction: principles and implications. Ergonomics 63(9), 1116–1132 (2020)
Kaber, D.B., Onal, E., Endsley, M.R.: Design of automation for telerobots and the effect on performance, operator situation awareness, and subjective workload. Hum. Factors Ergon. Manuf. 10(4), 409–430 (2000)
* Kluck, M., Koh, S.C., Walliser, J.C., de Visser, E.J., Shaw, T.H.: Stereotypical of us to stereotype them: the effect of system-wide trust on heterogeneous populations of unmanned autonomous vehicles. Proc. Hum. Factors Ergon. Soc. Annu. Meet. 62(1), 1103–1107 (2018)
* Koo, J., Kwac, J., Ju, W., Steinert, M., Leifer, L., Nass, C.: Why did my car just do that? Explaining semi-autonomous driving actions to improve driver understanding, trust, and performance. Int. J. Interact. Des. Manuf. 9, 269–275 (2015)
* Krake, A., et al.: Effects of training on learning and use of an adaptive cruise control system (Technical Paper). SAE (2020)
* Kunze, A., Summerskill, S.J., Marshall, R., Filtness, A.J.: Automation transparency: Implications of uncertainty communication for human-automation interaction and interfaces. Ergonomics 62(3), 345–360 (2019)
Lai, F., Macmillan, J., Daudelin, D., Kent, D.: The potential of training to increase acceptance and use of computerized decision support systems for medical diagnosis. Hum. Factors 48(1), 95–108 (2006)
Lee, J.D., See, J.: Trust in automation and technology: designing for appropriate reliance. Hum. Factors 46(1), 50–80 (2004)
* Loft, S., et al.: The impact of transparency and decision risk on human-automation teaming outcomes. Hum. Factors (2021)
* Mercado, J., Rupp, M., Chen, J., Barnes, M., Barber, D., Procci, K.: Intelligent agent transparency in human-agent teaming for multi-UxV management. Hum. Factors 58(3), 401–415 (2016)
* Meteier, Q., et al.: The effect of instructions and context-related information about limitations of conditionally automated vehicles on situation awareness. In: Proceedings of the 12th International Conference on Automotive User Interfaces and Interactive Vehicular Applications (2020)
Mifsud, D., Wickens, C., Maulbeck, M., Crane, P., Ortega, F.: The effectiveness of gaze guidance lines in supporting JTAC’s attention allocation. In: Proceedings of the 66th Annual Meeting of the Human Factors and Ergonomics Society. Sage Press (2022)
Mueller, S.T., Hoffman, R.R., Clancey, W., Emrey, A., Klein, G.: Explanation in human-AI systems: a literature meta-review, synopsis of key ideas and publications, and bibliography for explainable AI (Technical Report). Florida Institute for Human and Machine Cognition (2019). https://apps.dtic.mil/sti/citations/AD1073994
* Olatunji, S., Oron-Gilad, T., Markfeld, N., Gutman, D., Sarne-Fleischmann, V., Edan, Y.: Levels of automation and transparency: interaction design considerations in assistive robots for older adults. IEEE Trans. Hum.-Mach. Syst. 51(6), 673–683 (2021)
Onnasch, L., Wickens, C., Li, H., Manzey, D.: Human performance consequences of stages and levels of automation: an integrated meta-analysis. Hum. Factors 56(3), 476–488 (2014)
* Panganiban, A.R., Matthews, G., Long, M.D.: Transparency in autonomous teammates: intention to support as teaming information. J. Cogn. Eng. Decis. Mak. 14(2), 174–190 (2020)
Parasuraman, R., Manzey, D.H.: Complacency and bias in human use of automation: an attentional integration. Hum. Factors 52(3), 381–410 (2010)
Parasuraman, R., Sheridan, T.B., Wickens, C.D.: A model of types and levels of human interaction with automation. IEEE Trans. Syst. Man Cybern. 30(3), 286–297 (2000)
Rajabiyazdi, F., Jamieson, G.A., Guanolusia, D.Q.: An empirical study on automation transparency (i.e., seeing-into) of an automated decision aid system for condition-based maintenance. In: Black, N.L., Neumann, W.P., Noy, I. (eds.) IEA 2021. LNNS, vol. 223, pp. 675–682. Springer, Cham (2022). https://doi.org/10.1007/978-3-030-74614-8_84
* Rayo, M., Kowalczyk, N., Liston, B., Sanders, E., White, S., Patterson, E.: Comparing the effectiveness of alerts and dynamically annotated visualizations (DAVs) in improving clinical decision making. Hum. Factors 57(6), 1002–1014 (2015)
* Roth, G., Schulte, A., Schmitt, F., Brand, Y.: Transparency for a workload-adaptive cognitive agent in a manned-unmanned teaming application. IEEE Trans. Hum.-Mach. Syst. 50(3), 225–233 (2020)
* Rovira, E., Cross, A., Leitch, E., Bonaceto, C.: Display contextual information reduces the costs of imperfect decision automation in rapid retasking of ISR assets. Hum. Factors 56(6), 1036–1049 (2014)
Sebok, A., Wickens, C.D.: Implementing lumberjacks and black swans into model-based tools to support human-automation interaction. Hum. Factors 59(2), 189–202 (2017)
* Selkowitz, A., Lakhmani, S., Chen, J.Y., Boyce, M.: The effects of agent transparency on human interaction with an autonomous robotic agent. Proc. Hum. Factors Ergon. Soc. Annu. Meet. 59(1), 806–810 (2015)
* Selkowitz, A.R., Lakhmani, S.G., Larios, C.N., Chen, J.Y.C.: Agent transparency and the autonomous squad member. Proc. Hum. Factors Ergon. Soc. Annu. Meet. 60(1), 1319–1323 (2016)
* Seong, Y., Bisantz, A.M.: The impact of cognitive feedback on judgment performance and trust with decision aids. Int. J. Ind. Ergon. 38(7), 608–625 (2008)
* Seppelt, B.D., Lee, J.D.: Making adaptive cruise control (ACC) limits visible. Int. J. Hum.-Comput. Stud. 65, 192–205 (2007)
* Shull, E., Gaspar, J., McGehee, D., Schmitt, R.: Using human-machine interfaces to convey feedback in automated driving. J. Cogn. Eng. Decis. Mak. 16(1) (2022)
* Skraaning, G., Jamieson, G.: Human performance benefits of the automation transparency design principle: validation and variation. Hum. Factors 63(3), 379–410 (2021)
* Stowers, K., Kasdaglis, N., Newton, O., Lakhmani, S., Wohleber, R., Chen, J.: Intelligent agent transparency: the design and evaluation of an interface to facilitate human and intelligent agent collaboration. Proc. Hum. Factors Ergon. Soc. Annu. Meet. 60(1), 1706–1710 (2016)
* Stowers, K., Kasdaglis, N., Rupp, M.A., Newton, O.B., Chen, J.Y., Barnes, M.J.: The IMPACT of agent transparency on human performance. IEEE Trans. Hum.-Mach. Syst. 50(3), 245–253 (2020)
* Trapsilawati, F., Wickens, C., Chen, H., Qu, X.: Transparency and automation conflict resolution reliability in air traffic control. In: Tsang, P., Vidulich, M., Flach, J. (eds.) Proceedings of the 2017 International Symposium on Aviation Psychology. Wright State University, Dayton, OH (2017)
* unknown author. Trust in automation as a function of transparency and teaming. Proc. Hum. Factors Ergon. Soc. Annu. Meet. 63(1), 78–82 (2019)
Van de Merwe, K., Mallam, S., Nazir, S.: Agent transparency, situation awareness, mental workload and operator performance: a systematic literature review. Hum. Factors 1–29 (2022)
* Verhagen, R.S., Neerincx, M.A., Tielman, M.L.: The influence of interdependence and a transparent or explainable communication style on human-robot teamwork. Front. Robot. AI (2022)
* Wang, N., Pynadath, D.V., Hill, S.G.: Trust calibration within a human-robot team: comparing automatically generated explanations. In: Proceedings of the 2016 ACM/IEEE International Conference on Human-Robot Interaction, pp. 109–116 (2016)
Warden, A.C., Wickens, C.D., Mifsud, D., Ourada, S., Clegg, B.A., Ortega, F.R.: Visual search in augmented reality: effect of target cue type and location. Proc. Hum. Factors Ergon. Soc. Annu. Meet. 66(1), 373–377 (2022)
* Westin, C., Borst, C., Hilburn, B.: Automation transparency and personalized decision support: air traffic controller interaction with a resolution advisory system. IFAC-PapersOnLine 49, 201–206 (2016)
Wickens, C.D., Clegg, B.A., Vieane, A.Z., Sebok, A.L.: Complacency and automation bias in the use of imperfect automation. Hum. Factors 57(5), 728–739 (2015)
Wickens, C., Helton, W., Hollands, J., Banbury, S.: Engineering Psychology and Human Performance, 5th edn. Taylor & Francis (2021)
* Wohleber, R.W., Stowers, K., Chen, J.Y.C., Barnes, M.: Conducting polyphonic human-robot communication: mastering crescendos and diminuendos in transparency. In: Cassenti, D., Scataglini, S., Rajulu, S., Wright, J. (eds.) Advances in Simulation and Digital Human Modeling. Advances in Intelligent Systems and Computing, vol. 1206, pp. 10–17. Springer, Cham (2020). https://doi.org/10.1007/978-3-030-51064-0_2
* Wright, J.L., Chen, J.Y.C., Barnes, M.J., Hancock, P.A.: Agent reasoning transparency’s effect on operator workload. Proc. Hum. Factors Ergon. Soc. Annu. Meet. 60(1), 249–253 (2016)
* Wright, J.L., Chen, J.Y.C., Barnes, M.J., Hancock, P.A.: The effect of agent reasoning transparency on complacent behavior: an analysis of eye movements and response performance. Proc. Hum. Factors Ergon. Soc. Annu. Meet. 61(1), 1594–1598 (2017)
* Wright, J.L., Lee, J., Schreck, J.A.: Human-autonomy teaming with learning capable agents: performance and workload outcomes. In: Wright, J.L., Barber, D., Scataglini, S., Rajulu, S.L. (eds.) Advances in Simulation and Digital Human Modeling, vol. 264. Springer, Cham (2021). https://doi.org/10.1007/978-3-030-79763-8_1
* Zhang, W., Feltner, D., Kaber, D.B., Shirley, J.: Utility of functional transparency and usability in UAV supervisory control interface design. Int. J. Soc. Robot. 13(7) (2021)
* Zhang, Y., Wang, W., Zhou, X., Wang, Q.: Tactical-level explanation is not enough: effect of explaining AV’s lane-changing decisions on drivers’ decision-making, trust, and emotional experience. Int. J. Hum.-Comput. Interact. (2022)
Copyright information
© 2023 The Author(s), under exclusive license to Springer Nature Switzerland AG
Cite this paper
Sargent, R., Walters, B., Wickens, C. (2023). Meta-analysis Qualifying and Quantifying the Benefits of Automation Transparency to Enhance Models of Human Performance. In: Kurosu, M., Hashizume, A. (eds) Human-Computer Interaction. HCII 2023. Lecture Notes in Computer Science, vol 14011. Springer, Cham. https://doi.org/10.1007/978-3-031-35596-7_16
Print ISBN: 978-3-031-35595-0
Online ISBN: 978-3-031-35596-7