Abstract
The purpose of this small-scale study is to compare the BERT language model's attention flow, which quantifies the marginal contribution of each word and of aggregated word groups towards the fill-in-blank prediction, with human evaluators' opinions on the same task. Based on the limited number of experiments performed, we report the following findings: (1) Compared with human evaluators, the BERT base model pays less attention to verbs and more attention to nouns and other word types. This appears to agree with the natural partition hypothesis, which holds that nouns predominate over verbs in children's initial vocabularies because the meanings of nouns are easier to grasp; the comparison rests on the premise that the BERT base model behaves like a human child. (2) As sentences become longer and more complex, human evaluators can still single out the major logical relation and are less distracted by other components of the structure. The attention flow scores calculated with the BERT base model, on the other hand, become amortized over multiple words and word groups as sentences grow longer and more complex. (3) These amortized attention flow scores provide a balanced, global view of the different types of discourse relations embedded in long and complex sentences. In future work, more examples will be prepared for detailed and rigorous verification of these findings.
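For readers unfamiliar with the attention flow formulation mentioned above, the short Python sketch below illustrates how per-token contributions towards a [MASK] prediction can be estimated from BERT's attention matrices. It is a minimal sketch only: it assumes the Hugging Face transformers library with the bert-base-uncased checkpoint, uses the attention-rollout approximation of attention flow rather than the full max-flow computation, and the example sentence and residual weighting are illustrative assumptions; it is not the authors' pipeline.

# A minimal, hypothetical sketch (not the authors' code) of estimating per-token
# contributions towards a fill-in-blank ([MASK]) prediction with the
# attention-rollout approximation of attention flow. The checkpoint, example
# sentence, and 0.5/0.5 residual weighting are assumptions for illustration.
import torch
from transformers import BertTokenizer, BertForMaskedLM

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = BertForMaskedLM.from_pretrained("bert-base-uncased", output_attentions=True)
model.eval()

sentence = "The cat sat on the [MASK]."
inputs = tokenizer(sentence, return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# One (seq_len, seq_len) matrix per layer, averaged over attention heads.
attentions = [layer.squeeze(0).mean(dim=0) for layer in outputs.attentions]

# Attention rollout: mix in the identity to account for residual connections,
# renormalize the rows, and multiply the layer matrices together so attention
# is propagated from the output layer back to the input tokens.
seq_len = attentions[0].size(0)
rollout = torch.eye(seq_len)
for att in attentions:
    att = 0.5 * att + 0.5 * torch.eye(seq_len)
    att = att / att.sum(dim=-1, keepdim=True)
    rollout = att @ rollout

# Read off how much each input token contributes to the [MASK] position.
mask_index = (inputs["input_ids"][0] == tokenizer.mask_token_id).nonzero().item()
tokens = tokenizer.convert_ids_to_tokens(inputs["input_ids"][0])
for token, score in zip(tokens, rollout[mask_index]):
    print(f"{token:>10s}  {score.item():.3f}")

Summing the scores of sub-word tokens that belong to the same word or word group would give the aggregated word-group contributions discussed above.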