A Comparative Study of BERT-Based Attention Flows Versus Human Attentions on Fill-in-Blank Task

  • Conference paper
  • First Online:
HCI International 2022 – Late Breaking Papers: Interacting with eXtended Reality and Artificial Intelligence (HCII 2022)

Part of the book series: Lecture Notes in Computer Science (LNCS, volume 13518)


Abstract

The purpose of this small-scale study is to compare the BERT language model's attention flow—which quantifies the marginal contributions of individual words and aggregated word groups towards a fill-in-blank prediction—with human evaluators' opinions on the same task. Based on the limited number of experiments performed, we report the following findings: (1) Compared with human evaluators, the BERT base model pays less attention to verbs and more attention to nouns and other word types. This appears to agree with the natural partition hypothesis: nouns predominate over verbs in children's initial vocabularies because the meanings of nouns are easier to grasp. The premise of such a hypothesis is that the BERT base model behaves like a human child. (2) As sentences become longer and more complex, human evaluators can single out the major logical relation and are less distracted by other components of the structure. The attention flow scores calculated with the BERT base model, in contrast, amortize across multiple words and word groups as sentences grow longer and more complex. (3) The amortized attention flow scores calculated with the BERT base model provide a balanced, global view of the different types of discourse relations embedded in long and complex sentences. For future work, more examples will be prepared for detailed and rigorous verification of these findings.
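
As a concrete illustration of the kind of model-side score being compared, the sketch below computes attention rollout—the matrix-product approximation of attention flow from Abnar and Zuidema—for a fill-in-blank query using the HuggingFace transformers library. The example sentence, the bert-base-uncased checkpoint, and the rollout details are illustrative assumptions, not the authors' exact pipeline.

```python
# Minimal sketch (not the authors' exact pipeline): attention rollout for a
# fill-in-blank query with bert-base-uncased, in the spirit of Abnar & Zuidema.
# Rollout propagates attention through the layers while accounting for the
# residual connections, approximating the max-flow-based attention flow.
import torch
from transformers import BertTokenizer, BertForMaskedLM

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = BertForMaskedLM.from_pretrained("bert-base-uncased", output_attentions=True)
model.eval()

# Example fill-in-blank sentence (an assumption, not from the paper's data).
text = f"The chef {tokenizer.mask_token} the meal before the guests arrived."
inputs = tokenizer(text, return_tensors="pt")

with torch.no_grad():
    outputs = model(**inputs)

# outputs.attentions: one (batch, heads, seq, seq) tensor per layer.
seq_len = inputs["input_ids"].size(1)
rollout = torch.eye(seq_len)
for layer_attn in outputs.attentions:
    attn = layer_attn[0].mean(dim=0)                    # average over heads
    attn = 0.5 * attn + 0.5 * torch.eye(seq_len)        # model the residual connection
    attn = attn / attn.sum(dim=-1, keepdim=True)        # re-normalize rows
    rollout = attn @ rollout                            # accumulate across layers

# Attention the [MASK] position ultimately pays to each input token.
mask_idx = (inputs["input_ids"][0] == tokenizer.mask_token_id).nonzero().item()
tokens = tokenizer.convert_ids_to_tokens(inputs["input_ids"][0])
for tok, score in zip(tokens, rollout[mask_idx]):
    print(f"{tok:>12s}  {score:.4f}")
```

Exact attention flow solves a max-flow problem over the same layered attention graph; rollout is the cheaper approximation commonly used in practice, and per-token scores like these are what would be grouped by word type (noun, verb, etc.) for a comparison against human judgments.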

Author information

Correspondence to Ming Qian.

Copyright information

© 2022 Springer Nature Switzerland AG

About this paper

Cite this paper

Qian, M., Lee, K.W. (2022). A Comparative Study of BERT-Based Attention Flows Versus Human Attentions on Fill-in-Blank Task. In: Chen, J.Y.C., Fragomeni, G., Degen, H., Ntoa, S. (eds) HCI International 2022 – Late Breaking Papers: Interacting with eXtended Reality and Artificial Intelligence. HCII 2022. Lecture Notes in Computer Science, vol 13518. Springer, Cham. https://doi.org/10.1007/978-3-031-21707-4_36

  • DOI: https://doi.org/10.1007/978-3-031-21707-4_36

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-031-21706-7

  • Online ISBN: 978-3-031-21707-4

  • eBook Packages: Computer Science, Computer Science (R0)
