Attending Knowledge Facts with BERT-like Models in Question-Answering: Disappointing Results and Some Explanations

  • Conference paper
  • First Online:
Advances in Artificial Intelligence (Canadian AI 2020)

Part of the book series: Lecture Notes in Computer Science (LNAI, volume 12109)


Abstract

Since the first appearance of BERT, pretrained BERT-inspired models (XLNet, RoBERTa, ...) have delivered state-of-the-art results on a large number of Natural Language Processing tasks. This includes question answering, where previous models performed relatively poorly, particularly on datasets with a limited amount of data. In this paper we perform experiments with BERT on two such datasets, OpenBookQA and ARC. Our aim is to understand why, in our experiments, using BERT sentence representations inside an attention mechanism over a set of facts tends to give poor results. We demonstrate that in some cases the sentence representations produced by BERT are limited in terms of semantics, and that BERT often answers the questions in a meaningless way.
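To make the setup concrete, the sketch below illustrates the kind of architecture the abstract refers to: the question and each retrieved fact are encoded into single BERT sentence vectors, and a soft attention over the fact vectors yields a weighted context that a downstream classifier could use to score answer choices. This is a minimal illustration, not the authors' model: the [CLS] pooling, the dot-product attention, the bert-base-uncased checkpoint, and the toy question and facts are all assumptions made for the example.

```python
# Minimal sketch: BERT sentence representations attended over a set of facts.
# Pooling strategy, checkpoint, and example data are illustrative assumptions.
import torch
from transformers import BertTokenizer, BertModel

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
bert = BertModel.from_pretrained("bert-base-uncased")
bert.eval()

def encode(sentences):
    """Return one [CLS]-based vector per sentence (illustrative pooling choice)."""
    batch = tokenizer(sentences, padding=True, truncation=True, return_tensors="pt")
    with torch.no_grad():
        out = bert(**batch)
    return out.last_hidden_state[:, 0, :]          # shape: (n_sentences, hidden)

question = "Which object would conduct electricity best?"
facts = [
    "Metal is an electrical conductor.",
    "A suit of armor is made of metal.",
    "Wood does not conduct electricity.",
]

q_vec = encode([question])                          # (1, hidden)
f_vecs = encode(facts)                              # (n_facts, hidden)

# Attention over facts: dot-product scores, softmax, weighted sum.
scores = f_vecs @ q_vec.squeeze(0)                  # (n_facts,)
weights = torch.softmax(scores, dim=0)
context = weights @ f_vecs                          # (hidden,)

# In a full QA model, `context` (possibly concatenated with q_vec) would feed a
# classifier over the answer choices; here we only inspect the attention weights.
for fact, w in zip(facts, weights.tolist()):
    print(f"{w:.2f}  {fact}")
```

In this setting one would hope the attention weights concentrate on the facts relevant to the question; the negative result reported in the paper is that, with BERT sentence representations, such attention-over-facts models tend to perform poorly, which the authors attribute to the limited semantics captured by those representations.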



Author information


Corresponding author

Correspondence to Guillaume Le Berre.



Copyright information

© 2020 Springer Nature Switzerland AG

About this paper


Cite this paper

Le Berre, G., Langlais, P. (2020). Attending Knowledge Facts with BERT-like Models in Question-Answering: Disappointing Results and Some Explanations. In: Goutte, C., Zhu, X. (eds) Advances in Artificial Intelligence. Canadian AI 2020. Lecture Notes in Computer Science (LNAI), vol. 12109. Springer, Cham. https://doi.org/10.1007/978-3-030-47358-7_37


  • DOI: https://doi.org/10.1007/978-3-030-47358-7_37

  • Published:

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-030-47357-0

  • Online ISBN: 978-3-030-47358-7

  • eBook Packages: Computer Science, Computer Science (R0)
