Abstract
Since the first appearance of BERT, pretrained BERT-inspired models (XLNet, RoBERTa, etc.) have delivered state-of-the-art results on a large number of Natural Language Processing tasks, including question answering, where previous models performed relatively poorly, particularly on datasets with a limited amount of data. In this paper we report experiments with BERT on two such datasets, OpenBookQA and ARC. Our aim is to understand why, in our experiments, using BERT sentence representations inside an attention mechanism over a set of facts tends to give poor results. We demonstrate that, in some cases, the sentence representations produced by BERT carry limited semantic information and that BERT often answers questions in a meaningless way.
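To make the setup concrete, the sketch below illustrates one plausible reading of the architecture described in the abstract: the question and each knowledge fact are encoded with a BERT [CLS] sentence representation, and a dot-product attention over the facts produces a weighted knowledge vector. This is a minimal illustration under stated assumptions, not the authors' implementation; it assumes the HuggingFace transformers library and the bert-base-uncased checkpoint, and the helper name cls_embedding is hypothetical.

```python
# Hypothetical sketch (not the paper's code): attending over knowledge facts
# with BERT [CLS] sentence embeddings and dot-product attention.
import torch
from transformers import BertTokenizer, BertModel

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = BertModel.from_pretrained("bert-base-uncased")
model.eval()

def cls_embedding(sentences):
    """Encode sentences and return the [CLS] vector of each as its representation."""
    batch = tokenizer(sentences, padding=True, truncation=True, return_tensors="pt")
    with torch.no_grad():
        out = model(**batch)
    return out.last_hidden_state[:, 0, :]        # shape: (n_sentences, hidden_size)

question = "Can a suit of armor conduct electricity?"
facts = [
    "Metal is an electrical conductor.",
    "A suit of armor is usually made of metal.",
    "Wood does not conduct electricity.",
]

q = cls_embedding([question])                    # (1, hidden_size)
f = cls_embedding(facts)                         # (n_facts, hidden_size)

# Dot-product attention over the facts, then a weighted sum as retrieved knowledge.
weights = torch.softmax(q @ f.T, dim=-1)         # (1, n_facts)
context = weights @ f                            # (1, hidden_size)
print(weights)
```

In a full model, the attended context vector would typically be combined with the question and answer-candidate representations before classification; the paper's finding is that this kind of [CLS]-based attention often fails to focus on the relevant facts.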
References
Rajpurkar, P., Jia, R., Liang, P.: Know what you don’t know: unanswerable questions for SQuAD. CoRR, abs/1806.03822 (2018)
Rajpurkar, P., Zhang, J., Lopyrev, K., Liang, P.: SQuAD: 100,000+ questions for machine comprehension of text. CoRR, abs/1606.05250 (2016)
Reddy, S., Chen, D., Manning, C.D.: CoQA: a conversational question answering challenge. CoRR, abs/1808.07042 (2018)
Lai, G., Xie, Q., Liu, H., Yang, Y., Hovy, E.H.: RACE: large-scale reading comprehension dataset from examinations. CoRR, abs/1704.04683 (2017)
Mihaylov, T., Clark, P., Khot, T., Sabharwal, A.: Can a suit of armor conduct electricity? A new dataset for open book question answering. CoRR, abs/1809.02789 (2018)
Clark, P., et al.: Think you have solved question answering? Try ARC, the AI2 reasoning challenge. CoRR, abs/1803.05457 (2018)
Devlin, J., Chang, M.-W., Lee, K., Toutanova, K.: BERT: pre-training of deep bidirectional transformers for language understanding. CoRR, abs/1810.04805 (2018)
Zhu, Y., et al.: Aligning books and movies: towards story-like visual explanations by watching movies and reading books. arXiv e-prints, arXiv:1506.06724, June 2015
Vaswani, A., et al.: Attention is all you need. arXiv e-prints, arXiv:1706.03762, June 2017
Yang, Z., Dai, Z., Yang, Y., Carbonell, J.G., Salakhutdinov, R., Le, Q.V.: XLNet: generalized autoregressive pretraining for language understanding. CoRR, abs/1906.08237 (2019)
Liu, Y., et al.: RoBERTa: a robustly optimized BERT pretraining approach. CoRR, abs/1907.11692 (2019)
Reimers, N., Gurevych, I.: Sentence-BERT: sentence embeddings using Siamese BERT-networks. arXiv e-prints, arXiv:1908.10084, August 2019
Bowman, S.R., Angeli, G., Potts, C., Manning, C.D.: A large annotated corpus for learning natural language inference. In: Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, Lisbon, Portugal, pp. 632–642. Association for Computational Linguistics, September 2015
Hudson, D.A., Manning, C.D.: Compositional attention networks for machine reasoning. CoRR, abs/1803.03067 (2018)
Johnson, J., Hariharan, B., van der Maaten, L., Fei-Fei, L., Lawrence Zitnick, C., Girshick, R.B.: CLEVR: a diagnostic dataset for compositional language and elementary visual reasoning. CoRR, abs/1612.06890 (2016)
Wolf, T., et al.: HuggingFace’s transformers: state-of-the-art natural language processing (2019)
Copyright information
© 2020 Springer Nature Switzerland AG
About this paper
Cite this paper
Le Berre, G., Langlais, P. (2020). Attending Knowledge Facts with BERT-like Models in Question-Answering: Disappointing Results and Some Explanations. In: Goutte, C., Zhu, X. (eds) Advances in Artificial Intelligence. Canadian AI 2020. Lecture Notes in Computer Science(), vol 12109. Springer, Cham. https://doi.org/10.1007/978-3-030-47358-7_37
DOI: https://doi.org/10.1007/978-3-030-47358-7_37
Publisher Name: Springer, Cham
Print ISBN: 978-3-030-47357-0
Online ISBN: 978-3-030-47358-7