Conversational Machine Comprehension

Chapter in: Neural Approaches to Conversational Information Retrieval

Part of the book series: The Information Retrieval Series (INRE, volume 44)


Abstract

This chapter discusses neural approaches to conversational machine comprehension (CMC). A CMC module, often referred to as a reader, generates a direct answer to a user query based on query-relevant documents retrieved by the document search module.
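To make the reader's role concrete, the sketch below shows one common way a BERT-style CMC reader serializes its input: previous question-answer turns are prepended to the current question, so the model can resolve references such as "he" or "it", and the retrieved passage follows a separator token. This is a minimal illustrative sketch, not the chapter's implementation; the function name `build_reader_input` and the example texts are assumptions.

```python
# Minimal sketch of input construction for a BERT-style conversational
# machine comprehension (CMC) reader. Earlier turns are concatenated with
# the current question; the retrieved passage is appended after [SEP].

CLS, SEP = "[CLS]", "[SEP]"

def build_reader_input(history, question, passage):
    """Serialize (history, current question, passage) into one sequence.

    history  -- list of (question, answer) pairs from earlier turns
    question -- the current user question
    passage  -- a query-relevant passage from the document search module
    """
    turns = " ".join(f"{q} {a}" for q, a in history)
    query = f"{turns} {question}".strip()
    return f"{CLS} {query} {SEP} {passage} {SEP}"

history = [("Who wrote Hamlet?", "Shakespeare.")]
seq = build_reader_input(history, "When did he write it?",
                         "Hamlet was written around 1600.")
print(seq)
# → [CLS] Who wrote Hamlet? Shakespeare. When did he write it? [SEP] Hamlet was written around 1600. [SEP]
```

A trained reader would tokenize this sequence and predict answer-span start and end positions over the passage segment; including the history turns is what lets it ground the pronoun "he" from the earlier turn.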


Notes

  1. https://huggingface.co.



Copyright information

© 2023 The Author(s), under exclusive license to Springer Nature Switzerland AG

About this chapter


Cite this chapter

Gao, J., Xiong, C., Bennett, P., & Craswell, N. (2023). Conversational Machine Comprehension. In: Neural Approaches to Conversational Information Retrieval. The Information Retrieval Series, vol 44. Springer, Cham. https://doi.org/10.1007/978-3-031-23080-6_5


  • DOI: https://doi.org/10.1007/978-3-031-23080-6_5

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-031-23079-0

  • Online ISBN: 978-3-031-23080-6

  • eBook Packages: Computer Science, Computer Science (R0)
