Abstract
This chapter discusses neural approaches to conversational machine comprehension (CMC). A CMC module, often referred to as a reader, generates a direct answer to a user query based on query-relevant documents retrieved by the document search module.
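The retriever-reader pipeline described above can be sketched in miniature. The code below is a toy illustration only, assuming keyword-overlap scoring in place of the neural retrievers and readers the chapter surveys; the function names `retrieve` and `read` are hypothetical and not from any specific library.

```python
import re

def tokenize(text):
    """Lowercase and split text into a set of word tokens."""
    return set(re.findall(r"\w+", text.lower()))

def retrieve(query, documents, k=2):
    """Toy document search module: rank documents by word overlap with the query."""
    scored = sorted(documents,
                    key=lambda d: len(tokenize(d) & tokenize(query)),
                    reverse=True)
    return scored[:k]

def read(query, documents):
    """Toy reader: return the sentence most similar to the query as a direct answer."""
    sentences = [s.strip() for d in documents for s in d.split(".") if s.strip()]
    return max(sentences, key=lambda s: len(tokenize(s) & tokenize(query)))

docs = [
    "Paris is the capital of France. It lies on the Seine.",
    "Berlin is the capital of Germany.",
]
answer = read("capital of France", retrieve("capital of France", docs))
print(answer)  # -> Paris is the capital of France
```

In a real CMC system both stages would be neural models and the reader would also condition on the conversation history, not just the current query.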
Copyright information
© 2023 The Author(s), under exclusive license to Springer Nature Switzerland AG
Cite this chapter
Gao, J., **ong, C., Bennett, P., Craswell, N. (2023). Conversational Machine Comprehension. In: Neural Approaches to Conversational Information Retrieval. The Information Retrieval Series, vol 44. Springer, Cham. https://doi.org/10.1007/978-3-031-23080-6_5
Print ISBN: 978-3-031-23079-0
Online ISBN: 978-3-031-23080-6