Generating Contextually Coherent Responses by Learning Structured Vectorized Semantics

  • Conference paper
Database Systems for Advanced Applications (DASFAA 2021)

Part of the book series: Lecture Notes in Computer Science (LNISA, volume 12682)


Abstract

Generating contextually coherent responses has been one of the most critical challenges in building intelligent dialogue systems. The key issues are how to appropriately encode contexts and how to make good use of them during generation. Past works either directly use (hierarchical) RNNs to encode contexts or use attention-based variants to further weight different words and utterances. They tend to learn dispersed focuses over all contextual information, which contradicts the fact that humans tend to respond to certain concentrated semantics of a context. As a result, generated responses are only semantically related to, but not precisely coherent with, the given contexts. To this end, this paper proposes a contextually coherent dialogue generation (ConDial) method that first encodes contexts into structured semantic vectors using self-attention, and then adaptively chooses key semantic vectors to guide the response generation. Based on the structured semantics, it also develops a calibration mechanism with a dynamic vocabulary during decoding, which enhances exactly coherent expressions by adjusting the word distribution. Experiments show that ConDial outperforms state-of-the-art methods in generative performance and is capable of generating responses that not only continue the topics but also keep coherent contextual expressions.
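The first step the abstract describes, encoding a context into multiple structured semantic vectors with self-attention, can be illustrated with a minimal sketch in the style of a structured self-attentive sentence embedding. All dimensions, names, and the random inputs below are illustrative assumptions, not the paper's actual configuration.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax along the given axis.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def structured_self_attention(H, W1, W2):
    """Map hidden states H (n x d) of a context to r semantic vectors (r x d).

    A = softmax(W2 @ tanh(W1 @ H^T))  -- annotation matrix, shape (r, n),
                                         each row attends to the sequence
    M = A @ H                         -- r structured semantic vectors
    """
    A = softmax(W2 @ np.tanh(W1 @ H.T), axis=-1)  # (r, n)
    return A @ H                                   # (r, d)

rng = np.random.default_rng(0)
n, d, da, r = 6, 8, 4, 3             # toy sizes: seq len, hidden, attn, vectors
H = rng.standard_normal((n, d))      # stand-in for RNN hidden states
W1 = rng.standard_normal((da, d))    # learned projection (random here)
W2 = rng.standard_normal((r, da))    # learned annotation heads (random here)
M = structured_self_attention(H, W1, W2)
print(M.shape)  # (3, 8): three semantic vectors summarizing one context
```

A generation model could then score these r vectors against the decoder state and attend mainly to the highest-scoring ones, which is the spirit of the "adaptively choosing key semantic vectors" step.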

Y. Wang and Y. Zheng contributed equally as first authors.


Notes

  1. http://yanran.li/dailydialog.html.
  2. https://github.com/cgpotts/swda.
  3. https://github.com/tensorflow/tensorflow.


Acknowledgements

The work was supported by the National Key Research and Development Program of China (No. 2016YFB1001101).


Copyright information

© 2021 Springer Nature Switzerland AG

About this paper


Cite this paper

Wang, Y., Zheng, Y., Jiang, S., Dong, Y., Chen, J., Wang, S. (2021). Generating Contextually Coherent Responses by Learning Structured Vectorized Semantics. In: Jensen, C.S., et al. Database Systems for Advanced Applications. DASFAA 2021. Lecture Notes in Computer Science, vol. 12682. Springer, Cham. https://doi.org/10.1007/978-3-030-73197-7_5


  • DOI: https://doi.org/10.1007/978-3-030-73197-7_5

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-030-73196-0

  • Online ISBN: 978-3-030-73197-7

  • eBook Packages: Computer Science, Computer Science (R0)
