On Robustness of Generative Representations Against Catastrophic Forgetting

  • Conference paper
Neural Information Processing (ICONIP 2021)

Abstract

Catastrophic forgetting of previously learned knowledge while learning new tasks is a widely observed limitation of contemporary neural networks. Although many continual learning methods have been proposed to mitigate this drawback, the main question remains unanswered: what is the root cause of catastrophic forgetting? In this work, we aim at answering this question by posing and validating a set of research hypotheses related to the specificity of representations built internally by neural models. More specifically, we design a set of empirical evaluations that compare the robustness of representations in discriminative and generative models against catastrophic forgetting. We observe that representations learned by discriminative models are more prone to catastrophic forgetting than their generative counterparts, which sheds new light on the advantages of developing generative models for continual learning. Finally, our work opens new research pathways and possibilities to adopt generative models in continual learning beyond mere replay mechanisms.
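Robustness of representations against forgetting is commonly quantified by measuring how much a layer's activations on fixed probe inputs drift after training on a subsequent task; one standard similarity measure for this purpose is linear centered kernel alignment (CKA). The sketch below is an illustrative, minimal implementation of linear CKA, not the authors' actual evaluation code; the matrices `H_before`, `H_slight_drift`, and `H_overwritten` are hypothetical stand-ins for hidden activations of the same probe inputs at different points of sequential training.

```python
import numpy as np

def linear_cka(X, Y):
    """Linear centered kernel alignment between two activation
    matrices of shape (n_samples, n_features)."""
    X = X - X.mean(axis=0)  # center each feature dimension
    Y = Y - Y.mean(axis=0)
    # HSIC-based similarity: ||Y^T X||_F^2 normalized by self-similarities
    num = np.linalg.norm(Y.T @ X, ord="fro") ** 2
    den = np.linalg.norm(X.T @ X, ord="fro") * np.linalg.norm(Y.T @ Y, ord="fro")
    return num / den

rng = np.random.default_rng(0)
H_before = rng.standard_normal((200, 32))        # activations before the new task
H_slight_drift = H_before + 0.05 * rng.standard_normal((200, 32))  # robust representation
H_overwritten = rng.standard_normal((200, 32))   # representation fully rewritten

print(round(linear_cka(H_before, H_before), 3))   # identical representations -> 1.0
print(linear_cka(H_before, H_slight_drift))       # stays close to 1: little forgetting
print(linear_cka(H_before, H_overwritten))        # near 0: representation was lost
```

A representation whose CKA with its pre-task snapshot stays high after further training is robust in the sense studied here; a sharp drop signals that the features were overwritten.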

K. Deja: Work done prior to joining Amazon.


Acknowledgment

This research was funded by National Science Centre, Poland (grants no. 2020/39/B/ST6/01511 and 2018/31/N/ST6/02374), Foundation for Polish Science (grant no. POIR.04.04.00-00-14DE/18-00 carried out within the Team-Net program co-financed by the European Union under the European Regional Development Fund) and Warsaw University of Technology (POB Research Centre for Artificial Intelligence and Robotics within the Excellence Initiative Program - Research University). For the purpose of Open Access, the author has applied a CC-BY public copyright license to any Author Accepted Manuscript (AAM) version arising from this submission.

Author information

Correspondence to Wojciech Masarczyk.


Copyright information

© 2021 Springer Nature Switzerland AG

About this paper

Cite this paper

Masarczyk, W., Deja, K., Trzcinski, T. (2021). On Robustness of Generative Representations Against Catastrophic Forgetting. In: Mantoro, T., Lee, M., Ayu, M.A., Wong, K.W., Hidayanto, A.N. (eds) Neural Information Processing. ICONIP 2021. Communications in Computer and Information Science, vol 1517. Springer, Cham. https://doi.org/10.1007/978-3-030-92310-5_38

  • DOI: https://doi.org/10.1007/978-3-030-92310-5_38

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-030-92309-9

  • Online ISBN: 978-3-030-92310-5

  • eBook Packages: Computer Science, Computer Science (R0)
