Generative Subgraph Contrast for Self-Supervised Graph Representation Learning

  • Conference paper
  • Computer Vision – ECCV 2022 (ECCV 2022)
  • Part of the book series: Lecture Notes in Computer Science (LNCS, volume 13690)

Abstract

Contrastive learning has shown great promise in the field of graph representation learning. By manually constructing positive/negative samples, most graph contrastive learning methods rely on a vector inner-product based similarity metric to distinguish samples for graph representation. However, handcrafted sample construction (e.g., perturbation of the nodes or edges of the graph) may not effectively capture the intrinsic local structures of the graph. Also, the vector inner-product based similarity metric cannot fully exploit the local structures of the graph to characterize graph differences well. To this end, in this paper, we propose a novel adaptive subgraph generation based contrastive learning framework for efficient and robust self-supervised graph representation learning, and the optimal transport distance is utilized as the similarity metric between subgraphs. The framework aims to generate contrastive samples that capture the intrinsic structures of the graph and to distinguish samples based on the features and structures of the subgraphs simultaneously. Specifically, for each center node, we first develop a network that adaptively learns relation weights to the nodes of the corresponding neighborhood and generates an interpolated subgraph. We then construct positive and negative pairs of subgraphs from the same and different nodes, respectively. Finally, we employ two types of optimal transport distance (i.e., the Wasserstein distance and the Gromov-Wasserstein distance) to construct the structured contrastive loss. Extensive node classification experiments on benchmark datasets verify the effectiveness of our graph contrastive learning method. Source code is available at https://github.com/yh-han/GSC.git.
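To make the structured contrast concrete, the sketch below shows one way the two optimal transport distances mentioned in the abstract could be computed between a pair of subgraphs in PyTorch: an entropy-regularized Wasserstein distance over node features and a Gromov-Wasserstein style distance over intra-subgraph similarity matrices. This is a minimal illustration under assumed choices (cosine cost, uniform node weights, Sinkhorn iterations), not the authors' released implementation; see the repository linked above for the actual code.

```python
# A minimal sketch (not the authors' released code) of the structured contrast:
# an entropy-regularized Wasserstein distance over node features and a
# Gromov-Wasserstein-style distance over intra-subgraph structure.
# The cosine cost, uniform weights, and hyperparameters are assumptions.
import torch
import torch.nn.functional as F


def sinkhorn(cost, eps=0.1, n_iters=50):
    """Entropic OT plan for a cost matrix with uniform marginals (Sinkhorn-Knopp)."""
    n, m = cost.shape
    mu = torch.full((n,), 1.0 / n)
    nu = torch.full((m,), 1.0 / m)
    K = torch.exp(-cost / eps)              # Gibbs kernel
    u = torch.ones(n)
    for _ in range(n_iters):                # alternating marginal scaling
        v = nu / (K.t() @ u)
        u = mu / (K @ v)
    return u[:, None] * K * v[None, :]      # transport plan T


def wasserstein_distance(x, y, eps=0.1):
    """Wasserstein distance between two subgraphs' node features (cosine cost)."""
    cost = 1.0 - F.normalize(x, dim=1) @ F.normalize(y, dim=1).t()
    T = sinkhorn(cost, eps)
    return (T * cost).sum()


def gromov_wasserstein_distance(ax, ay, eps=0.1, n_outer=10):
    """GW-style distance between intra-subgraph similarity matrices ax (n x n), ay (m x m)."""
    n, m = ax.shape[0], ay.shape[0]
    mu = torch.full((n,), 1.0 / n)
    nu = torch.full((m,), 1.0 / m)
    T = mu[:, None] * nu[None, :]           # start from the independent coupling
    for _ in range(n_outer):
        # squared-loss GW cost induced by the current plan (cf. Peyre et al., 2016)
        cost = ((ax ** 2) @ mu)[:, None] + ((ay ** 2) @ nu)[None, :] - 2.0 * ax @ T @ ay.t()
        T = sinkhorn(cost, eps)
    return (cost * T).sum()


# Toy usage: a positive pair of subgraphs (e.g., a sampled subgraph and a
# generated one for the same center node) with 5 and 6 nodes, 16-dim features.
x, y = torch.randn(5, 16), torch.randn(6, 16)
ax = F.normalize(x, dim=1) @ F.normalize(x, dim=1).t()   # intra-subgraph similarity
ay = F.normalize(y, dim=1) @ F.normalize(y, dim=1).t()
pos_distance = wasserstein_distance(x, y) + gromov_wasserstein_distance(ax, ay)
print(pos_distance)   # a trained model would make this small for positive pairs
```

In a contrastive loss of this kind, such distances would be driven down for positive subgraph pairs (same center node) and up for negative pairs (different nodes); the combination of the two terms compares node features and subgraph structure simultaneously.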



Acknowledgments

The authors would like to thank the reviewers for their detailed comments and instructive suggestions. This work was supported by the National Science Fund of China (Grant Nos. 61876084, 61876083, 62176124).

Author information

Correspondence to Jianjun Qian or Jin Xie.

Electronic supplementary material

Below is the link to the electronic supplementary material.

Supplementary material 1 (pdf 3451 KB)


Copyright information

© 2022 The Author(s), under exclusive license to Springer Nature Switzerland AG

About this paper


Cite this paper

Han, Y., Hui, L., Jiang, H., Qian, J., Xie, J. (2022). Generative Subgraph Contrast for Self-Supervised Graph Representation Learning. In: Avidan, S., Brostow, G., Cissé, M., Farinella, G.M., Hassner, T. (eds) Computer Vision – ECCV 2022. ECCV 2022. Lecture Notes in Computer Science, vol 13690. Springer, Cham. https://doi.org/10.1007/978-3-031-20056-4_6

Download citation

  • DOI: https://doi.org/10.1007/978-3-031-20056-4_6

  • Published:

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-031-20055-7

  • Online ISBN: 978-3-031-20056-4

  • eBook Packages: Computer Science, Computer Science (R0)
