Hierarchical Graph Contrastive Learning

  • Conference paper
In: Machine Learning and Knowledge Discovery in Databases: Research Track (ECML PKDD 2023)

Part of the book series: Lecture Notes in Computer Science (LNAI, volume 14170)


Abstract

Unsupervised graph representation learning with GNNs is critically important because graph labels are difficult to obtain in many real applications. Graph contrastive learning (GCL), a recently popular approach to unsupervised learning on graphs, has achieved great success on many tasks. However, existing graph-level GCL models generally contrast graph-level or node-level representations, largely ignoring the hierarchical structure that is ubiquitous in many real-world graphs such as social networks and molecular graphs. To bridge this gap, this paper proposes a novel hierarchical graph contrastive learning model named HIGCL. HIGCL uses a multi-layered architecture and contains two contrastive objectives: inner-contrasting, which performs inner-scale contrastive learning to learn the flat structural features within each layer, and hierarchical-contrasting, which performs cross-scale contrastive learning to capture the hierarchical features across layers. Extensive experiments on graph-level tasks show the effectiveness of the proposed method.
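The cross-scale idea sketched in the abstract can be illustrated with a generic InfoNCE-style loss: node embeddings at a fine scale are contrasted against the pooled embeddings of the clusters they belong to at a coarser scale. The sketch below is purely illustrative and uses a hypothetical hard cluster assignment; it is not the authors' implementation of HIGCL, whose actual pooling and loss details are given in the paper.

```python
import numpy as np

def info_nce(anchors, positives, temperature=0.5):
    """Generic InfoNCE loss: each anchor's positive is the same-index row
    of `positives`; all other rows serve as in-batch negatives."""
    a = anchors / np.linalg.norm(anchors, axis=1, keepdims=True)
    p = positives / np.linalg.norm(positives, axis=1, keepdims=True)
    logits = a @ p.T / temperature                 # (N, N) similarity matrix
    logits -= logits.max(axis=1, keepdims=True)    # numerical stability
    log_prob = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_prob))

rng = np.random.default_rng(0)
nodes = rng.normal(size=(8, 16))           # fine-scale node embeddings (toy data)
assign = np.repeat(np.eye(2), 4, axis=0)   # hypothetical 8-node -> 2-cluster pooling
clusters = assign.T @ nodes / assign.sum(axis=0, keepdims=True).T  # coarse embeddings

# Cross-scale ("hierarchical-contrasting") term: each node is pulled toward the
# coarse embedding of its own cluster and pushed away from the other cluster's.
# (Nodes sharing a cluster get identical positives here -- a simplification of
# this toy example, not a property of the method in the paper.)
cross_scale_loss = info_nce(nodes, assign @ clusters)
```

An inner-scale ("inner-contrasting") term would apply the same loss within a single layer, e.g. between embeddings of two augmented views of the same graph.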




Acknowledgement

This research was funded by the National Science Foundation of China (No. 62172443), the Open Project of Xiangjiang Laboratory (22XJ02002, 22XJ03025), the Hunan Provincial Natural Science Foundation of China (No. 2022JJ30053), and the High Performance Computing Center of Central South University.

Corresponding author

Correspondence to Senzhang Wang.



Copyright information

© 2023 The Author(s), under exclusive license to Springer Nature Switzerland AG

About this paper


Cite this paper

Yan, H., Wang, S., Yin, J., Li, C., Zhu, J., Wang, J. (2023). Hierarchical Graph Contrastive Learning. In: Koutra, D., Plant, C., Gomez Rodriguez, M., Baralis, E., Bonchi, F. (eds) Machine Learning and Knowledge Discovery in Databases: Research Track. ECML PKDD 2023. Lecture Notes in Computer Science(), vol 14170. Springer, Cham. https://doi.org/10.1007/978-3-031-43415-0_41

Download citation

  • DOI: https://doi.org/10.1007/978-3-031-43415-0_41


  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-031-43414-3

  • Online ISBN: 978-3-031-43415-0

  • eBook Packages: Computer Science, Computer Science (R0)
