NE-WNA: A Novel Network Embedding Framework Without Neighborhood Aggregation

  • Conference paper
Machine Learning and Knowledge Discovery in Databases (ECML PKDD 2022)

Part of the book series: Lecture Notes in Computer Science (LNAI, volume 13714)


Abstract

Graph Neural Networks (GNNs) are powerful tools for representation learning on graphs. Most GNNs rely on the message passing mechanism to obtain discriminative feature representations. However, because of this mechanism, most existing GNNs are inherently prone to over-smoothing and poor robustness. We therefore propose a simple yet effective Network Embedding framework Without Neighborhood Aggregation (NE-WNA). Specifically, NE-WNA removes the neighborhood aggregation operation from the message passing mechanism: it takes only node features as input and obtains node representations through a simple autoencoder. We also design an enhanced neighboring contrastive (ENContrast) loss to incorporate the graph structure into the node representations. In the representation space, ENContrast encourages low-order neighbors to lie closer to the target node than high-order neighbors. Experimental results show that NE-WNA achieves high accuracy on the node classification task and high robustness against adversarial attacks.
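The two ingredients described in the abstract — an autoencoder that sees only node features, and a contrastive term that weights neighbors by hop distance — can be sketched in a few lines of NumPy. This is an illustrative toy under assumed settings, not the authors' implementation: the layer sizes, the hop-decay factor `gamma`, the temperature `tau`, and the exact form of the weighted contrastive objective are all placeholders chosen for clarity.

```python
import numpy as np

# Toy path graph 0-1-2-3 (symmetric adjacency) with random node features.
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
X = np.random.default_rng(0).normal(size=(4, 8))

# --- Autoencoder without neighborhood aggregation: the encoder sees only X ---
rng = np.random.default_rng(1)
W_enc = rng.normal(scale=0.1, size=(8, 4))
W_dec = rng.normal(scale=0.1, size=(4, 8))

def relu(x):
    return np.maximum(x, 0.0)

H = relu(X @ W_enc)                 # node representations (no graph input here)
X_rec = H @ W_dec                   # feature reconstruction
recon_loss = np.mean((X - X_rec) ** 2)

def hop_matrix(A, K=2):
    """hops[i, j] = smallest hop distance from i to j if it is <= K, else 0."""
    n = len(A)
    hops = np.zeros((n, n))
    reached = np.eye(n, dtype=bool)   # treat each node as already "at" itself
    power = np.eye(n)
    for k in range(1, K + 1):
        power = power @ A             # nonzero entries reachable in k steps
        new = (power > 0) & ~reached
        hops[new] = k
        reached |= new
    return hops

hops = hop_matrix(A, K=2)

# Cosine similarities between node representations.
Z = H / (np.linalg.norm(H, axis=1, keepdims=True) + 1e-8)
sim = Z @ Z.T

# ENContrast-style weighting (assumed form): lower-order neighbors get larger
# weights, so they are pulled closer to the target node than high-order ones.
gamma, tau = 0.5, 0.5               # hop-decay and temperature (assumed values)
weights = np.where(hops > 0, gamma ** (hops - 1), 0.0)  # 1-hop: 1.0, 2-hop: 0.5

exp_sim = np.exp(sim / tau)
pos = (weights * exp_sim).sum(axis=1)            # hop-weighted neighbor terms
denom = exp_sim.sum(axis=1) - np.diag(exp_sim)   # all other nodes as contrast
encontrast_loss = -np.mean(np.log(pos / denom + 1e-8))
```

The key point the sketch preserves is that the graph enters only through the loss (via `hops` and `weights`), never through the encoder, so there is no neighborhood aggregation step to over-smooth or to attack.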



Notes

  1. https://github.com/YJ199804/NE-WNA.


Acknowledgements

This work was supported by the National Natural Science Foundation of China (No. 61972135), and the Natural Science Foundation of Heilongjiang Province in China (No. LH2020F043).

Author information

Correspondence to Yan Yang or Yong Liu.


Copyright information

© 2023 The Author(s), under exclusive license to Springer Nature Switzerland AG

About this paper


Cite this paper

Zhang, J., Yang, Y., Liu, Y., Han, M. (2023). NE-WNA: A Novel Network Embedding Framework Without Neighborhood Aggregation. In: Amini, MR., Canu, S., Fischer, A., Guns, T., Kralj Novak, P., Tsoumakas, G. (eds) Machine Learning and Knowledge Discovery in Databases. ECML PKDD 2022. Lecture Notes in Computer Science, vol 13714. Springer, Cham. https://doi.org/10.1007/978-3-031-26390-3_26


  • DOI: https://doi.org/10.1007/978-3-031-26390-3_26

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-031-26389-7

  • Online ISBN: 978-3-031-26390-3

  • eBook Packages: Computer Science (R0)
