
State of the art on adversarial attacks and defenses in graphs

  • Review
  • Published:
Neural Computing and Applications

Abstract

Graph neural networks (GNNs) have shown excellent performance on complex graph-data modeling tasks such as node classification, link prediction and graph classification. However, GNNs are vulnerable to adversarial attacks that cause severe performance degradation, raising many security and privacy issues. This vulnerability limits the application of GNNs in safety-critical fields such as finance and transportation. Studying the principles and implementation of graph adversarial attacks and their countermeasures provides insight into the causes of this vulnerability and can consequently improve the robustness and generalization of the models. This paper introduces the concepts underlying existing graph adversarial attack and defense algorithms and analyzes the basic idea and implementation of each algorithm. Moreover, we compare the strategies, target tasks, advantages and disadvantages of typical algorithms. By summarizing the state of the art, we analyze the limitations and possible development directions of graph adversarial attacks and defenses, providing a useful reference for the further development of related research.
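To make the kind of attack discussed above concrete, the following is a minimal NumPy sketch of a gradient-style structure attack on a toy one-layer, GCN-like model: it searches for the single edge flip that most lowers a target node's score. All function names, the toy model and the exhaustive edge search are illustrative assumptions for this sketch, not an algorithm from the survey.

```python
import numpy as np

def normalize(adj):
    # Symmetric degree normalization with self-loops: D^-1/2 (A + I) D^-1/2
    a = adj + np.eye(adj.shape[0])
    d_inv = 1.0 / np.sqrt(a.sum(1))
    return a * d_inv[:, None] * d_inv[None, :]

def predict(adj, feats, weight):
    # One propagation step followed by a linear classifier (GCN-like).
    return normalize(adj) @ feats @ weight

def edge_flip_attack(adj, feats, weight, target, label):
    """Return the single edge whose flip (add or remove) most decreases
    the target node's score for the given label, plus the decrease."""
    n = adj.shape[0]
    base = predict(adj, feats, weight)[target, label]
    best, best_gain = None, 0.0
    for i in range(n):
        for j in range(i + 1, n):
            cand = adj.copy()
            cand[i, j] = cand[j, i] = 1.0 - cand[i, j]  # flip edge (i, j)
            gain = base - predict(cand, feats, weight)[target, label]
            if gain > best_gain:
                best, best_gain = (i, j), gain
    return best, best_gain
```

Real attacks in the literature replace the exhaustive search with gradients of the loss with respect to adjacency entries, but the objective is the same: a small, targeted structural perturbation that degrades the model's prediction.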

Fig. 1


Data availability

Data sharing is not applicable to this article as no datasets were generated or analyzed during the current study.


Acknowledgements

This work was supported by the National Natural Science Foundation of China (No. 61502262).

Author information

Corresponding author

Correspondence to Zhengli Zhai.

Ethics declarations

Conflict of interest

The authors declare that they have no commercial or associative interests that represent a conflict of interest in connection with the submitted work.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.


About this article


Cite this article

Zhai, Z., Li, P. & Feng, S. State of the art on adversarial attacks and defenses in graphs. Neural Comput & Applic 35, 18851–18872 (2023). https://doi.org/10.1007/s00521-023-08839-9
