On Effectiveness of the Adversarial Attacks on the Computer Systems of Biomedical Images Classification

  • Conference paper
Distributed Computer and Communication Networks (DCCN 2022)

Abstract

The vulnerability of computer systems for biomedical image classification to adversarial attacks is investigated. The aim of the work is to study how effectively various adversarial attack models degrade biomedical image classifiers, and how this effectiveness depends on the control parameters of the algorithms that generate the attacking versions of the images. Attacks prepared with the projected gradient descent (PGD), DeepFool (DF), and Carlini-Wagner (CW) algorithms are examined. Experimental studies were carried out on typical medical image classification problems using the deep neural networks VGG16, EfficientNetB2, DenseNet121, Xception, and ResNet50, on datasets of chest X-ray images and brain MRI scans.
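
To make the role of these control parameters concrete, below is a minimal sketch of a PGD attack in Python (PyTorch is assumed here purely for illustration; the paper does not commit to a framework). The perturbation budget eps, step size alpha, iteration count, pretrained VGG16 weights, and the random placeholder batch are illustrative assumptions, not the settings used in the study.

# Minimal PGD sketch (PyTorch assumed for illustration only).
import torch
import torch.nn.functional as F
from torchvision import models

def pgd_attack(model, images, labels, eps=0.03, alpha=0.007, steps=10):
    # eps: L-infinity perturbation budget; alpha: step size; steps: iterations.
    images = images.clone().detach()
    adv = (images + torch.empty_like(images).uniform_(-eps, eps)).clamp(0, 1)  # random start
    for _ in range(steps):
        adv.requires_grad_(True)
        loss = F.cross_entropy(model(adv), labels)
        grad = torch.autograd.grad(loss, adv)[0]
        with torch.no_grad():
            adv = adv + alpha * grad.sign()                  # ascend the classification loss
            adv = images + (adv - images).clamp(-eps, eps)   # project back into the eps-ball
            adv = adv.clamp(0, 1)                            # keep valid pixel range
    return adv.detach()

# Illustrative usage: ImageNet-pretrained VGG16 (torchvision >= 0.13 weights API)
# on a random placeholder batch, not the chest X-ray or MRI data from the paper.
model = models.vgg16(weights=models.VGG16_Weights.DEFAULT).eval()
x = torch.rand(4, 3, 224, 224)
y = model(x).argmax(dim=1)
x_adv = pgd_attack(model, x, y)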

Our findings are as follows. The deep models were highly susceptible to adversarial attacks, which reduced classification accuracy on all datasets. Before applying the adversarial methods, we achieved a classification accuracy of 93.6% for brain MRI and 99.1% for chest X-rays. Under the DF attack, the accuracy of the VGG16 model showed a maximum absolute decrease of 49.8% on MRI scans and 57.3% on chest X-ray images. The projected gradient descent (PGD) algorithm with the same magnitude of image perturbation is less effective than the DF and CW attacks. Among the models considered, VGG16 achieved the highest classification accuracy on these datasets and was also the most vulnerable to adversarial attacks. We hope that these results will be useful for designing more robust and secure medical deep learning systems.
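
The reported accuracy decreases correspond to evaluating the same model on clean and on attacked versions of the test images. A hedged sketch of such a comparison is given below, reusing the illustrative model, batch, and pgd_attack function from the previous example; the numbers it prints refer to the placeholder data, not to the paper's datasets.

# Hedged evaluation sketch: accuracy before vs. after the attack,
# reusing model, x, y and pgd_attack from the previous example.
import torch

@torch.no_grad()
def accuracy(model, images, labels):
    # Fraction of images whose predicted class matches the reference label.
    return (model(images).argmax(dim=1) == labels).float().mean().item()

x_adv = pgd_attack(model, x, y)
clean_acc = accuracy(model, x, y)
adv_acc = accuracy(model, x_adv, y)
print(f"clean accuracy: {clean_acc:.3f}, adversarial accuracy: {adv_acc:.3f}, "
      f"absolute decrease: {clean_acc - adv_acc:.3f}")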

Y. Blinkov—This paper has been supported by the RUDN University Strategic Academic Leadership Program.



Author information

Corresponding author

Correspondence to Yury A. Blinkov.

Copyright information

© 2023 The Author(s), under exclusive license to Springer Nature Switzerland AG

About this paper

Cite this paper

Shchetinin, E.Y., Glushkova, A.G., Blinkov, Y.A. (2023). On Effectiveness of the Adversarial Attacks on the Computer Systems of Biomedical Images Classification. In: Vishnevskiy, V.M., Samouylov, K.E., Kozyrev, D.V. (eds) Distributed Computer and Communication Networks. DCCN 2022. Communications in Computer and Information Science, vol 1748. Springer, Cham. https://doi.org/10.1007/978-3-031-30648-8_8

  • DOI: https://doi.org/10.1007/978-3-031-30648-8_8

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-031-30647-1

  • Online ISBN: 978-3-031-30648-8

  • eBook Packages: Computer Science, Computer Science (R0)
