Abstract
The vulnerability of computer systems for biomedical image classification to adversarial attacks is investigated. The aim of the work is to study how effectively various adversarial attack models degrade biomedical image classification, and how the control parameters of the algorithms that generate the attacking image versions affect this degradation. We evaluate attacks prepared with the projected gradient descent (PGD), DeepFool (DF), and Carlini-Wagner (CW) algorithms. Experiments were carried out on typical medical image classification problems using the deep neural networks VGG16, EfficientNetB2, DenseNet121, Xception, and ResNet50, with datasets of chest X-ray images and brain MRI scans.
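The paper reports results rather than code, but the PGD attack it evaluates is well defined. The following is a minimal, illustrative PyTorch sketch of the L-infinity variant of PGD in the spirit of Madry et al., not the authors' implementation; the names model, eps, alpha, and steps are assumptions, with eps setting the perturbation budget applied to each image:

```python
import torch
import torch.nn.functional as F

def pgd_attack(model, x, y, eps=0.03, alpha=0.007, steps=10):
    """L-infinity PGD: repeatedly take a signed gradient-ascent step
    on the loss, then project the perturbation back onto the
    eps-ball around the original input x (pixels assumed in [0, 1])."""
    # Random start inside the eps-ball.
    x_adv = x.detach() + torch.empty_like(x).uniform_(-eps, eps)
    x_adv = x_adv.clamp(0.0, 1.0)
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad, = torch.autograd.grad(loss, x_adv)
        with torch.no_grad():
            x_adv = x_adv + alpha * grad.sign()       # ascend the loss
            x_adv = x + (x_adv - x).clamp(-eps, eps)  # project onto eps-ball
            x_adv = x_adv.clamp(0.0, 1.0)             # keep valid pixel range
    return x_adv.detach()
```

DF and CW follow the same attack interface but optimize the perturbation differently (DF linearizes the decision boundary; CW solves a penalized optimization problem), so they are not sketched here.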
Our findings are as follows. The deep models proved highly susceptible to adversarial attacks, which reduced classification accuracy on all datasets. Before applying adversarial methods, we achieved a classification accuracy of 93.6% on brain MRI scans and 99.1% on chest X-rays. Under the DF attack, the accuracy of the VGG16 model showed a maximum absolute decrease of 49.8% on MRI scans and 57.3% on chest X-ray images. With the same magnitude of malicious image perturbation, the PGD algorithm is less effective than the DF and CW adversarial attacks. Among the deep models considered, VGG16 achieves the highest classification accuracy on these datasets and is also the most vulnerable to adversarial attacks. We hope these results will be useful for designing more robust and secure medical deep learning systems.
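The absolute accuracy decreases quoted above amount to comparing test accuracy on clean inputs with test accuracy on attacked inputs. A sketch of such an evaluation loop, assuming a PyTorch model and data loader and reusing the hypothetical pgd_attack from the sketch above:

```python
import torch

def accuracy(model, loader, attack=None):
    """Top-1 accuracy on clean inputs, or on adversarial inputs
    when an attack function (model, x, y) -> x_adv is given."""
    model.eval()
    correct, total = 0, 0
    for x, y in loader:
        if attack is not None:
            x = attack(model, x, y)  # attack needs gradients, so no no_grad here
        with torch.no_grad():
            pred = model(x).argmax(dim=1)
        correct += (pred == y).sum().item()
        total += y.numel()
    return correct / total

# e.g. clean = accuracy(model, test_loader)
#      adv   = accuracy(model, test_loader, attack=pgd_attack)
#      drop  = clean - adv  # absolute accuracy decrease, as reported in the paper
```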
Y. Blinkov—This paper has been supported by the RUDN University Strategic Academic Leadership Program.
© 2023 The Author(s), under exclusive license to Springer Nature Switzerland AG
Cite this paper
Shchetinin, E.Y., Glushkova, A.G., Blinkov, Y.A. (2023). On Effectiveness of the Adversarial Attacks on the Computer Systems of Biomedical Images Classification. In: Vishnevskiy, V.M., Samouylov, K.E., Kozyrev, D.V. (eds) Distributed Computer and Communication Networks. DCCN 2022. Communications in Computer and Information Science, vol 1748. Springer, Cham. https://doi.org/10.1007/978-3-031-30648-8_8