AdverseGen: A Practical Tool for Generating Adversarial Examples to Deep Neural Networks Using Black-Box Approaches

  • Conference paper
Artificial Intelligence XXXVIII (SGAI-AI 2021)

Part of the book series: Lecture Notes in Computer Science (LNAI, volume 13101)


Abstract

Deep neural networks are fragile: they are easily fooled by inputs with deliberately crafted perturbations, which is a key concern for image security. Given a trained neural network, we would like to know whether it has actually learned the concept we intended it to learn, and whether it contains vulnerabilities that could be exploited by attackers. A tool that non-experts can use to test a trained neural network and probe for such vulnerabilities would therefore be valuable. In this paper, we introduce AdverseGen, a tool for generating adversarial examples against a trained deep neural network using black-box approaches, i.e., without using any information about the network architecture or its gradients. Our tool provides customized adversarial attacks for both non-professional users and developers: attacks can be launched through a graphical user interface or from the command line. Moreover, the tool supports different attack goals (targeted and non-targeted) and different distance metrics.

This work was supported by the Research Institute of Trustworthy Autonomous Systems, the Guangdong Provincial Key Laboratory (Grant No. 2020B121201001), the Program for Guangdong Introducing Innovative and Entrepreneurial Teams (Grant No. 2017ZT07X386) and Shenzhen Science and Technology Program (Grant No. KQTD2016112514355531).


Notes

  1. https://eagerpy.jonasrauber.de/
  2. https://github.com/Pikayue11/AdverseGen/
  3. https://github.com/PySimpleGUI/PySimpleGUI
  4. http://pytorch.org
  5. https://github.com/fra31/sparse-imperceivable-attacks
  6. https://github.com/qilong-zhang/Patch-wise-iterative-attack
  7. https://github.com/dgragnaniello/PQP
  8. https://github.com/bethgelab/foolbox/blob/master/foolbox/attacks/


Author information

Corresponding author

Correspondence to Xin Yao.


Copyright information

© 2021 Springer Nature Switzerland AG

About this paper


Cite this paper

Zhang, K., Wu, K., Chen, S., Zhao, Y., Yao, X. (2021). AdverseGen: A Practical Tool for Generating Adversarial Examples to Deep Neural Networks Using Black-Box Approaches. In: Bramer, M., Ellis, R. (eds) Artificial Intelligence XXXVIII. SGAI-AI 2021. Lecture Notes in Computer Science(), vol 13101. Springer, Cham. https://doi.org/10.1007/978-3-030-91100-3_25

  • DOI: https://doi.org/10.1007/978-3-030-91100-3_25

  • Published:

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-030-91099-0

  • Online ISBN: 978-3-030-91100-3

  • eBook Packages: Computer Science, Computer Science (R0)
