Robust Pipeline for Detection of Adversarial Images

Conference paper in Machine Intelligence Techniques for Data Analysis and Signal Processing, part of the book series Lecture Notes in Electrical Engineering (LNEE, volume 997).


Abstract

Deep neural networks (DNNs) have achieved state-of-the-art (SOTA) performance on many image-related tasks such as classification and object detection. Researchers have shown, however, that DNNs can be easily fooled by adversarial examples generated by adding small distortions or perturbations to images. Several attack methods, such as the Fast Gradient Sign Method (FGSM) and Projected Gradient Descent (PGD), have been proposed to craft these perturbations and cause DNNs to misclassify. It is therefore essential to develop strategies for identifying such manipulated images. Keeping this in mind, we propose an end-to-end pipeline capable of detecting adversarially attacked images, i.e., images with perturbations. For a given image, we generate a manipulation mask using ManTra-Net and pass it through a trained binary classifier that distinguishes between perturbed and original images. Experiments on the ImageNet Large Scale Visual Recognition Challenge (ILSVRC) 2012 dataset show that our pipeline achieves an accuracy of 0.986 on the task of detecting adversarially crafted images.
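To make the setup concrete, the sketch below illustrates the two stages described above: crafting an FGSM perturbation for a classifier, then screening an image by passing a ManTra-Net-style manipulation mask to a binary detector. This is a minimal PyTorch sketch under stated assumptions, not the authors' implementation: `mask_model` is a stand-in for a pretrained ManTra-Net (the original is a Keras/TensorFlow model) and `detector` is a hypothetical trained binary classifier.

```python
import torch
import torch.nn.functional as F

def fgsm_attack(model, x, y, epsilon=2 / 255):
    """FGSM: x_adv = clip(x + epsilon * sign(grad_x CE(model(x), y)))."""
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)  # loss against the true labels
    loss.backward()
    return (x + epsilon * x.grad.sign()).clamp(0.0, 1.0).detach()

def detect_adversarial(x, mask_model, detector, threshold=0.5):
    """Pipeline stage 2: manipulation mask -> perturbed/original decision.

    mask_model -- stand-in for a pretrained ManTra-Net: maps images
                  (N, 3, H, W) in [0, 1] to manipulation masks (N, 1, H, W).
    detector   -- trained binary classifier emitting one logit per image.
    """
    with torch.no_grad():
        mask = mask_model(x)                  # per-pixel manipulation evidence
        prob = torch.sigmoid(detector(mask))  # probability the image is perturbed
    return prob.view(-1) > threshold
```

In the paper's setting, the detector would be trained on masks computed from both clean ILSVRC 2012 images and their FGSM- or PGD-perturbed counterparts.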


References

  1. Ouyang W, Zeng X, Wang X, Qiu S, Luo P, Tian Y, Li H, Yang S, Wang Z, Li H et al (2016) Deepid-net: object detection with deformable part-based convolutional neural networks. IEEE Trans Pattern Anal Mach Intell 39(7):1320–1334


  2. Diba A, Sharma V, Pazandeh A, Pirsiavash H, Van Gool L (2017) Weakly supervised cascaded convolutional networks. In: Proceedings of the IEEE conference on computer vision and pattern recognition, pp 914–922


  3. Mrabti W, Baibai K, Bellach B, Haj Thami RO, Tairi H (2019) Human motion tracking: a comparative study. Procedia Comput Sci 148:145–153. The second international conference on intelligent computing in data sciences (ICDS2018)


  4. Kalfaoglu ME, Kalkan S, Alatan AA (2020) Late temporal modeling in 3D CNN architectures with BERT for action recognition. arXiv preprint arXiv:2008.01232


  5. Bulat A, Kossaifi J, Tzimiropoulos G, Pantic M (2020) Toward fast and accurate human pose estimation via soft-gated skip connections. arXiv preprint arXiv:2002.11098


  6. Tao A, Sapra K, Catanzaro B (2020) Hierarchical multi-scale attention for semantic segmentation. arXiv preprint arXiv:2005.10821

  7. Szegedy C, Zaremba W, Sutskever I, Bruna J, Erhan D, Goodfellow I, Fergus R (2014) Intriguing properties of neural networks. In: International conference on learning representations


  8. Evtimov I, Eykholt K, Fernandes E, Kohno T, Li B, Prakash A, Rahmati A, Song D (2017) Robust physical-world attacks on machine learning models. arXiv preprint arXiv:1707.08945

  9. Liu Z, Qi Z, Torr PH (2020) Global texture enhancement for fake face detection in the wild. In: Proceedings of the IEEE/CVF conference on computer vision and pattern recognition (CVPR), June


  10. Moosavi-Dezfooli S-M, Fawzi A, Frossard P (2016) Deepfool: a simple and accurate method to fool deep neural networks. In: Proceedings of the IEEE conference on computer vision and pattern recognition (CVPR), June


  11. Goodfellow IJ, Shlens J, Szegedy C (2014) Explaining and harnessing adversarial examples. arXiv preprint arXiv:1412.6572

  12. Madry A, Makelov A, Schmidt L, Tsipras D, Vladu A (2017) Towards deep learning models resistant to adversarial attacks. arXiv preprint arXiv:1706.06083

  13. Buckman J, Roy A, Raffel C, Goodfellow I (2018) Thermometer encoding: one hot way to resist adversarial examples. In: International conference on learning representations


  14. Brown TB, Mané D, Roy A, Abadi M, Gilmer J (2017) Adversarial patch. arXiv preprint arXiv:1712.09665

  15. Papernot N, McDaniel P, Jha S, Fredrikson M, Celik ZB, Swami A (2016) The limitations of deep learning in adversarial settings. In: 2016 IEEE European symposium on security and privacy (EuroS P), pp 372–387


  16. Sitawarin C, Bhagoji AN, Mosenia A, Chiang M, Mittal P (2018) Darts: deceiving autonomous cars with toxic signs. arXiv preprint arXiv:1802.06430

  17. Grosse K, Manoharan P, Papernot N, Backes M, McDaniel P (2017) On the (statistical) detection of adversarial examples. arXiv preprint arXiv:1702.06280

  18. Gong Z, Wang W, Ku W-S (2017) Adversarial and clean data are not twins. arXiv preprint arXiv:1704.04960

  19. Metzen JH, Genewein T, Fischer V, Bischoff B (2017) On detecting adversarial perturbations. arXiv preprint arXiv:1702.04267

  20. Kurakin A, Goodfellow IJ, Bengio S (2016) Adversarial machine learning at scale. arXiv preprint arXiv:1611.01236


  21. Xu W, Evans D, Qi Y (2017) Feature squeezing: detecting adversarial examples in deep neural networks. arXiv preprint arXiv:1704.01155

  22. Szegedy C, Zaremba W, Sutskever I, Bruna J, Erhan D, Goodfellow I, Fergus R (2013) Intriguing properties of neural networks. arXiv preprint arXiv:1312.6199

  23. Papernot N, McDaniel P, Goodfellow I, Jha S, Celik ZB, Swami A (2017) Practical black-box attacks against machine learning. In: Proceedings of the 2017 ACM on Asia conference on computer and communications security, pp 506–519


  24. Kurakin A, Goodfellow I, Bengio S et al (2016) Adversarial examples in the physical world. arXiv preprint arXiv:1607.02533


  25. Papernot N, McDaniel P, Wu X, Jha S, Swami A (2016) Distillation as a defense to adversarial perturbations against deep neural networks. In: 2016 IEEE symposium on security and privacy (SP). IEEE, pp 582–597


  26. Hossin M, Sulaiman MdN (2015) A review on evaluation metrics for data classification evaluations. Int J Data Mining Knowl Manage Process 5(2):1


  27. Athalye A, Engstrom L, Ilyas A, Kwok K (2018) Synthesizing robust adversarial examples. In: Dy J, Krause A (eds) Proceedings of the 35th international conference on machine learning, vol. 80 of proceedings of machine learning research, PMLR, 10–15 July 2018, pp 284–293


  28. Wu Y, AbdAlmageed W, Natarajan P (2019) ManTra-Net: manipulation tracing network for detection and localization of image forgeries with anomalous features. In: Proceedings of the IEEE conference on computer vision and pattern recognition (CVPR), June 2019


  29. Russakovsky O, Deng J, Su H, Krause J, Satheesh S, Ma S, Huang Z, Karpathy A, Khosla A, Bernstein M, Berg AC, Fei-Fei L (2015) ImageNet large scale visual recognition challenge. Int J Comput Vis (IJCV) 115(3):211–252



Author information

Correspondence to Natesh Reddy.


Copyright information

© 2023 The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd.

About this paper


Cite this paper

Vohra, A., Reddy, N., Dhawale, K., Jain, P. (2023). Robust Pipeline for Detection of Adversarial Images. In: Sisodia, D.S., Garg, L., Pachori, R.B., Tanveer, M. (eds) Machine Intelligence Techniques for Data Analysis and Signal Processing. Lecture Notes in Electrical Engineering, vol 997. Springer, Singapore. https://doi.org/10.1007/978-981-99-0085-5_32
