Abstract
Deep neural networks (DNNs) have achieved state-of-the-art (SOTA) performance on many image-related tasks such as classification and object detection. However, prior work has shown that DNNs can be easily fooled by adversarial examples, which are generated by adding small distortions or perturbations to images. Several attack methods, such as the Fast Gradient Sign Method (FGSM) and Projected Gradient Descent (PGD), have been proposed to craft perturbations that cause DNNs to misclassify instances. It is therefore essential to develop strategies for identifying such fake images. Keeping this in mind, we propose an end-to-end pipeline capable of detecting adversarially attacked images, i.e., images with perturbations. For a given image, we generate manipulation masks using ManTra-Net and pass them through a trained binary classifier that distinguishes between perturbed and original images. Experiments on the ImageNet Large Scale Visual Recognition Challenge (ILSVRC) 2012 dataset show that our pipeline achieves an accuracy of 0.986 on the task of detecting adversarially crafted images.
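To make the two-stage pipeline concrete, the sketch below shows the flow the abstract describes: an FGSM attack perturbs an input image, a ManTra-Net-style mask generator produces a manipulation mask, and a binary classifier labels the image as perturbed or original. This is a minimal PyTorch illustration under our own assumptions, not the authors' implementation: the checkpoint files mantranet.pt and detector.pt and the detect helper are hypothetical placeholders.

```python
import torch
import torch.nn.functional as F
from torchvision.models import resnet18, ResNet18_Weights

def fgsm_attack(model, image, label, eps=8 / 255):
    """Craft an FGSM example: x_adv = clip(x + eps * sign(grad_x loss), 0, 1)."""
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), label)
    loss.backward()
    return (image + eps * image.grad.sign()).clamp(0.0, 1.0).detach()

def detect(image, mask_net, detector):
    """Stage 1: mask generator; stage 2: binary perturbed/original classifier."""
    with torch.no_grad():
        mask = mask_net(image)    # per-pixel manipulation mask (ManTra-Net-style)
        logit = detector(mask)    # single logit: perturbed vs. original
    return torch.sigmoid(logit).item() > 0.5

if __name__ == "__main__":
    victim = resnet18(weights=ResNet18_Weights.DEFAULT).eval()
    x = torch.rand(1, 3, 224, 224)    # stand-in for an ILSVRC-2012 image
    y = torch.tensor([0])             # input normalization omitted for brevity
    x_adv = fgsm_attack(victim, x, y)

    # Hypothetical checkpoints; ManTra-Net weights are not bundled here.
    mask_net = torch.load("mantranet.pt")   # image -> manipulation mask
    detector = torch.load("detector.pt")    # mask  -> perturbed/original logit
    print("adversarial?", detect(x_adv, mask_net, detector))
```

A PGD attack would replace the single FGSM step with several small gradient-sign steps projected back onto the eps-ball; the detection stages are unchanged.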
References
Ouyang W, Zeng X, Wang X, Qiu S, Luo P, Tian Y, Li H, Yang S, Wang Z, Li H et al (2016) Deepid-net: object detection with deformable part-based convolutional neural networks. IEEE Trans Pattern Anal Mach Intell 39(7):1320–1334
Diba A, Sharma V, Pazandeh A, Pirsiavash H, Van Gool L (2017) Weakly supervised cascaded convolutional networks. In: Proceedings of the IEEE conference on computer vision and pattern recognition, pp 914–922
Mrabti W, Baibai K, Bellach B, Haj Thami RO, Tairi H (2019) Human motion tracking: a comparative study. Procedia Comput Sci 148:145–153. The second international conference on intelligent computing in data sciences (ICDS2018)
Kalfaoglu ME, Kalkan S, Alatan AA (2020) Late temporal modeling in 3D CNN architectures with BERT for action recognition. arXiv preprint arXiv:2008.01232
Bulat A, Kossaifi J, Tzimiropoulos G, Pantic M (2020) Toward fast and accurate human pose estimation via soft-gated skip connections. arXiv preprint arXiv:2002.11098
Tao A, Sapra K, Catanzaro B (2020) Hierarchical multi-scale attention for semantic segmentation. arXiv preprint arXiv:2005.10821
Szegedy C, Zaremba W, Sutskever I, Bruna J, Erhan D, Goodfellow I, Fergus R (2014) Intriguing properties of neural networks. In: International conference on learning representations
Evtimov I, Eykholt K, Fernandes E, Kohno T, Li B, Prakash A, Rahmati A, Song D (2017) Robust physical-world attacks on machine learning models. arXiv preprint arXiv:1707.08945
Liu Z, Qi Z, Torr PH (2020) Global texture enhancement for fake face detection in the wild. In: Proceedings of the IEEE/CVF conference on computer vision and pattern recognition (CVPR), June
Moosavi-Dezfooli S-M, Fawzi A, Frossard P (2016) Deepfool: a simple and accurate method to fool deep neural networks. In: Proceedings of the IEEE conference on computer vision and pattern recognition (CVPR), June
Goodfellow IJ, Shlens J, Szegedy C (2014) Explaining and harnessing adversarial examples. arXiv preprint arXiv:1412.6572
Madry A, Makelov A, Schmidt L, Tsipras D, Vladu A (2017) Towards deep learning models resistant to adversarial attacks. arXiv preprint arXiv:1706.06083
Buckman J, Roy A, Raffel C, Goodfellow I (2018) Thermometer encoding: one hot way to resist adversarial examples. In: International conference on learning representations
Brown TB, Mané D, Roy A, Abadi M, Gilmer J (2017) Adversarial patch. arXiv preprint arXiv:1712.09665
Papernot N, McDaniel P, Jha S, Fredrikson M, Celik ZB, Swami A (2016) The limitations of deep learning in adversarial settings. In: 2016 IEEE European symposium on security and privacy (EuroS P), pp 372–387
Sitawarin C, Bhagoji AN, Mosenia A, Chiang M, Mittal P (2018) Darts: deceiving autonomous cars with toxic signs. arXiv preprint arXiv:1802.06430
Grosse K, Manoharan P, Papernot N, Backes M, McDaniel P (2017) On the (statistical) detection of adversarial examples. arXiv preprint arXiv:1702.06280
Gong Z, Wang W, Ku W-S (2017) Adversarial and clean data are not twins. arXiv preprint arXiv:1704.04960
Metzen JH, Genewein T, Fischer V, Bischoff B (2017) On detecting adversarial perturbations. arXiv preprint arXiv:1702.04267
Kurakin A, Goodfellow IJ, Bengio S (2016) Adversarial machine learning at scale. arXiv preprint arXiv:1611.01236
Xu W, Evans D, Qi Y (2017) Feature squeezing: detecting adversarial examples in deep neural networks. arXiv preprint arXiv:1704.01155
Szegedy C, Zaremba W, Sutskever I, Bruna J, Erhan D, Goodfellow I, Fergus R (2013) Intriguing properties of neural networks. arXiv preprint arXiv:1312.6199
Papernot N, McDaniel P, Goodfellow I, Jha S, Celik ZB, Swami A (2017) Practical black-box attacks against machine learning. In: Proceedings of the 2017 ACM on Asia conference on computer and communications security, pp 506–519
Kurakin A, Goodfellow I, Bengio S (2016) Adversarial examples in the physical world. arXiv preprint arXiv:1607.02533
Papernot N, McDaniel P, Wu X, Jha S, Swami A (2016) Distillation as a defense to adversarial perturbations against deep neural networks. In: 2016 IEEE symposium on security and privacy (SP). IEEE, pp 582–597
Hossin M, Sulaiman MdN (2015) A review on evaluation metrics for data classification evaluations. Int J Data Mining Knowl Manage Process 5(2):1
Athalye A, Engstrom L, Ilyas A, Kwok K (2018) Synthesizing robust adversarial examples. In: Dy J, Krause A (eds) Proceedings of the 35th international conference on machine learning, vol. 80 of proceedings of machine learning research, PMLR, 10–15 July 2018, pp 284–293
Wu Y, AbdAlmageed W, Natarajan P (2019) ManTra-Net: manipulation tracing network for detection and localization of image forgeries with anomalous features. In: Proceedings of the IEEE conference on computer vision and pattern recognition (CVPR), June 2019
Russakovsky O, Deng J, Su H, Krause J, Satheesh S, Ma S, Huang Z, Karpathy A, Khosla A, Bernstein M, Berg AC, Fei-Fei L (2015) ImageNet large scale visual recognition challenge. Int J Comput Vis (IJCV) 115(3):211–252