Fragile Neural Network Watermarking with Trigger Image Set

  • Conference paper
Knowledge Science, Engineering and Management (KSEM 2021)

Part of the book series: Lecture Notes in Computer Science (LNAI, volume 12815)


Abstract

Recent studies show that deep neural networks are vulnerable to data poisoning and backdoor attacks, both of which involve malicious fine-tuning of deep models. In this paper, we propose the first black-box fragile neural network watermarking method for detecting malicious fine-tuning. The watermarking process consists of three steps. First, a set of trigger images is constructed from a user-specific secret key. Then, a well-trained DNN model is fine-tuned to classify both the normal images in the training set and the trigger images in the trigger set, using a two-stage alternate training procedure. The fragile watermark is embedded in this way while the model's original classification ability is preserved. The watermarked model is sensitive to malicious fine-tuning and will produce unstable classification results on the trigger images. Finally, the integrity of the network model can be verified by analyzing the output of the watermarked model with the trigger image set as input. Experiments on three benchmark datasets demonstrate that the proposed watermarking method is effective in detecting malicious fine-tuning.
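The abstract outlines a three-step pipeline: derive a trigger image set from a secret key, alternately fine-tune the model on normal and trigger data, and verify integrity from the model's outputs on the trigger set. The Python/PyTorch sketch below illustrates one plausible reading of these steps; the function names, the key-seeded noise construction of the triggers, and all hyper-parameters are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch of the three-step scheme described in the abstract.
# All names and hyper-parameters below are illustrative assumptions.
import numpy as np
import torch
import torch.nn.functional as F


def make_trigger_set(secret_key, n=100, shape=(3, 32, 32), num_classes=10):
    """Step 1: derive a trigger set from a user-specific secret key.

    Here the triggers are key-seeded pseudo-random noise images with
    randomly assigned labels; the paper's exact construction may differ.
    """
    rng = np.random.default_rng(secret_key)
    images = torch.tensor(rng.random((n, *shape)), dtype=torch.float32)
    labels = torch.tensor(rng.integers(0, num_classes, n), dtype=torch.long)
    return images, labels


def alternate_finetune(model, train_loader, triggers, trigger_labels,
                       epochs=5, lr=1e-4):
    """Step 2: two-stage alternate training -- a pass over the normal
    training data, then a step on the trigger set -- so the model keeps
    its original accuracy while memorizing the trigger responses."""
    opt = torch.optim.SGD(model.parameters(), lr=lr)
    model.train()
    for _ in range(epochs):
        for x, y in train_loader:               # stage 1: normal training data
            opt.zero_grad()
            F.cross_entropy(model(x), y).backward()
            opt.step()
        opt.zero_grad()                          # stage 2: trigger image set
        F.cross_entropy(model(triggers), trigger_labels).backward()
        opt.step()
    return model


def verify_integrity(model, triggers, trigger_labels):
    """Step 3: black-box integrity check. Malicious fine-tuning perturbs
    the fragile watermark, so any mismatch on the trigger set signals
    that the model has been tampered with."""
    model.eval()
    with torch.no_grad():
        preds = model(triggers).argmax(dim=1)
    return bool((preds == trigger_labels).all())
```

Because the watermark is fragile by design, even small parameter updates tend to flip some trigger predictions, so a verification mismatch is treated as evidence of tampering; only the secret key and black-box query access are needed to run the check.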



Acknowledgment

This work was supported in part by the National Natural Science Foundation of China under Grants U1936214, U20A20178, 62072114, U20B2051, 61872003.

Author information


Corresponding author

Correspondence to Xinpeng Zhang.


Copyright information

© 2021 Springer Nature Switzerland AG

About this paper


Cite this paper

Zhu, R., Wei, P., Li, S., Yin, Z., Zhang, X., Qian, Z. (2021). Fragile Neural Network Watermarking with Trigger Image Set. In: Qiu, H., Zhang, C., Fei, Z., Qiu, M., Kung, S.Y. (eds.) Knowledge Science, Engineering and Management. KSEM 2021. Lecture Notes in Computer Science (LNAI), vol. 12815. Springer, Cham. https://doi.org/10.1007/978-3-030-82136-4_23


  • DOI: https://doi.org/10.1007/978-3-030-82136-4_23

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-030-82135-7

  • Online ISBN: 978-3-030-82136-4

  • eBook Packages: Computer Science, Computer Science (R0)
