
Manipulable, reversible and diversified de-identification via face identity disentanglement

Published in Multimedia Tools and Applications

Abstract

Face de-identification has long been a focal point of privacy-preserving research. Most existing de-identification methods focus only on the anonymization phase and neglect the importance of deanonymization. Moreover, existing reversible de-identification methods are unsatisfactory in terms of diversity and manipulability. To overcome these limitations, we propose MRDD-FID, short for Manipulable, Reversible and Diversified De-identification via Face Identity Disentanglement. Through face identity disentanglement, the framework modifies identity representations independently while keeping non-identity representations unchanged. Randomized passwords drive the identity modification, ensuring complete randomness in the modification process. By training with Generative Adversarial Networks (GANs), we effectively enhance the realism and diversity of the de-identified results. Furthermore, MRDD-FID can precisely control the degree and direction of de-identification according to user-specified strengths and styles without compromising image quality. Compared to existing methods, MRDD-FID offers higher flexibility and security. Extensive experiments demonstrate the effectiveness and superiority of our method in terms of anonymity, diversity, reversibility and manipulability.
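
As a rough illustration of the pipeline described above, the sketch below shows one way password-conditioned identity disentanglement and recovery could be organized. It is a minimal PyTorch-style sketch under our own assumptions: the encoders id_enc and attr_enc, the generator, the linear password-mixing layer, and the interpolation used for the strength control are all hypothetical, and this is not the authors' released implementation.

    # Minimal sketch (assumption): password-conditioned de-identification via
    # identity/attribute disentanglement. Module names and the way the password
    # perturbs the identity code are illustrative, not the paper's exact design.
    import torch
    import torch.nn as nn

    class PasswordDeID(nn.Module):
        def __init__(self, id_enc, attr_enc, generator, id_dim=512):
            super().__init__()
            self.id_enc = id_enc      # identity encoder (e.g. a face-recognition backbone)
            self.attr_enc = attr_enc  # non-identity encoder (pose, expression, background)
            self.gen = generator      # StyleGAN-like decoder conditioned on (id, attr)
            self.mix = nn.Linear(2 * id_dim, id_dim)  # maps (id, password) -> modified id

        def anonymize(self, img, password, strength=1.0):
            z_id = self.id_enc(img)        # identity representation (to be modified)
            z_attr = self.attr_enc(img)    # non-identity representation (kept unchanged)
            z_id_new = self.mix(torch.cat([z_id, password], dim=1))
            z_id_new = (1.0 - strength) * z_id + strength * z_id_new  # user-specified degree
            return self.gen(z_id_new, z_attr)

        def deanonymize(self, anon_img, password):
            # Recovery is only meaningful with the correct password; a wrong
            # password should simply yield yet another anonymized identity.
            z_id = self.id_enc(anon_img)
            z_attr = self.attr_enc(anon_img)
            return self.gen(self.mix(torch.cat([z_id, password], dim=1)), z_attr)

In this reading, a randomized password (e.g. password = torch.randn(1, 512)) drives the identity modification, the strength argument controls the degree of de-identification, and adversarial (GAN) training is what makes the generated faces realistic and diverse; the concrete architectures and losses are those given in the paper itself.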

Data availability statement

Publicly available datasets were used in this study. The FFHQ and CelebA-HQ datasets can be found at https://github.com/NVlabs/ffhq-dataset and https://github.com/tkarras/progressive_growing_of_gans, respectively.
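
For reference, once the images have been downloaded from the repositories above, a loader along the following lines can feed them to a model. This is a minimal sketch under our own assumptions: the directory layout, file extensions, and the 256x256 resolution are hypothetical and are not requirements stated in the paper.

    # Minimal sketch (assumption): loading pre-downloaded FFHQ / CelebA-HQ images
    # as normalized tensors; paths and resolution are illustrative only.
    from pathlib import Path
    from PIL import Image
    from torch.utils.data import Dataset, DataLoader
    from torchvision import transforms

    class FaceFolder(Dataset):
        def __init__(self, root, size=256):
            self.paths = sorted(Path(root).rglob("*.png")) + sorted(Path(root).rglob("*.jpg"))
            self.tf = transforms.Compose([
                transforms.Resize((size, size)),
                transforms.ToTensor(),
                transforms.Normalize([0.5] * 3, [0.5] * 3),  # scale pixels to [-1, 1]
            ])

        def __len__(self):
            return len(self.paths)

        def __getitem__(self, i):
            return self.tf(Image.open(self.paths[i]).convert("RGB"))

    # Example usage with a hypothetical local path:
    # loader = DataLoader(FaceFolder("data/ffhq"), batch_size=8, shuffle=True)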

Notes

  1. https://github.com/WuJie1010/Facial-Expression-Recognition.Pytorch

Acknowledgements

The work was supported by the National Key R&D Program of China (Grant No. 2020YFB1805400), the National Natural Science Foundation of China (Grant No. 62072063), and the Graduate Student Research and Innovation Foundation of Chongqing, China (Grant Nos. CYB22063 and CYB23045).

Author information

Corresponding author

Correspondence to Di Xiao.

Ethics declarations

Conflicts of interest

The authors declare no conflict of interest.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.


About this article


Cite this article

Xiao, D., Xia, J., Li, M. et al. Manipulable, reversible and diversified de-identification via face identity disentanglement. Multimed Tools Appl (2024). https://doi.org/10.1007/s11042-024-18538-9
