Shared Wasserstein Adversarial Domain Adaptation

Published in Multimedia Tools and Applications

Abstract

In many real-world applications, obtaining labeled data for a specific deep learning task can be prohibitively expensive. We present a framework for unsupervised training of deep neural networks that draws on the adversarial learning paradigm. Our approach uses a cycle-consistency constraint to regularize the generator, and treats the reconstructed samples as "real" samples for the discriminator during classification. This design is motivated by the success of the Wasserstein GAN, whose loss provides well-behaved gradients and a promising generalization bound during training. In parallel, we impose a shared latent-space constraint to align the source domain with its corresponding target domain, enabling knowledge to transfer from source to target even when no labeled target data are available. To strengthen the target-domain classifier, we introduce association chains that link the embeddings of labeled samples to those of unlabeled samples and back again. By encouraging association cycles that return to the class from which they began, and penalizing cycles that end in a different class, we push the network toward accurate predictions. Our method, Shared Wasserstein Adversarial Domain Learning (SWADL), combines these constraints. In extensive evaluations on benchmark datasets such as MNIST, SVHN, and USPS, SWADL consistently outperforms current mainstream methods on unsupervised domain adaptation tasks, addressing the challenge of limited labeled data in real-world scenarios. The code and models are available at https://github.com/Jayee-chen/Adversarial-Domain-Adaptation.git.
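
To make the abstract's components concrete, the following is a minimal sketch in PyTorch. It is not the authors' implementation (that is available at the repository linked above); every function, tensor name, and weighting here is an illustrative assumption. The sketch shows three of the losses the abstract describes: a Wasserstein critic objective in which reconstructions play the role of "real" samples, a cycle-consistency reconstruction penalty, and a walker/visit variant of the association-chain losses.

```python
# Minimal sketch of the losses described in the abstract -- illustrative
# assumptions only, not the authors' implementation. Assumes PyTorch.
import torch
import torch.nn.functional as F


def wasserstein_critic_loss(critic, real, fake):
    """Critic side of the Wasserstein objective: widen the score gap
    between 'real' samples (here, reconstructions) and generated ones."""
    return critic(fake).mean() - critic(real).mean()


def cycle_consistency_loss(x, x_cycled):
    """Cycle consistency: translating to the other domain and back
    should reproduce the original input."""
    return F.l1_loss(x_cycled, x)


def association_loss(src_emb, tgt_emb, src_labels, visit_weight=0.1):
    """Association chains (walker + visit losses): a round trip
    source -> target -> source should land on a source sample of the
    same class it started from."""
    sim = src_emb @ tgt_emb.t()              # (Ns, Nt) similarity scores
    p_st = F.softmax(sim, dim=1)             # source -> target transition
    p_ts = F.softmax(sim.t(), dim=1)         # target -> source transition
    p_sts = p_st @ p_ts                      # round-trip probabilities

    # Walker loss: the round trip should end at any same-class sample.
    same_class = (src_labels[:, None] == src_labels[None, :]).float()
    target_dist = same_class / same_class.sum(dim=1, keepdim=True)
    walker = F.kl_div(torch.log(p_sts + 1e-8), target_dist,
                      reduction="batchmean")

    # Visit loss: associations should cover all target samples evenly.
    visit_prob = p_st.mean(dim=0)
    uniform = torch.full_like(visit_prob, 1.0 / visit_prob.numel())
    visit = F.kl_div(torch.log(visit_prob + 1e-8), uniform,
                     reduction="batchmean")
    return walker + visit_weight * visit


if __name__ == "__main__":
    torch.manual_seed(0)
    src = F.normalize(torch.randn(8, 32), dim=1)   # toy source embeddings
    tgt = F.normalize(torch.randn(6, 32), dim=1)   # toy target embeddings
    labels = torch.randint(0, 3, (8,))             # toy source labels
    print(association_loss(src, tgt, labels).item())
```

In a full training loop these terms would be weighted and combined with the supervised source classification loss, with the critic updated adversarially against the shared feature extractor.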


Data Availability Statement

No data were generated during this study.


Author information

Corresponding author: Yuming Chen.

Ethics declarations

Conflicts of interest

On behalf of all authors, the corresponding author declares that there is no conflict of interest.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.


About this article


Cite this article

Yao, S., Chen, Y., Zhang, Y. et al. Shared Wasserstein adversarial domain adaptation. Multimed Tools Appl (2024). https://doi.org/10.1007/s11042-024-18702-1

