Transfer Learning: Kernel-Based Domain Adaptation with Distance-Based Penalization

  • Conference paper
Pattern Recognition and Machine Intelligence (PReMI 2023)

Abstract

This paper introduces a novel approach to the challenges of transfer learning, which aims to efficiently train a classifier for a new domain using supervised information from similar domains. Traditional transfer learning methods may fail to preserve the discriminative features of the target domain because labelled target data are scarce and because they rely on irrelevant subspaces of the source-domain data distribution, resulting in poor classification performance. To overcome these challenges, the proposed approach, called KDADP, transforms the data distributions of both the source and target domains into a lower-dimensional subspace while preserving their discriminatory information. The KDADP model maximizes between-class variance and minimizes within-class variance with L1 penalization, enabling recovery of the most useful features and reducing the model’s complexity. Experimental results on three real-world domain adaptation datasets demonstrate that the proposed KDADP model significantly improves classification performance and outperforms state-of-the-art primitive, shallow, and deep domain adaptation methods.
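
The abstract describes KDADP only at a high level. The sketch below illustrates the general recipe it outlines: map source and target samples into a shared kernel space, learn a low-dimensional projection that maximizes between-class scatter and minimizes within-class scatter on the labelled source data, and apply an L1 penalty to keep the projection sparse. This is a minimal illustration of that idea, not the authors' KDADP implementation; the function names, the RBF kernel choice, the hyperparameters (gamma, mu, lam, lr), and the plain subgradient-ascent solver are all assumptions made for this example.

```python
# Illustrative sketch only -- NOT the paper's KDADP implementation.
# Idea: project source and target data into a shared kernel subspace that
# maximizes between-class scatter and minimizes within-class scatter on the
# labelled source data, with an L1 penalty encouraging a sparse projection.
import numpy as np

def rbf_kernel_matrix(X, Y, gamma=1.0):
    """RBF kernel between the rows of X and the rows of Y."""
    sq = (X**2).sum(1)[:, None] + (Y**2).sum(1)[None, :] - 2.0 * X @ Y.T
    return np.exp(-gamma * sq)

def scatter_matrices(K_s, y_s):
    """Between-class (S_b) and within-class (S_w) scatter in kernel space."""
    mean_all = K_s.mean(axis=0)
    d = K_s.shape[1]
    S_b, S_w = np.zeros((d, d)), np.zeros((d, d))
    for c in np.unique(y_s):
        K_c = K_s[y_s == c]
        mean_c = K_c.mean(axis=0)
        diff = (mean_c - mean_all)[:, None]
        S_b += K_c.shape[0] * diff @ diff.T      # spread of class means
        centered = K_c - mean_c
        S_w += centered.T @ centered             # spread inside each class
    return S_b, S_w

def fit_projection(X_s, y_s, X_t, dim=10, mu=1.0, lam=0.1,
                   gamma=1.0, lr=1e-3, iters=500, seed=0):
    """Subgradient ascent on trace(A^T S_b A) - mu*trace(A^T S_w A) - lam*||A||_1."""
    y_s = np.asarray(y_s)
    X_all = np.vstack([X_s, X_t])                # shared kernel basis: source + target
    K_s = rbf_kernel_matrix(X_s, X_all, gamma)
    S_b, S_w = scatter_matrices(K_s, y_s)
    rng = np.random.default_rng(seed)
    A = rng.normal(scale=0.01, size=(X_all.shape[0], dim))
    for _ in range(iters):
        grad = 2.0 * (S_b - mu * S_w) @ A - lam * np.sign(A)
        A += lr * grad
        A /= np.linalg.norm(A, axis=0, keepdims=True) + 1e-12   # keep columns bounded
    return A, X_all

def transform(X, X_all, A, gamma=1.0):
    """Embed samples into the learned low-dimensional subspace."""
    return rbf_kernel_matrix(X, X_all, gamma) @ A
```

In a typical domain adaptation pipeline, a simple classifier such as 1-nearest-neighbour would then be fit on transform(X_s, X_all, A) with the source labels and evaluated on transform(X_t, X_all, A); the paper's actual optimization and evaluation protocol may differ from this sketch.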

Author information

Correspondence to Jainendra Prakash.

Copyright information

© 2023 The Author(s), under exclusive license to Springer Nature Switzerland AG

About this paper

Cite this paper

Prakash, J., Ghorai, M., Sanodiya, R. (2023). Transfer Learning: Kernel-Based Domain Adaptation with Distance-Based Penalization. In: Maji, P., Huang, T., Pal, N.R., Chaudhury, S., De, R.K. (eds) Pattern Recognition and Machine Intelligence. PReMI 2023. Lecture Notes in Computer Science, vol 14301. Springer, Cham. https://doi.org/10.1007/978-3-031-45170-6_20

  • DOI: https://doi.org/10.1007/978-3-031-45170-6_20

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-031-45169-0

  • Online ISBN: 978-3-031-45170-6

  • eBook Packages: Computer Science, Computer Science (R0)
