
Discriminative Noise Robust Sparse Orthogonal Label Regression-Based Domain Adaptation

Published in: International Journal of Computer Vision

Abstract

Domain adaptation (DA) aims to enable a learning model trained on a source domain to generalize well to a target domain, despite the mismatch between the two domains' data distributions. State-of-the-art DA methods have so far focused on the search for a latent shared feature space in which source and target domain data can be aligned statistically and/or geometrically. In this paper, we propose a novel unsupervised DA method, namely Discriminative Noise Robust Sparse Orthogonal Label Regression-based Domain Adaptation (DOLL-DA). The proposed DOLL-DA derives from a novel integrated model which searches a shared feature subspace where data labels are orthogonally regressed using a label embedding trick, and source and target domain data are discriminatively aligned statistically through the optimization of repulsive force terms. Furthermore, in minimizing a novel Noise Robust Sparse Orthogonal Label Regression (NRS_OLR) term, the proposed model explicitly accounts for data outliers to avoid negative transfer and introduces a sparsity property when regressing data labels. We carry out comprehensive experiments in comparison with 35 state-of-the-art DA methods on 8 standard DA benchmarks and 49 cross-domain image classification tasks. The proposed DA method demonstrates its effectiveness and consistently outperforms the state of the art, with a margin that reaches 17 points on the CMU PIE dataset. To gain insight into the proposed DOLL-DA, we also derive three additional DA methods based on three partial models of the full model, namely OLR, CDDA+, and JOLR-DA, highlighting the added value of (1) discriminative statistical data alignment; (2) Noise Robust Sparse Orthogonal Label Regression; and (3) their joint optimization through the full DA model. In addition, we perform a time complexity analysis and an in-depth empirical analysis of the proposed DA method in terms of its sensitivity w.r.t. hyper-parameters, convergence speed, impact of the base classifier and of random label initialization, as well as performance stability w.r.t. the target domain data used in training.
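The abstract's two core ingredients, aligning source and target statistics in a learned subspace and regressing labels under an orthogonality constraint, can be illustrated in miniature. The sketch below is not the paper's algorithm: it uses a plain linear-kernel MMD (the squared distance between domain means) as a stand-in for the statistical alignment term, and the classical square orthogonal Procrustes closed form as a stand-in for orthogonal label regression; all function names are illustrative.

```python
import numpy as np

def linear_mmd(Xs, Xt):
    """Squared MMD with a linear kernel: ||mean(Xs) - mean(Xt)||^2.
    A minimal stand-in for a statistical alignment term: it is zero
    when the two domains share the same mean in the current feature space."""
    diff = Xs.mean(axis=0) - Xt.mean(axis=0)
    return float(diff @ diff)

def orthogonal_label_regression(X, Y):
    """Closed-form solution of the (square) orthogonal Procrustes problem:
    the orthogonal W minimising ||X W - Y||_F^2 is U V^T,
    where U S V^T is the SVD of X^T Y."""
    U, _, Vt = np.linalg.svd(X.T @ Y, full_matrices=False)
    return U @ Vt
```

A method in this family would typically alternate between updating the subspace to shrink MMD-style alignment terms and re-estimating target pseudo-labels via the regression; the full DOLL-DA objective additionally introduces repulsive (discriminative) terms, sparsity, and explicit outlier handling, which this sketch omits.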






Author information


Corresponding author

Correspondence to Shiqiang Hu.

Additional information

Communicated by Dengxin Dai.

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

This work was supported by the National Natural Science Foundation of China (61773262, 62006152) and the China Aviation Science Foundation (2022Z071057002, 20142057006). Prof. Liming Chen was in part supported by the French Research Agency, l'Agence Nationale de Recherche (ANR), through the projects Learn Real (ANR-18-CHR3-0002-01), Chiron (ANR-20-IADJ-0001-01), and Aristotle (ANR-21-FAI1-0009-01), as well as by the French national investment priority program PSPC FAIR WASTE project.

Rights and permissions

Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.


About this article


Cite this article

Luo, L., Hu, S. & Chen, L. Discriminative Noise Robust Sparse Orthogonal Label Regression-Based Domain Adaptation. Int J Comput Vis 132, 161–184 (2024). https://doi.org/10.1007/s11263-023-01865-z

