
[Paper Notes] A Survey of Transfer Adaptation Learning


Paper title: "Transfer Adaptation Learning: A Decade Survey"

Paper author: Lei Zhang

Paper link: http://cn.arxiv.org/pdf/1903.04687.pdf


Introduction

In many practical situations, the assumption that source-domain and target-domain data are independent and identically distributed (i.i.d.) no longer holds.


Instance Re-weighting Adaptation

When the training set and the test set come from different distributions, this is usually referred to as sample selection bias or covariate shift.

Instance re-weighting methods aim to infer resampling weights directly, by non-parametrically matching feature distributions across domains.

Intuition-Based Weighting

The raw data are re-weighted directly.

This was first proposed in NLP[1]; the best-known method is TrAdaBoost[2].
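
The TrAdaBoost-style weight update can be sketched in a few lines. This is a simplified reading, not a reference implementation: binary labels in {0, 1}, a decision-stump base learner, and returning only the final instance weights are all assumptions of the sketch.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

def tradaboost_sketch(Xs, ys, Xt, yt, n_rounds=10):
    """Simplified TrAdaBoost-style re-weighting (binary labels in {0, 1}).

    Misclassified source samples are down-weighted, misclassified target
    samples are up-weighted, so later rounds focus on the source data that
    still resembles the target task.
    """
    n, m = len(Xs), len(Xt)
    X = np.vstack([Xs, Xt])
    y = np.concatenate([ys, yt])
    w = np.ones(n + m) / (n + m)                 # initial uniform weights
    beta_src = 1.0 / (1.0 + np.sqrt(2.0 * np.log(n) / n_rounds))

    for _ in range(n_rounds):
        p = w / w.sum()
        h = DecisionTreeClassifier(max_depth=1).fit(X, y, sample_weight=p)
        err = np.abs(h.predict(X) - y)           # 0/1 error per sample
        # weighted error on the target part only
        eps = np.sum(p[n:] * err[n:]) / p[n:].sum()
        eps = np.clip(eps, 1e-8, 0.499)
        beta_tgt = eps / (1.0 - eps)
        # down-weight misclassified source samples, up-weight misclassified target samples
        w[:n] *= beta_src ** err[:n]
        w[n:] *= beta_tgt ** (-err[n:])
    return w                                      # final instance weights
```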

Kernel-Mapping-Based Weighting

The raw data are mapped into a high-dimensional space (e.g., a reproducing kernel Hilbert space, RKHS), and the weighting is performed there.

Distribution Matching

The main idea is to re-weight (resample) the source data so that the means of the source and target data match in the reproducing kernel Hilbert space.

Two main non-parametric criteria are used to measure the distribution discrepancy: the kernel mean matching (KMM) objective and the maximum mean discrepancy (MMD)[4][5].

\[ \begin{array}{l} {\min \limits_{\beta}\left\|E_{x^{\prime} \sim P_{r}^{\prime}}\left[\Phi\left(x^{\prime}\right)\right]-E_{x \sim P_{r}}[\beta(x) \Phi(x)]\right\|} \\ {\text {s.t.} \quad \beta(x) \geq 0, E_{x \sim P_{r}}[\beta(x)]=1} \end{array} \]

Huang et al.[3] first proposed adjusting the weight coefficients \(\beta\) of the source samples so that the KMM objective between the weighted source samples and the target samples is minimized.
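
A minimal numerical sketch of this objective, assuming a linear kernel \(\Phi(x)=x\) and an extra upper bound B on the weights for stability (both are simplifications of the kernelized quadratic program in Huang et al.[3]):

```python
import numpy as np
from scipy.optimize import minimize

def kmm_weights(Xs, Xt, B=10.0):
    """Estimate source re-weighting coefficients beta by matching the
    weighted source mean to the target mean in feature space.
    Phi(x) = x (linear kernel) keeps the sketch short."""
    ns = len(Xs)
    mu_t = Xt.mean(axis=0)                        # target mean embedding

    def objective(beta):
        mu_s = (beta[:, None] * Xs).mean(axis=0)  # weighted source mean
        return np.sum((mu_s - mu_t) ** 2)

    cons = [{"type": "eq", "fun": lambda b: b.mean() - 1.0}]  # E[beta] = 1
    res = minimize(objective, x0=np.ones(ns), method="SLSQP",
                   bounds=[(0.0, B)] * ns, constraints=cons)
    return res.x
```

The resulting weights can then be passed, for example, as sample_weight to any classifier trained on the source data.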

\[ d_{\mathcal{H}}^{2}\left(\mathcal{D}_{s}, \mathcal{D}_{t}\right)=\left\|\frac{1}{M} \sum_{i=1}^{M} \phi\left(x_{i}^{s}\right)-\frac{1}{N} \sum_{j=1}^{N} \phi\left(x_{j}^{t}\right)\right\|_{\mathcal{H}}^{2} \]

The weighted MMD[6] method further accounts for class weight bias.
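
The empirical MMD above can be computed directly; a short numpy sketch with an RBF kernel (the median bandwidth heuristic is an assumption of the sketch):

```python
import numpy as np

def mmd2_rbf(Xs, Xt, gamma=None):
    """Squared MMD between source and target samples with an RBF kernel
    k(x, y) = exp(-gamma * ||x - y||^2)."""
    def sq_dists(A, B):
        return ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)

    if gamma is None:                     # median heuristic (an assumption)
        gamma = 1.0 / np.median(sq_dists(Xs, Xt))

    k_ss = np.exp(-gamma * sq_dists(Xs, Xs))
    k_tt = np.exp(-gamma * sq_dists(Xt, Xt))
    k_st = np.exp(-gamma * sq_dists(Xs, Xt))
    return k_ss.mean() + k_tt.mean() - 2.0 * k_st.mean()
```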

Sample Selection

Representative methods include KMapWeighted[7], which is based on k-means clustering, and TJM[8], which is based on MMD and the \(\ell_{2,1}\)-norm.

Co-training

The main idea is to assume that the data can be characterized by two different views and to train two classifiers independently, one on each view.

Representative methods include CODA[9] and the GAN-based RAAN[10].
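
A toy co-training loop following the two-view idea (the 50/50 feature split, logistic regression learners, and the confidence threshold are assumptions of the sketch, not choices from CODA[9]):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def co_training(X_lab, y_lab, X_unlab, rounds=5, conf=0.9):
    """Two classifiers, one per feature view, label confident unlabeled
    samples for each other and retrain on the growing labeled pool."""
    d = X_lab.shape[1] // 2
    views = [slice(0, d), slice(d, None)]         # naive 50/50 feature split
    X, y, pool = X_lab.copy(), y_lab.copy(), X_unlab.copy()

    for _ in range(rounds):
        clfs = [LogisticRegression(max_iter=1000).fit(X[:, v], y) for v in views]
        if len(pool) == 0:
            break
        probs = [c.predict_proba(pool[:, v]) for c, v in zip(clfs, views)]
        preds = [c.predict(pool[:, v]) for c, v in zip(clfs, views)]
        confident = (probs[0].max(1) > conf) | (probs[1].max(1) > conf)
        if not confident.any():
            break
        # the more confident view supplies the pseudo-label
        pseudo = np.where(probs[0].max(1) >= probs[1].max(1), preds[0], preds[1])
        X = np.vstack([X, pool[confident]])
        y = np.concatenate([y, pseudo[confident]])
        pool = pool[~confident]
    return clfs
```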


Feature Adaptation

Feature adaptation methods aim to find a common feature representation for data from multiple sources (domains).

Feature-Subspace-Based Methods

These methods assume that the data can be represented by low-dimensional linear subspaces, i.e., low-dimensional Grassmann manifolds embedded in the high-dimensional data.

PCA is typically used to construct such subspaces, so that the source and target domains can be viewed as two points on the manifold, connected by a geodesic flow whose length gives their geodesic distance.
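
A small sketch of this view: the PCA bases of the two domains are treated as points on the Grassmann manifold, and the geodesic distance between them is recovered from the principal angles (the subspace dimension d is an assumed hyper-parameter):

```python
import numpy as np

def pca_basis(X, d):
    """Top-d principal directions (columns): one point on the Grassmann manifold."""
    Xc = X - X.mean(axis=0)
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Vt[:d].T                               # shape (features, d)

def grassmann_geodesic_distance(Xs, Xt, d=10):
    Ps, Pt = pca_basis(Xs, d), pca_basis(Xt, d)
    # principal angles between the two subspaces
    sigma = np.linalg.svd(Ps.T @ Pt, compute_uv=False)
    thetas = np.arccos(np.clip(sigma, -1.0, 1.0))
    return np.linalg.norm(thetas)                 # geodesic (arc) length
```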

Feature-Transformation-Based Methods

Feature transformation methods aim to learn a transformation or projection matrix that brings the source- and target-domain data closer under some distribution-discrepancy criterion.

Projection-Based

These methods solve for an optimal projection matrix by reducing the marginal and conditional distribution discrepancies between the domains.

Representative methods include TCA[16] and JDA[17].
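
A compact sketch of a TCA-style projection with a linear kernel (the regularizer mu and subspace dimension k are assumed hyper-parameters; JDA-style conditional alignment would add class-wise MMD terms on top of this):

```python
import numpy as np
from scipy.linalg import eig

def tca_fit(Xs, Xt, k=20, mu=1.0):
    """Learn a projection that pulls source and target closer in MMD terms
    while preserving variance (TCA-style, linear kernel)."""
    ns, nt = len(Xs), len(Xt)
    X = np.vstack([Xs, Xt])
    n = ns + nt
    K = X @ X.T                                   # linear kernel matrix

    # MMD coefficient matrix L
    e = np.concatenate([np.full(ns, 1.0 / ns), np.full(nt, -1.0 / nt)])
    L = np.outer(e, e)

    H = np.eye(n) - np.ones((n, n)) / n           # centering matrix

    # leading eigenvectors of (K L K + mu I)^{-1} K H K
    A = K @ L @ K + mu * np.eye(n)
    B = K @ H @ K
    vals, vecs = eig(np.linalg.solve(A, B))
    idx = np.argsort(-vals.real)[:k]
    W = vecs[:, idx].real
    Z = K @ W                                     # new representations (source rows first)
    return Z[:ns], Z[ns:]
```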

Metric-Based

These methods learn a good distance metric on the labeled source domain so that it can be applied to a related but different target domain.

Representative methods include RTML[20] and CORAL[21].
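
Of these, CORAL[21] is simple enough to sketch: the source features are whitened with their own covariance and re-colored with the target covariance (the identity regularization term eps follows common practice and is an assumption here):

```python
import numpy as np
from scipy.linalg import sqrtm

def coral(Xs, Xt, eps=1.0):
    """Re-color source features so their covariance matches the target's."""
    Cs = np.cov(Xs, rowvar=False) + eps * np.eye(Xs.shape[1])
    Ct = np.cov(Xt, rowvar=False) + eps * np.eye(Xt.shape[1])
    Xs_white = Xs @ np.real(sqrtm(np.linalg.inv(Cs)))   # whiten with source covariance
    return Xs_white @ np.real(sqrtm(Ct))                # re-color with target covariance
```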

Augmentation-Based

These methods assume that the data features fall into three types: common features, source-specific features, and target-specific features.

Representative methods include frustratingly easy domain adaptation[22] and adversarial feature augmentation[23].
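
The frustratingly easy augmentation[22] is literally a feature copy; a sketch:

```python
import numpy as np

def augment(X, domain):
    """Map x to [common, source-specific, target-specific] copies:
    source samples become [x, x, 0], target samples become [x, 0, x]."""
    zeros = np.zeros_like(X)
    if domain == "source":
        return np.hstack([X, X, zeros])
    return np.hstack([X, zeros, X])
```

A single classifier trained on the augmented source data plus the few labeled target samples can then decide, feature by feature, whether to rely on the shared copy or on a domain-specific one.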

Feature-Reconstruction-Based

Representative methods include low-rank reconstruction[24] and LSDT[25].

Feature-Encoding-Based

Representative methods include domain-adaptive dictionary learning approaches[26][27].


Classifier Adaptation

Classifier adaptation aims to learn a general classifier from the labeled data in the source domain together with a small amount of labeled data in the target domain.

Kernel-Classifier-Based

Representative methods include adaptive SVM[28] and domain transfer SVM[29].

Manifold-Regularization-Based

Representative methods include ARTL[30], DMM[31], and MEDA[32].

Bayesian-Classifier-Based

A representative method is kernelized Bayesian transfer learning (KBTL)[33].


Deep Network Adaptation

In 2014, Yosinski et al.[34] studied how transferable the features in different layers of a deep neural network are.

Marginal Distribution Alignment

Representative methods include DDC[35], DAN[36], and JAN[37].
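
In DDC/DAN-style training, a supervised loss on the source data is combined with an MMD penalty between source and target features from an adaptation layer. A minimal PyTorch sketch, where model (shared feature extractor) and classifier (label predictor) are assumed module names, and the linear-kernel MMD with a fixed weight lam is a simplification (DAN uses multi-kernel MMD over several layers):

```python
import torch

def linear_mmd2(feat_s, feat_t):
    """Squared MMD with a linear kernel: distance between feature means."""
    delta = feat_s.mean(dim=0) - feat_t.mean(dim=0)
    return (delta * delta).sum()

def train_step(model, classifier, xs, ys, xt, optimizer, lam=0.5):
    """One DDC/DAN-flavoured step: supervised loss on source + MMD on features."""
    optimizer.zero_grad()
    fs, ft = model(xs), model(xt)                 # shared feature extractor
    loss = torch.nn.functional.cross_entropy(classifier(fs), ys)
    loss = loss + lam * linear_mmd2(fs, ft)       # pull domain features together
    loss.backward()
    optimizer.step()
    return loss.item()
```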

Conditional Distribution Alignment

A representative method is the deep transfer network (DTN)[38].

Autoencoder-Based

A representative method is the marginalized stacked denoising autoencoder (mSDA)[39].


Adversarial Adaptation

These methods reduce the domain discrepancy through an adversarial objective (e.g., a domain discriminator).

Gradient-Reversal-Based

Ganin et al.[40] first showed that domain adaptation can be achieved by simply adding a gradient reversal layer (GRL).
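
A gradient reversal layer is only a few lines in PyTorch: identity in the forward pass, negated (scaled) gradient in the backward pass. A sketch, with lambd as the usual scaling hyper-parameter:

```python
import torch

class GradientReversal(torch.autograd.Function):
    """Identity on the forward pass; multiplies the gradient by -lambda on
    backward, so the feature extractor learns to fool the domain discriminator."""

    @staticmethod
    def forward(ctx, x, lambd=1.0):
        ctx.lambd = lambd
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.lambd * grad_output, None     # no gradient w.r.t. lambd

def grl(x, lambd=1.0):
    return GradientReversal.apply(x, lambd)
```

Placing grl(features) in front of a small domain classifier and training everything with ordinary backpropagation yields the DANN-style minimax discussed next.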

Minimax-Optimization-Based

Ajakan et al.[41] first combined the classification loss with an adversarial objective, proposing the DANN method.

Other methods include ADDA[42], CDAN[43], and MCD[44].

Generative-Adversarial-Network-Based

Representative methods include CyCADA[45] and DupGAN[46].

Benchmark Datasets


References


  1. J. Jiang and C. Zhai, Instance weighting for domain adaptation in nlp, in ACL, 2007, pp. 264–271.

  2. W. Dai, Q. Yang, G. R. Xue, and Y. Yu, Boosting for transfer learning, in ICML, 2007, pp. 193–200.

  3. J. Huang, A. Smola, A. Gretton, K. Borgwardt, and B. Scholkopf, Correcting sample selection bias by unlabeled data, in NIPS, 2007, pp. 1–8.

  4. A. Gretton, K. Borgwardt, M. Rasch, B. Schoelkopf, and A. Smola, A kernel method for the two-sample-problem, in NIPS, 2006.

  5. A. Gretton, K. Borgwardt, M. Rasch, B. Scholkopf, and A. Smola, A kernel two-sample test, Journal of Machine Learning Research, pp. 723–773, 2012

  6. H. Yan, Y. Ding, P. Li, Q. Wang, Y. Xu, and W. Zuo, Mind the class weight bias: Weighted maximum mean discrepancy for unsupervised domain adaptation, in CVPR, 2017, pp. 2272–2281

  7. E. H. Zhong, W. Fan, J. Peng, K. Zhang, J. Ren, D. S. Turaga, and O. Verscheure, Cross domain distribution adaptation via kernel mapping, in ACM SIGKDD, 2009, pp. 1027–1036.

  8. M. Long, J. Wang, G. Ding, J. Sun, and P. S. Yu, Transfer joint matching for unsupervised domain adaptation, in CVPR, 2014, pp. 1410–1417.

  9. M. Chen, K. Q. Weinberger, and J. C. Blitzer, Co-training for domain adaptation, in NIPS, 2011.

  10. Q. Chen, Y. Liu, Z. Wang, I. Wassell, and K. Chetty, Re-weighted adversarial adaptation network for unsupervised domain adaptation, in CVPR, 2018, pp. 7976–7985.

  11. R. Gopalan, R. Li, and R. Chellappa, Domain adaptation for object recognition: An unsupervised approach, in ICCV, 2011, pp. 999–1006

  12. B. Gong, Y. Shi, F. Sha, and K. Grauman, Geodesic flow kernel for unsupervised domain adaptation, in CVPR, 2012, pp. 2066–2073

  13. B. Fernando, A. Habrard, M. Sebban, and T. Tuytelaars, Unsupervised visual domain adaptation using subspace alignment, in ICCV, 2013, pp. 2960–2967.

  14. B. Sun and K. Saenko, Subspace distribution alignment for unsupervised domain adaptation, in BMVC, 2015, pp. 24.1–24.10.

  15. J. Liu and L. Zhang, Optimal projection guided transfer hashing for image retrieval, in AAAI, 2018.

  16. S. J. Pan, I. W. Tsang, J. T. Kwok, and Q. Yang, Domain adaptation via transfer component analysis, IEEE Trans. Neural Networks, vol. 22, no. 2, p. 199, 2011

  17. M. Long, J. Wang, G. Ding, J. Sun, and P. S. Yu, Transfer feature learning with joint distribution adaptation, in ICCV, 2014, pp. 2200–2207.

  18. S. Si, D. Tao, and B. Geng, Bregman divergence-based regularization for transfer subspace learning, IEEE Trans. Knowledge and Data Engineering, vol. 22, no. 7, pp. 929–942, 2010.

  19. A. Gretton, O. Bousquet, A. Smola, and B. Scholkopf, Measuring statistical dependence with hilbert-schmidt norms, in ALT, 2005.

  20. Z. Ding and Y. Fu, Robust transfer metric learning for image classification, IEEE Trans. Image Processing, vol. 26, no. 2, pp. 660–670, 2017.

  21. B. Sun, J. Feng, and K. Saenko, Return of frustratingly easy domain adaptation, in AAAI, 2016, pp. 153–171.

  22. H. Daume III, Frustratingly easy domain adaptation, in arXiv, 2009.

  23. R. Volpi, P. Morerio, S. Savarese, and V. Murino, Adversarial feature augmentation for unsupervised domain adaptation, in CVPR, 2018, pp. 5495–5504.

  24. I. H. Jhuo, D. Liu, D. T. Lee, and S. F. Chang, Robust visual domain adaptation with low-rank reconstruction, in CVPR, 2012, pp. 2168–2175.

  25. L. Zhang, W. Zuo, and D. Zhang, Lsdt: Latent sparse domain transfer learning for visual adaptation, IEEE Trans. Image Processing, vol. 25, no. 3, pp. 1177–1191, 2016.

  26. S. Shekhar, V. Patel, H. Nguyen, and R. Chellappa, Generalized domain-adaptive dictionaries, in CVPR, 2013, pp. 361–368.

  27. F. Zhu and L. Shao, Weakly-supervised cross-domain dictionary learning for visual recognition, International Journal of Computer Vision, vol. 109, no. 1-2, pp. 42–59, 2014.

  28. J. Yang, R. Yan, and A. G. Hauptmann, Cross-domain video concept detection using adaptive svms, in ACM MM, 2007, pp. 188–197.

  29. L. Duan, I. Tsang, D. Xu, and S. Maybank, Domain transfer svm for video concept detection, in CVPR, 2009

  30. M. Long, J. Wang, G. Ding, S. Pan, and P. Yu, Adaptation regularization: a general framework for transfer learning, IEEE Trans. Knowledge and Data Engineering, vol. 26, no. 5, pp. 1076–1089, 2014.

  31. Y. Cao, M. Long, and J. Wang, Unsupervised domain adaptation with distribution matching machines, in AAAI, 2018

  32. J. Wang, W. Feng, Y. Chen, H. Yu, M. Huang, and P. S. Yu, Visual domain adaptation with manifold embedded distribution alignment, 2018.

  33. M. Gonen and A. Margolin, Kernelized bayesian transfer learning, in AAAI, 2014, pp. 1831–1839.

  34. J. Yosinski, J. Clune, Y. Bengio, and H. Lipson, How transferable are features in deep neural networks, in NIPS, 2014.

  35. E. Tzeng, J. Hoffman, N. Zhang, K. Saenko, and T. Darrell, Deep domain confusion: Maximizing for domain invariance, arXiv, 2014

  36. M. Long, Y. Cao, J. Wang, and M. I. Jordan, Learning transferable features with deep adaptation networks, in ICML, 2015, pp. 97–105.

  37. M. Long, H. Zhu, J. Wang, and M. Jordan, Deep transfer learning with joint adaptation networks, in ICML, 2017.

  38. X. Zhang, F. Yu, S. Wang, and S. Chang, Deep transfer network: Unsupervised domain adaptation, in arXiv, 2015.

  39. M. Chen, Z. Xu, K. Weinberger, and F. Sha, Marginalized denoising autoencoders for domain adaptation, in ICML, 2012

  40. Y. Ganin and V. Lempitsky, Unsupervised domain adaptation by backpropagation, in arXiv, 2015.

  41. H. Ajakan, P. Germain, H. Larochelle, F. Laviolette, and M. Marchand, Domain-adversarial neural network, in arXiv, 2015

  42. E. Tzeng, J. Hoffman, K. Saenko, and T. Darrell, Adversarial discriminative domain adaptation, in CVPR, 2017, pp. 7167–7176

  43. M. Long, Z. Cao, J. Wang, and M. I. Jordan, Conditional adversarial domain adaptation, in NIPS, 2018.

  44. K. Saito, K. Watanabe, Y. Ushiku, and T. Harada, Maximum classifier discrepancy for unsupervised domain adaptation, in CVPR, 2018, pp. 3723–3732.

  45. J. Hoffman, E. Tzeng, T. Park, and J. Zhu, Cycada: Cycleconsistent adversarial domain adaptation, in ICML, 2018.

  46. L. Hu, M. Kan, S. Shan, and X. Chen, Duplex generative adversarial network for unsupervised domain adaptation, in CVPR, 2018, pp. 1498–1507.

Source: https://www.cnblogs.com/orzyt/p/10614105.html